Electromagnetic characteristics of biosilica from rice husk

Rice husk, a widely available renewable agricultural resource, can be transformed into effective reinforcing fillers for special concrete and gypsum building materials. Samples of silica from rice husk were synthesized by thermal oxidative pyrolysis, and their electromagnetic and microstructural characteristics were investigated. It was found that rice husk itself is practically transparent to EM waves in the frequency range 0.1-7 GHz, while the products of its thermal oxidative pyrolysis have different microwave-absorbing properties depending on the amount of oxidizing agent used. X-ray powder diffraction showed the predominant presence of amorphous silica in the rice husk ash samples, with small amounts of α-quartz, α-cristobalite and α-tridymite. At a rice husk pyrolysis temperature of about 560 ± 20 °C, the resulting product contains, in addition to amorphous silica and crystalline phases of silicon dioxide, traces of graphite particles, which leads to a sharp increase in dielectric characteristics and effective microwave absorption. When the pyrolysis temperature rises above 700 °C, the EM-wave absorption of such materials decreases. Thus, on the basis of the experiments carried out, the optimal ratios of rice husk to the ammonium nitrate oxidizer were established for obtaining environmentally friendly, low-cost nanostructured biosilica powder additives for concrete and gypsum building compositions with increased effective radio absorption at electromagnetic field frequencies above 1 GHz.

Introduction

Rice husk (RH) is an agricultural waste that accounts for about 20% of the mass of rice produced and is a widespread natural plant raw material for the production of amorphous silica [1,2]. The principal components of rice hulls are cellulose, hemicellulose, lignin and biogenic silicon dioxide [1,3]. It is known that under certain controlled combustion conditions of rice hulls, the main product is amorphous silica with high reactivity, used in the production of concrete [4]. Amorphous silica derived from rice husk is a raw material for ceramic production [5] and composite materials [6], and a component of special concretes [7,8]. Biogenic silica makes up 92-95% of rice husk ash [9]; it is an amorphous modification of silicon dioxide synthesized by plants in their vegetative parts and associated with organelles and plant tissues [2]. Rice husk silica is used abroad as an additive for creating electromagnetic (EM) wave absorbing materials; however, practically nothing is known about the electromagnetic properties of rice husk silica samples obtained under different regimes of oxidative pyrolysis. This article presents the results of studies on the preparation of amorphous silica with a high silicon dioxide content from rice husk by oxidative pyrolysis, and on its electromagnetic, microstructural and crystal-phase characteristics, with a view to its prospective use as a filler for concrete and gypsum building compositions with regulated electromagnetic properties.
The objects of study were samples of rice husk ash (rice growing area: Krasnodar Territory, Abinsky District) obtained by oxidative pyrolysis with various amounts of ammonium nitrate, used as a cheap, readily available solid oxidizer with a high positive oxygen balance that decomposes completely into gaseous products during heat treatment.

Methods

Air-dry rice husk was weighed on an Ohaus Adventurer AR2140 analytical electronic balance (USA) and mixed in the required quantities with finely ground ammonium nitrate powder, NH₄NO₃ (chemically pure, Russian Federation). The resulting mixture was transferred into a 240 mL porcelain crucible and placed on an IKA C-Mag HS7 electric stove (Germany). The mixture was heated at a rate of 20 °C/min. At about 180 °C, decomposition of the organic components of the rice husk began to be observed; at 315 °C, intensive decomposition of ammonium nitrate took place with the formation of oxidizing nitrous gases, leading to rapid burnout of the organic components of the rice husk and a rise in flame temperature. After that, the crucible with ash was heated to 500 °C and held there for 30 minutes. According to [3,11,12], at 500 °C in an air atmosphere almost complete burnout of the organic components of rice husk is observed. The crucible with the rice husk ash sample was then cooled in air to room temperature, and a comprehensive study of the properties of the obtained biosilica powder was performed.

Micrographs of the rice husk ash powder samples were obtained with a Zeiss EVO HD15 scanning electron microscope (Germany) equipped with an X-ray energy-dispersive microanalyzer for qualitative elemental analysis of the samples. Laser particle size analysis of the rice husk ash powders was performed with an Analysette 22 laser particle size analyzer (Germany) in isopropyl alcohol using a liquid dispersion unit. The particle size distribution was plotted by averaging the results of five measurements. X-ray powder diffraction (XRD) analysis of the rice husk ash samples was carried out on a Shimadzu XRD-7000 diffractometer (Japan). Diffraction reflections were identified and crystalline phases were searched for using the Profex 4.0.0 computer program. The combustion temperature of the rice husk-NH₄NO₃ reaction mixture during oxidative pyrolysis was measured with a UT303D digital thermometer (China) by averaging the results of three independent measurements. The bulk dry density of the rice husk ash samples was determined by the pycnometric method. Specific surface area S_sp (m²/g) values for the rice husk ash samples were calculated, in the approximation of non-bonded isolated spherical particles, using the standard relation S_sp = 6 / (ρ·D_p), where D_p is the weighted average nanoparticle size from electron microscopy data and ρ is the crystallographic density of silica (~2.2 g/cm³) [13].
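As a quick numerical check of this relation, the short Python sketch below evaluates S_sp for two hypothetical particle diameters; the diameters are assumptions chosen only to show that the formula reproduces the range of specific surface areas reported in the Results (from about 130 down to 78 m²/g).

```python
def specific_surface_area(d_nm: float, rho_g_cm3: float = 2.2) -> float:
    """S_sp = 6 / (rho * D_p) for non-bonded isolated spheres, in m^2/g."""
    d_cm = d_nm * 1e-7                      # nm -> cm
    s_cm2_per_g = 6.0 / (rho_g_cm3 * d_cm)  # cm^2/g
    return s_cm2_per_g * 1e-4               # cm^2/g -> m^2/g

# Hypothetical diameters bracketing the particle sizes discussed in the Results:
print(round(specific_surface_area(21.0)))  # ~130 m^2/g
print(round(specific_surface_area(35.0)))  # ~78 m^2/g
```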
The electromagnetic characteristics of the rice husk ash samples were investigated by vector network analysis with a Deepace KC901V analyzer (China) in a 10 cm HP-11566A coaxial transmission airline (USA) in the standard way, on composites of 50 wt.% sample in paraffin pressed into toroids with dimensions 4×7 mm and a thickness of 4.5 mm. The concentration of the investigated silica filler samples and the paraffin matrix were chosen for convenient comparison with published data on previously investigated microwave-absorbing fillers of different types. The electromagnetic characteristics of the samples were calculated from the measured scattering parameters S₁₁ and S₂₁ by the Nicolson-Ross-Weir method [14].
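The NRW extraction itself is compact enough to sketch in code. The following Python outline illustrates the standard algorithm rather than the authors' actual implementation; the function name and the principal-branch choice of the logarithm (reasonable for electrically thin samples such as the 4.5 mm toroids used here) are assumptions.

```python
import numpy as np

def nrw_extract(f_hz, s11, s21, length_m):
    """Complex permittivity and permeability from S11/S21 of a filled TEM
    coaxial airline via the Nicolson-Ross-Weir equations (principal branch)."""
    lam0 = 299_792_458.0 / np.asarray(f_hz)        # free-space wavelength, m
    K = (s11**2 - s21**2 + 1.0) / (2.0 * s11)      # quadratic for reflection coeff.
    root = np.sqrt(K**2 - 1.0 + 0j)
    gamma = np.where(np.abs(K + root) <= 1.0, K + root, K - root)  # |gamma| <= 1
    T = (s11 + s21 - gamma) / (1.0 - (s11 + s21) * gamma)          # transmission
    inv_lam = 1j * np.log(1.0 / T) / (2.0 * np.pi * length_m)      # 1/Lambda
    mu_r = lam0 * inv_lam * (1.0 + gamma) / (1.0 - gamma)
    eps_r = lam0**2 * inv_lam**2 / mu_r
    return eps_r, mu_r
```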
Results

The calculated yield of rice husk ash (RHA) as a function of the ratio of ammonium nitrate to rice husk is shown in Figure 1a. It can be seen that with an increasing amount of ammonium nitrate in the reaction mixture, the ash yield decreases and reaches 20.4%. The main reason is the oxidative combustion of carbon-containing compounds and residues.

Discussion

According to [15], the interaction of ammonium nitrate with carbon materials upon heating can be expressed by the following chemical equation:

NH₄NO₃ + 0.5C → N₂↑ + 2H₂O + 0.5CO₂↑

The heat effect of this exothermic oxidative reaction varies from 3.5 to 3.8 kJ/kg depending on the type of carbon raw material [15]. Under our conditions, the addition of ammonium nitrate as an oxidizing agent led to a pronounced increase in the heat effect of the rice husk pyrolysis reaction. Thus, at an NH₄NO₃/RH ratio of 5.5/1, the combustion temperature of the reaction mixture, measured with the UT303D digital thermometer, reached 710 °C. Such high reaction temperatures should promote effective burnout of the organic components of rice husk and can lead both to transformation of the crystalline phases of SiO₂ and to sintering of the resulting particles.

A broad composite peak in the powder diffraction patterns of the prepared RHA, in the range of reflection angles 2θ from 15° to 40°, clearly indicates the amorphous nature of the obtained silica. According to the powder X-ray diffraction data, the prepared samples of amorphous silica also contain α-tridymite, α-cristobalite, α-quartz and graphite, and probably traces of α-Si₃N₄ and SiC(3C). Laser particle analysis of the prepared silica samples (Figure 2) showed that they are nanostructured micropowders. It can be concluded that the use of an excess of ammonium nitrate for thermal oxidation of rice husk leads to an increase in nanoparticle size and noticeable sintering of agglomerates of silica microparticles. According to the SEM data, the obtained silica samples consist of aggregates of fused nanoparticles, and the weighted average nanoparticle diameter in the studied samples increases systematically with increasing excess of NH₄NO₃. Also, with an increase in the m(NH₄NO₃)/m(RH) ratio, the calculated specific surface area of the rice husk silica samples decreases from 130 to 78 m²/g due to intense sintering of the nanoparticles and their aggregates under the high temperatures of the oxidative thermolysis reaction. Nevertheless, the measured bulk density of the rice husk ash samples varies from 362 to 419 kg/m³, with a maximum at m(NH₄NO₃)/m(RH) = 3.75. Direct-current measurements showed a sharp increase in DC electrical conductivity for the silica samples with m(NH₄NO₃)/m(RH) ratios of 1.25 and 2.5. Taking into account the X-ray energy-dispersive analysis data, we assume that a high content of residual carbon, about 1.6 wt.%, in the form of graphite nanoparticles (according to the powder XRD data) leads to the observed significant increase in the dielectric constant and dielectric loss tangent of these samples.

The studied samples of rice husk and the silica derived from it showed no magnetic properties and are purely dielectric materials. For dry rice husk, the measured average values of the dielectric constant and dielectric loss tangent in the range 0.1-7 GHz are 2.2 ± 0.2 and 0.09 ± 0.02, which makes it possible to consider it a microwave-transparent material. For a sample of rice husk biosilica obtained by thermolysis without NH₄NO₃ as an oxidizing agent, the dielectric constant and dielectric loss tangent in the range 0.1-7 GHz are 3.14 ± 0.10 and 0.03 ± 0.01, respectively. At the optimal ratio m(NH₄NO₃)/m(RH) = 2.5, the dielectric constant and dielectric loss tangent of the composite are 4.19 ± 0.37 and 0.12 ± 0.02, respectively. The values obtained for the composites derived from the biogenic silica sample with m(NH₄NO₃)/m(RH) = 2.5 allow it to be used as an environmentally friendly, low-cost microwave-absorbing reinforcing filler for lightweight concrete compositions absorbing electromagnetic radiation at frequencies above 2 GHz, in good agreement with the data of [16]. As a microwave-absorbing filler, the carbon-containing amorphous silica from rice hulls is inferior to previously studied domestic magnetic radio-absorbing materials such as magnetic microspheres [17], Fe-Si-Al alloy powder [18] or spinel-series ferrites [19,20]; however, it is an easily produced, low-cost, ecological dielectric microwave-absorbing material. It is important to note that the microwave-absorbing nanostructured amorphous biosilica samples obtained here were prepared by a simple, low-tech method from a cheap biorenewable natural raw material, rice husk.

Conclusions

Thus, by pyrolysis treatment of rice husk with NH₄NO₃, nanostructured amorphous silica powders were obtained in which, according to powder X-ray diffraction analysis, α-tridymite, α-cristobalite, α-quartz and graphite are present in much smaller amounts alongside the amorphous silica. The presence of an impurity of the electrically conductive graphite phase in the silica samples obtained at m(NH₄NO₃)/m(RH) = 1.25 and 2.5 leads to a noticeable increase in the dielectric constant and dielectric loss tangent in comparison with the other silica samples obtained from thermally destructed rice husk. The amorphous rice husk silica with enhanced dielectric characteristics, obtained at the optimal ratio of the NH₄NO₃ oxidant, can be used as a low-cost reinforcing nanostructured micropowder dielectric microwave-absorbing filler for concrete and gypsum compositions.
Virucidal Action Against Avian Influenza H5N1 Virus and Immunomodulatory Effects of Nanoformulations Consisting of Mesoporous Silica Nanoparticles Loaded with Natural Prodrugs

Background: Combating infectious diseases caused by influenza virus is a major challenge due to its resistance to available drugs and vaccines, side effects, and the cost of treatment. Nanomedicines are being developed to allow targeted delivery of drugs to attack specific cells or viruses.

Materials and Methods: In this study, mesoporous silica nanoparticles (MSNs) functionalized with amino groups and loaded with the natural prodrugs shikimic acid (SH), quercetin (QR) or both were explored as novel antiviral nanoformulations targeting the highly pathogenic avian influenza H5N1 virus. The immunomodulatory effects were investigated in in vitro tests, and anti-inflammatory activity was determined in vivo using the acute carrageenan-induced paw edema rat model.

Results: The prodrugs alone or the MSNs displayed weaker antiviral effects, as evidenced by virus titers and plaque formation, compared to the nanoformulations. The MSNs-NH₂-SH and MSNs-NH₂-SH-QR2 nanoformulations displayed a strong virucidal action by inactivating the H5N1 virus. They also induced strong immunomodulatory effects: they inhibited cytokine (TNF-α, IL-1β) and nitric oxide production, by approximately 50% for MSNs-NH₂-SH-QR2 (containing both SH and QR). Remarkable anti-inflammatory effects were observed in vivo in the acute carrageenan-induced rat model.

Conclusion: Our preliminary findings show the potential of nanotechnology for the application of natural prodrug substances to produce a novel safe, effective, and affordable antiviral drug.

Introduction

Viral infections and the emergence of new strains resistant to therapeutic treatments are a significant global health challenge.1 Influenza A viruses (IAVs) are responsible for serious health threats worldwide and have considerable socioeconomic impacts.2-4 The highly pathogenic avian influenza virus (H5N1) can be transmitted to humans, and an increased rate of infections is observed.5 Combating IAVs is one of the major challenges to date, as they have high genetic variability and new virus strains appear6 that are resistant to available vaccines and drug inhibitors.7-10 Thus, new antiviral agents that are safe and affordable are urgently needed to prevent such medical threats. An ideal antiviral drug would have the following characteristics: broad spectrum, minimal toxicity, and a virucidal mechanism of activity against various viruses.11 Nanomedicine is a rapidly expanding interdisciplinary branch of medicine in which nanotechnology is used to create nanostructures that carry targeted, controlled-release nanodrugs.12 The use of nanomaterials to inhibit viral diseases through direct interaction of nanoparticles with viruses has been reported previously.1,13,14 The antiviral effect is enhanced by the specific physicochemical properties of the nanomaterials used: small size, high surface area, and surface modification. Examples are silver,15,16 copper iodide,17 graphene oxide,18 gold,19-21 and polymers such as glycans.22 Because of their ability to encapsulate drugs with large payloads, the opportunity opens for developing promising antiviral nanoformulations as drug delivery systems, thus reducing negative side effects.23-25 Among the explored nanomaterials, modified silica nanoparticles have recently attracted much attention as antiviral agents.26-28
In the present study, we tested the antiviral effect of mesoporous silica nanoparticles (MSNs), i.e., nanoporous silica spheres 100-200 nm in diameter with pores loaded with natural prodrugs. The particle size is comparable to the virus size, and therefore their efficient attachment to viruses is expected. MSNs have been studied as a versatile drug delivery vehicle in in vitro and in vivo studies of various diseases.29-33 They can be administered via injection (intravenous, hypodermic, or intramuscular) or orally, and are eliminated through the urine and feces.34 MSNs have been used to enhance solubility, targeting ability, and therapeutic activity with combinations of two or more drugs for dual therapeutic efficiency. Recently, the FDA classified silica as Generally Recognized as Safe (GRAS), allowing it to be used in cosmetics and food additives.35 A rationally tailored delivery system for antiviral drugs must consider the cytotoxicity of the materials used. MSNs are biocompatible and considered non-toxic36 compared to other nanomaterials employed as antimicrobial agents; silver nanoparticles, for example, exhibit considerable cytotoxic effects even at low concentrations.37

In this study, we employed MSNs as a nanostructured targeted carrier of the antiviral prodrug compounds shikimic acid (SH) and quercetin (QR) (shown schematically in Figure 1H). A novel nanoformulation against the highly pathogenic influenza virus H5N1 was developed. We chose these natural compounds for their safety profiles, low cost, and small size. SH was selected because it is used as a precursor in the production of oseltamivir, commercially known as Tamiflu,38,39 and of other pharmaceutical agents; it is a precursor for several classes of pharmaceutical compounds, including anti-pyretic, anti-oxidant, anti-coagulant, anti-thrombotic, anti-cancer and anti-inflammatory agents. Thus, it is promising to investigate the possibility of applying nanoformulations loaded with SH as a drug delivery system against the H5N1 virus. QR was selected for its inhibition of influenza viruses.40,41 It has shown many pharmacological effects in vitro and in vivo as one of the strongest anti-oxidant compounds. Several studies have shown that secretion of pro-inflammatory immunomodulators, such as the cytokines interleukin (IL)-1β and tumor necrosis factor-alpha (TNF-α) and nitric oxide (NO) free radicals, increases considerably during influenza infections. These pro-inflammatory mediators play a crucial role in the regulation of the innate and adaptive immune systems.42-46 Within this context, a proper anti-inflammatory milieu is a key factor in counteracting viral infection.47 Thus, there is a need to modulate the inflammatory immune response alongside virucidal activity against H5N1 viruses. Therefore, we also evaluated the immunoregulatory and anti-inflammatory effects of the proposed prodrug-loaded MSNs. Combining antiviral and anti-inflammatory effects may have a positive impact on treating influenza viruses.

Materials and Methods

Materials

The list of all used materials is given in the supplementary information.

The loading ratio of the silica nanoparticles to prodrug was 1:3. For loading QR into the modified nanoparticles, the following procedure was used: 300 mg of MSNs-NH₂ powder was added to the solvent containing 100 mg of QR (Sigma-Aldrich) and stirred for 24 h at room temperature.
The solvent was subsequently evaporated at 50°C using a Rotavapor (Büchi, Switzerland); the product was re-suspended in ultra-pure water to remove unloaded molecules and dried at 40°C for 12 h in an oven. The evaporation and resuspension were repeated several times. The resulting product was denoted the MSNs-NH₂-QR nanoformulation. For loading SH into MSNs-NH₂, the following procedure was used: 300 mg of MSNs-NH₂ powder was added to the prepared solvent containing 100 mg of SH, then EDC and NHS cross-linkers were added to the mixture, which was stirred for 24 h at room temperature. The solvent was evaporated at 50°C using the Rotavapor and the product re-suspended in ultra-pure water to remove unloaded molecules. This was repeated several times. After drying at 40°C for 12 h in an oven, the resulting product was denoted the MSNs-NH₂-SH nanoformulation. To prepare a combined nanoformulation, the SH loading procedure was performed first, followed by the QR loading procedure. The resulting product was denoted the MSNs-NH₂-SH-QR1 nanoformulation. The materials used are summarized in Table 1.

Estimation of Prodrug Concentration

The amount of prodrug in the nanoformulations was determined by simultaneous thermal analysis (STA) via weight-loss analysis. The STA results for the modified materials before and after loading were compared. The calculated percentage of each prodrug in a nanoformulation is listed in Table 1. Hereafter, when the concentration of prodrug in a nanoformulation is mentioned, it is the equivalent amount of prodrug used to prepare the desired concentration, based on the calculated amount of each drug loaded into the nanoparticles. The following equations were used to calculate the loading content from the thermogravimetric weight-loss data, where Δm denotes the fractional weight loss of the corresponding material: SH wt.% = (Δm(MSNs-NH₂-SH) − Δm(MSNs-NH₂)) × 100; QR wt.% = (Δm(MSNs-NH₂-QR) − Δm(MSNs-NH₂)) × 100; and SH-QR wt.% = (Δm(MSNs-NH₂-SH-QR) − Δm(MSNs-NH₂)) × 100.
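In code form this weight-loss arithmetic is a one-liner. The Python sketch below is illustrative only: the helper name and the weight-loss fractions are assumed values, chosen to reproduce the 3 wt.% SH loading reported in the STA results, not measured data.

```python
def loading_wt_percent(wl_loaded: float, wl_carrier: float) -> float:
    """Prodrug loading (wt.%) as the excess STA weight loss of the loaded sample."""
    return (wl_loaded - wl_carrier) * 100.0

wl_carrier = 0.12  # assumed weight-loss fraction of MSNs-NH2 (illustrative)
wl_loaded = 0.15   # assumed weight-loss fraction of MSNs-NH2-SH (illustrative)
print(round(loading_wt_percent(wl_loaded, wl_carrier), 2))  # 3.0 wt.%
```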
Figure 1. Morphological structure, size measurements, and schematic representation of preparation steps. Notes: TEM image of nanoparticles (A), SEM image of nanoparticles (B), NTA analysis of particle size (C-G), and schematic representation of the synthesis method at every stage (H). First, the nanoparticles were synthesized, then modified with amino groups (-NH₂) via a post-synthesis method. Next, prodrugs were loaded onto the modified nanoparticles to obtain nanoformulations. For combined nanoformulations, we used MSNs-NH₂-SH as the starting material to load quercetin, and the resulting nanoformulation was MSNs-NH₂-SH-QR1. Abbreviations: TEM, transmission electron microscopy; SEM, scanning electron microscopy; MSNs, mesoporous silica nanoparticles; MSNs-NH₂, amino-modified MSNs; MSNs-NH₂-SH, MSNs-NH₂ loaded with SH; MSNs-NH₂-QR, MSNs-NH₂ loaded with QR; MSNs-NH₂-SH-QR1, MSNs-NH₂ loaded with SH and QR; SH, shikimic acid; QR, quercetin.

Knowing the drug loading content, we prepared concentrations from the nanoformulations equal to the concentrations of the free prodrugs. In the case of the nanoformulations, the effective prodrug concentration was estimated as the equivalent amount of drug (µg/mL) in the nanoformulation, from the related loading percentage of each prodrug alone or in combination, to prepare the stock solution(s). In the case of the pure prodrugs, the calculated amount was weighed directly and stock solution(s) prepared. Mix 1 and mix 2 were prepared by simply mixing the weighed amount of each drug in one solution under stirring to obtain a stock solution. We also calculated the molar concentrations, listed in Tables S1 and S2; these were calculated from the µg/mL values used throughout the experiments.

Material Characterization Techniques

Images of the prepared MSNs were obtained with a high-resolution transmission electron microscope (HR-TEM; JEM 2100, JEOL, Tokyo, Japan) and a field-emission scanning electron microscope (FE-SEM; Ultra Plus, Zeiss, Jena, Germany). Elemental analysis was acquired using a QUANTAX EDS unit (Bruker, Billerica, MA, USA) connected to the FE-SEM. Sputter coating of the samples for FE-SEM imaging was performed on a Bal-Tech SCD 005 and/or a Q150T ES coater (Quorum Technologies Ltd, East Sussex, UK). Powder X-ray diffraction (XRD) patterns were collected on an X'Pert PRO system (PANalytical, Marietta, GA, USA). Surface area and pore volume properties were characterized using a Gemini 2360 analyzer (Micromeritics, Norcross, GA, USA). FTIR spectra were recorded on a Fourier-transform infrared spectrometer (Bruker Optics, Billerica, MA, USA) equipped with an attenuated total reflectance unit (ATR; Platinum ATR-Einheit A 255). Simultaneous thermal analysis coupled with differential scanning calorimetry (DSC) and connected to FTIR analysis (STA 499 F1 Jupiter, NETZSCH-Feinmahltechnik GmbH, Selb, Germany) was used to determine the drug loading, the crystalline state, and the gases evolved on heating the samples. Zeta potential measurements (Malvern Zetasizer, Malvern, UK) were performed to determine the type of charge on the nanoparticles in the prepared suspensions. Particle size distributions were obtained by nanoparticle tracking analysis (NTA) with a NanoSight instrument (NS500, NanoSight, UK).

Cells and Virus Used in the Study

Madin-Darby canine kidney (MDCK) cells were cultured in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% fetal bovine serum, penicillin (100 U/mL), and streptomycin (0.1 mg/mL) and incubated at 37°C in a humidified atmosphere of 5% CO₂. The MDCK cells were kindly provided by Dr. Richard Webby (Department of Virology and Molecular Biology, St. Jude Children's Research Hospital, USA). Use of the cell line had ethical approval from the ethical committee at NRC, Egypt. The cells were supplied as a confluent sheet in a 75 cm² tissue culture flask, propagated to confluence for several passages, harvested in aliquots, and stored in liquid nitrogen.

Biocompatibility and Cytotoxicity Evaluation

To evaluate the in vitro cell viability of the prepared nanocarriers, the MTT assay was employed according to Mosmann.50

Biological Infectivity Assay

In a 12-well tissue culture plate, MDCK cells were pretreated with 100 µL of DMEM supplemented with 0.2% bovine serum albumin. The cells were infected at a multiplicity of infection (MOI) of 0.1 with A/duck/Egypt/Q4596D/2012 (H5N1) virus for 1 h at 37°C. Culture medium (500 µL) containing the safe concentration of each tested sample was added to each well: 50 µg/mL for MSNs and MSNs-NH₂; the equivalent of 75 µg/mL of prodrug for MSNs-NH₂-SH, MSNs-NH₂-QR, and MSNs-NH₂-SH-QR1; and 75 µg/mL for SH, QR, and the mixed pure prodrugs. In preliminary prescreening, the combined nanoformulation MSNs-NH₂-SH-QR1 did not attenuate the virus; therefore, we doubled its concentration, to the equivalent of 150 µg/mL of prodrug, and labeled it MSNs-NH₂-SH-QR2. Plates were incubated at 37°C with 5% CO₂ for 24 h and 48 h post infection. Untreated virus was included in each plate as a control.
Virus titration was assessed by measuring haemagglutination (HA) titers in the supernatant of infected cells in the presence and absence of the tested compounds at 24 and 48 h post infection (hpi). Briefly, 50 μL of PBS was aliquoted across a 96-well U-shaped plate (Greiner Bio-One, Germany). Next, 50 μL from the infected cells was added to the first well and two-fold serial dilutions were performed across the plate. Finally, 50 μL of 0.5% chicken RBCs was added to all wells and the plate was shaken to ensure mixing. The plate was incubated for 30 min at room temperature before reading the HA titer. The HA titer of the virus was calculated as the reciprocal of the highest dilution of virus that caused complete agglutination of chicken RBCs. Negative results (no agglutination) appeared as dots in the center of the round-bottom wells. The percentage of hemagglutination titer inhibition was calculated as: % inhibition = (HA titer of untreated virus control − HA titer of treated virus) / HA titer of untreated virus control × 100.

To confirm the results obtained from the biological assay, fresh preparations of the tested compounds were tested against H5N1 virus at different MOIs (0.01, 0.001, and 0.0001). Cells in a 96-well tissue culture plate were infected with 100 µL of A/duck/Egypt/Q4596D/2012 (H5N1) virus at the different MOIs for 1 h at 37°C. During this incubation, serial dilutions of the tested compounds were prepared in a 96-well U-shaped plate in DMEM supplemented with 0.2% bovine serum albumin. Next, 100 µL of each dilution was added to the infected cells and incubated for 24 h post infection in a humidified incubator at 37°C. We identified the lowest dilution of each tested compound that resulted in 100% inhibition of virus detection by HA assay.

Plaque Reduction Assay and Mechanisms of Action

The antiviral activities of the tested compounds were also determined by plaque reduction assay.51,52 Briefly, MDCK cells were seeded in 6-well culture plates (10⁵ cells/mL) and incubated for 24 h at 37°C in 5% CO₂. Previously titrated H5N1 virus (46×10⁶ PFU/mL) was diluted to the optimal virus dilution giving countable plaques and mixed with the safe concentration (for MDCK) of each tested material. The virus was incubated for 1 h at 37°C before being added to the cells. Growth medium was removed from the 6-well plates and the virus-compound mixtures were inoculated in duplicate. After a 1 h contact time for virus adsorption, 3 mL of DMEM supplemented with 2% agarose, 1% antibiotic-antimycotic mixture, and 4% bovine serum albumin (BSA, Sigma) was added to the cell monolayer. The plates were left to solidify and incubated at 37°C until viral plaques formed (3 days). Formalin (10%) was added to each well for 1 h and the overlay removed. Fixed cells were stained with 0.1% crystal violet in distilled water. Untreated virus was included in each plate as a control. Finally, plaques were counted and the percentage reduction in virus count was recorded as follows: % inhibition = (viral count untreated − viral count treated) / viral count untreated × 100.

To estimate the concentration that inhibits 50% of PFU (IC50) for the H5N1 virus, we performed the plaque reduction assay with different equivalent concentrations of prodrug(s): 10, 25, 50, and 75 µg/mL for MSNs-NH₂-SH, and 20, 50, 100, and 150 µg/mL for MSNs-NH₂-SH-QR2.
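The two percentage formulas above, and the IC50 estimate, reduce to a few lines of arithmetic. The Python sketch below is illustrative: the helper names are ours, and the dose-response values are assumed (only the tested concentrations for MSNs-NH₂-SH come from the text), so the interpolated IC50 will not exactly match the measured 42.2 µg/mL reported later in the Results.

```python
import numpy as np

def percent_inhibition(untreated: float, treated: float) -> float:
    """(untreated - treated) / untreated * 100; applies to HA titers or PFU counts."""
    return (untreated - treated) / untreated * 100.0

def ic50_log_interp(conc_ug_ml, inhibition_pct):
    """IC50 by log-linear interpolation of a monotone dose-response curve."""
    return 10.0 ** np.interp(50.0, inhibition_pct, np.log10(conc_ug_ml))

# HA example: control agglutinates to the 7th two-fold dilution, treated to the 2nd.
print(percent_inhibition(2**7, 2**2))        # 96.875% titer inhibition

conc = np.array([10.0, 25.0, 50.0, 75.0])    # tested MSNs-NH2-SH doses, ug/mL
inhib = np.array([20.0, 35.0, 60.0, 100.0])  # assumed % PFU reduction values
print(ic50_log_interp(conc, inhib))          # ~38 ug/mL for this assumed curve
```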
The mechanism affecting viral replication was assayed according to Kuo et al.53 MDCK cells were cultivated in a 6-well plate (10⁵ cells/mL) for 24 h at 37°C. Virus was applied directly to the cells (100 µL of the optimal virus dilution, 10⁻⁴) and incubated for 1 h at 37°C. Non-adsorbed viral particles were removed by washing the cells three successive times with supplement-free medium. The nanoformulations (MSNs-NH₂-SH and MSNs-NH₂-SH-QR2) were added, and after a 1 h contact time, 3 mL of DMEM supplemented with 2% agarose was added to the cell monolayer. Plates were left to solidify and incubated at 37°C until viral plaques appeared. Cell monolayers were fixed in 10% formalin solution for 1 h and stained with crystal violet. Virus control wells containing infected MDCK cells without treatment were included in each plate. Plaques were counted and the percentage reduction in plaque formation relative to the virus control wells was calculated.

The viral adsorption mechanism was assayed according to Zhang et al54 with minor modifications. MDCK cells were cultivated in a 6-well plate (10⁵ cells/mL) for 24 h at 37°C. Each examined nanoformulation was applied in 200 µL of medium without supplements and co-incubated with the cells for 2 h at 4°C. Non-adsorbed nanoformulation particles were removed by washing the cells three successive times with supplement-free medium; the diluted H5N1 virus was then co-incubated with the pretreated cells for 1 h, after which 3 mL of DMEM supplemented with 2% agarose was added. Plates were left to solidify and incubated at 37°C to allow viral plaque formation. The plaques were fixed and stained as described above to calculate the percentage reduction in plaque formation compared to virus control wells, which comprised untreated MDCK cells directly infected with H5N1.

The virucidal mechanism was assayed according to Schuhmacher et al.55 MDCK cells were cultivated in 6-well plates (10⁵ cells/mL) for 24 h at 37°C. A total volume of 200 µL of serum-free DMEM containing 100 µL of the optimal H5N1 virus dilution and 100 µL of tested compound was mixed. After 1 h of incubation at room temperature, the mixture was added to the pre-washed MDCK cell monolayer. After a 1 h contact time, the DMEM overlay was added to the cell monolayer. Plates were left to solidify and incubated at 37°C to allow viral plaque formation. The plaques were fixed and stained as described above to calculate the percentage reduction in plaque formation. This value was compared to virus control wells comprising cells infected with virus that had not been pretreated with the tested material.

Isolation and Culture of Splenocytes

Splenocytes were isolated from male Swiss albino mice (8-12 weeks old) obtained from the Animal House Colony of the National Research Centre, Cairo, Egypt. Briefly, mice were euthanized by cervical dislocation; their spleens were excised and gently homogenized to obtain single-cell splenocyte suspensions using RPMI-1640 medium supplemented with 10% (v/v) heat-inactivated fetal calf serum (FCS), 1000 U/mL penicillin and 1000 μg/mL streptomycin (complete medium). Red blood cells were lysed by resuspending the spleen cells in ammonium chloride lysis buffer (144 mM ammonium chloride, 17 mM Tris, pH 7.2) and incubating on ice for 10 min. Splenocytes were washed twice with phosphate-buffered saline (PBS) and then resuspended in complete medium.

Cell Viability and Proliferation Assay

The effect of the prepared materials on splenocyte viability and proliferation in response to a T cell mitogen (phytohemagglutinin; PHA) was evaluated using the MTT assay.56
Freshly prepared splenocytes (2 × 10⁵ cells/200 µL/well) were cultured in a 96-well U-bottom microtiter plate (Nunc) in complete medium. Cells were treated in the presence or absence of a final concentration of 25, 50, or 100 µg/mL of nanoparticles, the equivalent prodrug concentration in nanoformulations, or pure prodrugs alone or mixed (using 0.5% DMSO), in triplicate, and then incubated at 37°C in a 5% CO₂ atmosphere. To test the immunoregulatory effects on splenocyte stimulation, PHA (5 µg/mL) was added to the culture medium to induce T cell proliferation for 24 h. Nanoparticles, prodrugs alone or mixed, and the equivalent amount of drug in nanoformulations were added at 25 µg/mL. The proliferative responses to PHA+MSNs and to MSNs alone served as controls. After 72 h, the cells were pelleted and the medium removed by centrifugation. MTT salt solution (5 mg/mL) was added to each well and incubated at 37°C for 4 h. Formazan crystals were dissolved by adding 100 µL of 10% SDS before measuring absorbance at 570 nm in a spectrophotometer with a reference wavelength of 690 nm. Cell viability was expressed using the following formula with optical density (OD): % of viable cells = (OD of test sample / OD of MSNs) × 100. The splenocyte stimulation index was determined as the ratio of OD values in test samples versus the PHA+MSNs control.
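A minimal Python sketch of these two read-outs, with hypothetical triplicate optical densities (the function names and all numbers below are assumptions, for illustration only):

```python
import numpy as np

def percent_viable(od_test, od_msns_control) -> float:
    """% viable cells = mean OD(test) / mean OD(MSNs control) * 100."""
    return float(np.mean(od_test) / np.mean(od_msns_control) * 100.0)

def stimulation_index(od_test, od_pha_msns_control) -> float:
    """LSI = mean OD(test) / mean OD(PHA+MSNs control)."""
    return float(np.mean(od_test) / np.mean(od_pha_msns_control))

od_test = [0.81, 0.79, 0.84]     # assumed background-corrected ODs at 570 nm
od_control = [0.85, 0.86, 0.84]  # assumed MSNs-control ODs
print(round(percent_viable(od_test, od_control), 1))  # ~95.7% viability
```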
Isolation of Peritoneal Macrophages

Mice were sacrificed by cervical dislocation and the abdominal cavity was lavaged aseptically with 5 mL of RPMI-1640 medium. The peritoneal cavity was then gently massaged and the medium aspirated. Cells were pelleted by centrifugation and re-suspended in RPMI-1640 medium containing 10% FCS; these were considered peritoneal macrophages.57

Nitric Oxide and Cytokine Measurements

To assess the immunomodulatory effect in vitro, we stimulated the mouse intraperitoneal macrophages with lipopolysaccharide (LPS), an outer-membrane compound of Gram-negative bacteria that promotes the activation and differentiation of macrophages upon exposure.58,59 Activation by LPS results in increased production of pro-inflammatory factors, such as cytokines (TNF-α and IL-1β) and NO.60 Peritoneal macrophage cell suspensions (1 × 10⁵ cells/200 μL) were plated in 96-well microplates. The cells were treated in the presence or absence of 25 µg/mL of prodrugs, MSNs, MSNs-NH₂, or the equivalent amount of prodrug in nanoformulations (0.5% DMSO), in triplicate, for 4 h before the addition of 1 µg/mL LPS from E. coli strain 0111:B4. After 24 h of incubation, cell culture supernatants were collected to evaluate NO and cytokine levels. The production of NO in the supernatant was determined by measuring the quantity of nitrite via the Griess reaction,61 and TNF-α and IL-1β were detected by enzyme-linked immunosorbent assay (ELISA) according to the manufacturer's instructions.

Anti-Inflammatory Activity Screening

Anti-inflammatory activity was determined in vivo using the acute carrageenan-induced paw edema rat model,62 characterized by acute accumulation of fluid and hyperalgesia in the paw following carrageenan administration. This model allows the anti-inflammatory potential to be assessed by quantifying changes in the size of the paw edema. The inflammation induced by carrageenan injection is thought to be a biphasic process. In the early stage, histamine and serotonin are produced at high levels, peaking at 3 h with the release of kinin-like compounds. In the late stage, prostaglandins, proteases, and lysozymes are produced.63

Male Wistar rats weighing 150-180 g were obtained from the Animal House Colony of the National Research Centre (Cairo, Egypt) and housed under standardized conditions (room temperature 23±2°C; relative humidity 55±5%; 12 h light/dark cycle) with free access to tap water and standard chow throughout the whole experimental period. The rats were divided into five groups of six animals each. All experimental protocols were performed in accordance with the recommendations for the proper care and use of laboratory animals, following the regulations of the ethical committee of the National Research Centre (protocol number: 16156). MSNs-NH₂, MSNs-NH₂-SH-QR2, SH-QR mix 2, and indomethacin were dissolved in saline at 10 mg/kg and given intraperitoneally 2 h before induction of inflammation. Carrageenan paw edema was induced by subcutaneous injection of a freshly prepared 1% carrageenan solution into the subplantar tissue of the right hind paw. The thickness of the paw was measured with a digital caliper at 1, 4, and 24 h and compared to the initial hind paw thickness of each rat to determine the edema thickness.

Statistical Analysis

Data are expressed as means ± SD of triplicates and were analyzed by one-way ANOVA with Fisher's least significant difference (LSD) post hoc test; p < 0.05 was considered a significant difference.

Morphological and Elemental Analysis of MSNs and Nanoformulations

The structures of the nanoparticles were in good agreement with those reported in a previous study.48 HR-TEM and FE-SEM images (Figure 1A and B) showed that the MSNs were well dispersed and had uniform, nearly spherical shapes with an average diameter of ~100 nm. EDS analysis (Figure S1) confirmed the elemental content of all prepared materials. The prodrugs are organic, carbon-containing compounds; therefore, the carbon percentage is a good indicator of their loading into MSNs. The carbon content changed from 5.9 wt.% for the MSNs to 15.1 wt.% for MSNs-NH₂, 15.4 wt.% for MSNs-NH₂-SH, 20.6 wt.% for MSNs-NH₂-QR, and 21.5 wt.% for MSNs-NH₂-SH-QR2 (Figure S1 and Table 2). This observation reveals the successful loading of the prodrugs into the MSN particles.

Nanoparticle Tracking Analysis (NTA)

The mean size was 118 nm (MSNs), 116 nm (MSNs-NH₂), 77 nm (MSNs-NH₂-SH), 136 nm (MSNs-NH₂-QR), and 149 nm (MSNs-NH₂-SH-QR1) (Figure 1C-G and Table 2). For MSNs-NH₂-SH-QR2, the only difference was that twice the amount of MSNs-NH₂-SH-QR1 was used; it is therefore not listed separately in the characterizations. The reduction in size for MSNs-NH₂-SH is an unexpected observation but could be due to some surface reactions. As expected, the size of MSNs-NH₂ gradually increased after loading, which may indicate some segregation of the prodrug on the surface or be due to low drug loading.

XRD and DSC Analysis

In the case of prodrugs loaded in the nanopores of amorphous silica, one would expect them to take on an amorphous structure similar to the host. Molecules on the surface, without the constraint of the host material, may take on a crystalline structure. As seen from the XRD patterns (Figure 2A), after loading of SH onto MSNs, no peaks corresponding to SH were detected, which may indicate low loading of this prodrug. In contrast, several small peaks were detected after loading QR alone or in combination, which may indicate that some QR molecules remain on the exterior surface of the nanoparticles and crystallize. As the detected peaks are small, it seems that only a negligible amount of prodrug crystallized on the MSN surface.
Simple physical mixing of the prodrugs and MSNs resulted in high-intensity peaks corresponding to prodrug not embedded in the nanopores. The DSC plots (Figure 2B) show that, for the loaded drugs, there were no endothermic peaks at the melting temperatures of SH (175°C) and QR (320°C). This observation confirms that most of the loaded prodrug is situated in the nanopores, where confinement restricts crystallization (Figure 2A). The DSC results in our study are consistent with other reports showing the potential to suppress the crystallization of amorphous drugs by enclosing them in MSNs.68,69 The DSC plots also show some exothermic peaks at 290°C, which decrease as the prodrug loading increases; these may correspond to surface reactions of the modified nanoparticles. For the nanoformulations, a shoulder is seen at approximately 400°C, which may correspond to gradual decomposition of the prodrug.

Simultaneous Thermal Analysis

To further quantify the amount of prodrug loaded onto the MSNs, we performed STA. We calculated the wt.% of -NH₂ groups and the amount of prodrug based on weight loss (Figure 3A and Table 2). The amount of aminopropyl (-NH₂) groups was 10.15 wt.%, indicating successful functionalization of the nanoparticles with aminopropyl groups. The loading percentages were 3 wt.% for MSNs-NH₂-SH, 22.45 wt.% for MSNs-NH₂-QR, and 24.41 wt.% for MSNs-NH₂-SH-QR. These results give clear evidence that the MSNs are suitable for encapsulating two or more drugs. The same type of MSNs showed a high loading capacity for the prodrug thymoquinone as a drug delivery system for brain cancers.32

FTIR Characterization

The functional groups detected by Fourier-transform infrared (FTIR) spectroscopy are shown in Figure 3B. The siliceous framework is represented by the peaks observed at 1075, 960, 805, and 450 cm⁻¹, in agreement with previous reports.70,71 Modification with amino groups in MSNs-NH₂ leads to new peaks at 730 and 692 cm⁻¹. Peaks at 1558 and 1486 cm⁻¹ correspond to -NH₂ asymmetric bending72 and C-H asymmetric and symmetric bending vibrations,64 respectively. The low loading of SH (MSNs-NH₂-SH) leads to the appearance of the 948 cm⁻¹ band. In the case of QR alone (MSNs-NH₂-QR) or in combination (MSNs-NH₂-SH-QR), several peaks corresponding to QR were observed; for comparison, the spectra of SH and QR are presented in Figure S2. The results demonstrate successful loading with QR. Some of the FTIR peaks, as well as the XRD patterns, indicate that some QR remains on the surface of the nanoparticles.

STA-FTIR Analysis

As mentioned, our prodrugs are organic substances, and during decomposition of such organic compounds CO₂ is the main gas evolved. As illustrated in Figure S3, STA-FTIR showed that the intensity of the CO₂ peak (2200 cm⁻¹) at 320°C was 0.14 absorbance units (AU) for MSNs. MSNs-NH₂ had a CO₂ intensity of 0.15 AU at a similar temperature (320°C). For MSNs-NH₂-SH, the CO₂ peak had an AU value similar to MSNs-NH₂, which could be explained by the small SH loading (3 wt.%); a shift of the CO₂ evolution to approximately 410°C was observed. For MSNs-NH₂-QR and MSNs-NH₂-SH-QR, the CO₂ peaks were characterized by 0.29 and 0.49 AU at 410°C. Generally, the loading of prodrugs onto MSNs led to a shift in the evolution temperature, which may correspond to the decomposition of the prodrugs.
Surface Charge Properties of Nanoparticles and Nanoformulations

The surface charge properties of MSNs, MSNs-NH₂, and the nanoformulations in aqueous solution are illustrated in Figure 3C. All samples had a positive zeta potential at acidic pH from 2.5 to 5. As the pH increased towards the physiological value (7.5), a low positive zeta potential was observed; this can be attributed to the abundant silanol groups on the nanoparticles, which become deprotonated at pH 7.5 and give a negative zeta potential, in agreement with previous results.73 An exception was the unmodified nanoparticles, which had a negative zeta potential, as expected. Further increasing the pH into the alkaline range (pH 10 and 12.5) led to negative zeta potentials. The highest negative zeta value was recorded for MSNs-NH₂-SH-QR (−52.5 mV) at pH 10, and for MSNs-NH₂-SH-QR and MSNs-NH₂-SH (−52.8 mV) at pH 12.5. Up to a pH of 8, modification of the nanoparticles increased the potential, with the addition of SH having no effect on the potential and loading with QR decreasing it. The data are in agreement with previous results for MSNs functionalized and loaded with drug molecules.74,75

Antiviral Evaluations

Cytotoxicity Screening Studies

When exploring any antiviral therapy, the toxic effect of the compounds/materials used should be taken into consideration. Therefore, we evaluated the cytotoxicity of the MSNs and MSNs-NH₂, all nanoformulations, and the pure prodrugs. MSNs and MSNs-NH₂ were almost non-toxic to MDCK cells up to a nanoparticle dose of 50 µg/mL (Figure 4A). The toxic effect was dose-dependent, in agreement with a recent study;76 a 500 µg/mL nanoparticle dose resulted in marked toxicity.

Figure 3. Notes: STA analysis of nanoparticles, modified nanoparticles, nanoformulations, and combined nanoformulations (A). Thermal analysis of the dried powders was employed to detect the weight loss from the materials (nanoparticles and prodrugs) at high temperature. The dried powder of each sample was heated up to 800°C, losing water content, the organic compounds used in surface modification, and the loaded prodrugs. The MSNs are an inorganic material that is stable at high temperature; this helps to calculate the drug loading amount. The drug loading % in nanoformulations was calculated from the difference in weight loss between the modified nanoparticles and the loaded nanoformulations. FTIR spectra of nanoparticles, modified nanoparticles, nanoformulations, combined nanoformulations, and pure prodrugs (B). Zeta potential measurements of the materials in aqueous suspensions from acidic to alkaline pH (C).

For the prodrugs and nanoformulations (Figure 4B), the cytotoxicity depended on the prodrug used. At a concentration of 150 µg/mL of prodrug, or the equivalent amount of prodrug in a nanoformulation, moderate toxicity was detected for MSNs-NH₂-SH (~65% viability), which may be due to its low loading content, and for SH/QR mix2 (~50% viability). At a concentration of 75 µg/mL, none of the tested samples decreased cell viability to 50% or less. Therefore, we selected the safe concentration of 75 µg/mL of prodrug, or the equivalent amount in nanoformulations (with the exception of MSNs-NH₂-SH-QR2, which contained 150 µg/mL), for the subsequent antiviral studies.

Inhibition of H5N1 Virus Titers Using Nanoformulations in vitro

The percentage reduction in hemagglutination titer for all samples is shown in Figure 4C, expressed relative to the virus control (no treatment).
After 24 hpi, all tested samples exhibited full inhibition of virus titer, with no significant differences among them. With a longer incubation time of the infected cells with the tested compounds (48 hpi), differences in titer inhibition emerged. For the nanoparticles, MSNs and MSNs-NH₂ inhibited virus titers by 17% and 7%, respectively. SH and QR exhibited weak titer inhibition of 14% and 15%, respectively. The mixed prodrugs exhibited higher titer inhibition than the separate prodrugs: SH/QR mix2, 28%, and SH/QR mix1, 42%. Among the nanoformulations, MSNs-NH₂-SH-QR2 exhibited nearly full inhibition of titers (~98%), and MSNs-NH₂-SH had a high inhibition effect of approximately 75%; MSNs-NH₂-QR had a moderate effect of approximately 40%. Interestingly, MSNs-NH₂-SH-QR2 had a strong effect (98% inhibition) compared to MSNs-NH₂-SH-QR1. This effect is not attributable to the prodrug content but to the amount of nanoparticles: twice the amount of prodrug-loaded nanoparticles leads to a roughly twofold stronger inhibition effect, as more nanoparticles are available to interact with the virus. In contrast, doubling the SH-QR mixed prodrug concentration did not double the inhibition effect. This observation shows the importance of the nanostructured carrier itself.

Figure 4. Cytotoxicity of materials tested for cell viability by MTT assay in MDCK cells. Notes: As a function of nanoparticle and modified-nanoparticle concentrations (A). As a function of prodrug and nanoformulation concentrations equal to the pure prodrugs used alone (B). SH/QR mix2 (pure prodrug mixture) and MSNs-NH₂-SH-QR2 are the same materials as SH/QR mix1 and MSNs-NH₂-SH-QR1 but in twice the amount; see Table 1. Antiviral activity against H5N1 virus (C). The safe concentration (equivalent to 75 µg/mL of free compound or of prodrug in nanoformulations) was used for all materials; the safe concentration used for the unmodified and modified nanoparticles was 50 µg/mL. Data are presented as mean ± SD. *Significant differences at p < 0.05. The concentrations of MSNs and MSNs-NH₂ were prepared directly from the powders of both nanoparticles. The concentrations of the MSNs-NH₂-SH, MSNs-NH₂-QR, MSNs-NH₂-SH-QR1, and MSNs-NH₂-SH-QR2 nanoformulations were calculated from the prodrug loaded into the nanoparticles as equivalents of the free prodrugs (SH, QR, SH/QR mix1, and SH/QR mix2). The free prodrug concentrations were prepared directly from the powders of the pure prodrugs without any calculation.

Notably, the difference in the effects of the nanoformulations compared to the prodrugs appeared only at 48 hpi. This may be related to the nature of the virus cycle (i.e., division and rapid replication of influenza virus), which is time dependent. These events occur between 24 and 48 hpi,77 and viruses produce several proteins for the replication process between 24 and 30 hpi; thus, the prodrugs act efficiently during this period. Our results are in line with previous data on the time dependence of titer reduction.78 A strong activity of the nanoformulations compared to the nanoparticles and prodrugs was observed. Combining nanoparticles with drugs may therefore be preferable to the common strategy of combining multiple antiviral drugs, which can result in adverse drug reactions and requires careful precautions.79
Furthermore, based on the virucidal effect discussed in the following sections, inactivation of the virus structure may proceed through disruption of the virion protein capsid and its genome; the integrity of the viral particle itself could thus be affected.

Inhibition of H5N1 Virus Plaque Formation During in vitro Tests with Nanoformulations

To further evaluate the antiviral action of the tested nanoformulations, we employed the plaque reduction assay, which is accepted as the gold standard for antiviral studies of viruses such as influenza A.80 The plaque-forming units (PFU) of H5N1 for the infected MDCK cells in the presence of the nanoparticles, prodrugs, and nanoformulations are shown in Figure 5A. MSNs inhibited 32±2% of PFU at 24 hpi compared to 16.6±1.15% for MSNs-NH₂; both can be considered non-significant effects. The difference in action of the two nanoparticles is in agreement with recently published data in which the viral inhibition effect is associated with the interaction of the virus with the nanoparticles and the surface properties of the silica nanoparticles.27 The difference in surface charge of the nanoparticles seems to be the reason for the observed differences: at physiological pH 7.4 (Figure 3C), MSNs have a negative charge, in contrast to the positive charge of MSNs-NH₂. This may lead to electrostatic interaction between the negative MSNs and positively charged amino acids of the glycoproteins in the viral envelope, followed by charge transfer between them.26 For the prodrugs, weaker effects were obtained: the percentage inhibition was 24% for SH, 12% for QR, 11% for SH/QR mix1, and 14% for SH/QR mix2. The nanoformulations exhibited strong inhibition of PFU: 98.5±1.23% for MSNs-NH₂-SH, 83.0±0.23% for MSNs-NH₂-QR, and 95.2±0.3% for MSNs-NH₂-SH-QR2. MSNs-NH₂-SH-QR1 had a weaker effect (27%). MSNs-NH₂-SH, MSNs-NH₂-QR, and MSNs-NH₂-SH-QR2 exhibited significantly stronger effects (p<0.05) than all the other materials used. Regarding the effect of the doubled amount of particles in MSNs-NH₂-SH-QR2 compared to MSNs-NH₂-SH-QR1, we observed enhanced inhibition of PFU, a trend similar to that for titer inhibition. Therefore, an increased concentration of nanoparticles in the formulation induces a stronger inhibition effect. The plaque reduction assay is an accurate method due to the direct quantification of infectious virus from in vitro cell culture;81 the results are therefore particularly relevant for assessing real antiviral effects compared to the HA assay, which detects both infectious and non-infectious virus particles. A strong antiviral effect against H5N1 was found for the nanoformulations MSNs-NH₂-SH and MSNs-NH₂-SH-QR2. It is known that the phenolic structure of SH results in several pharmacological effects, including antiviral, antioxidant, anticancer, and antibacterial activities.82 Additionally, regarding the role of the prodrug, the antiviral effects of SH could be due to its phenolic chemical structure, as reported for anti-influenza natural prodrug substances with phenolic structures from medicinal plants.83,84 However, the nanoformulations were more efficient than SH in free form, and the unloaded nanoparticles showed no antiviral effect. In particular, significant inhibition of H5N1 virus by MSNs-NH₂-SH was observed even at a small loading of 3 wt.%. This may be attributed to a targeting effect, in which the prodrug is transferred directly from the nanoformulation to the virus surface when they are in close contact.
Mutual attraction of the virus and the nanoformulation particle is plausible due to interaction with capsid proteins through the carboxylic groups of the SH molecules. Another factor contributing to attraction could be electrostatic effects, as the nanoformulations had negative zeta potentials (at basic pH), as shown in the zeta potential results (Figure 3C). The results discussed below address in more detail the main mechanism of action of MSNs-NH₂-SH and MSNs-NH₂-SH-QR2 against the H5N1 virus.

The IC50 of Effective Nanoformulations

The results presented in Figure 5B show that MSNs-NH₂-SH significantly inhibited PFU at all concentrations used, with an IC50 of 42.2 µg/mL; full inhibition (100%) of PFU was obtained with 75 µg/mL. The IC50 value for MSNs-NH₂-SH-QR2 was 72.3 µg/mL; however, this nanoformulation reached only 86.5±0.5% inhibition at a nanoparticle concentration of 150 μg/mL, in contrast to MSNs-NH₂-SH. These findings confirm that the SH-based nanoformulation has superior antiviral activity against A/duck/Egypt/Q5569D/2012 (H5N1) influenza virus compared to QR loaded in combination, even though a double amount of nanoparticles was used. Therefore, SH is highly recommended as a prodrug candidate for novel nanotherapeutic formulations against infectious diseases. For comparison with an established antiviral drug, sensitivity testing showed that the H5N1 virus was sensitive to zanamivir, with an IC50 < 1.66 µg/mL after 48 hpi.

Mechanism of the Nanoformulations' Antiviral Action

The stage of viral infection affected (a direct effect on the virus, an effect during replication after the virus attaches to host cells, or an effect during adsorption to host cells) plays an important role in targeting viruses.16,19,28,85 To investigate the underlying mechanism of action of the effective nanoformulations (MSNs-NH₂-SH and MSNs-NH₂-SH-QR2) against H5N1 virus, three main possible antiviral mechanisms representing the viral-infection stages were considered: (i) blocking access of the virus to the cells by blocking host-cell receptors to hinder the initial step of infection, i.e., inhibiting viral entry (viral adsorption); (ii) interference with intracellular viral replication (viral replication); and (iii) direct inactivation of the viral particle (virucidal activity). These modes of action could account for antiviral activities either independently or in combination.85 Experiments based on the plaque reduction assay are a standard method for evaluating these mechanisms; the assay was performed in three different ways to estimate how the nanoformulations affect each of the three viral-infection stages. The results are described in Figure 6A, and a schematic representation of the virus inactivation mechanism is shown in Figure 6B. Concerning the viral adsorption mechanism, when the virus was treated with MSNs-NH₂-SH-QR2, a negligible reduction in plaque formation was detected, while a small inhibition of plaques was found with MSNs-NH₂-SH treatment, indicating that the latter nanoformulation reduces viral adsorption to MDCK cells to a certain extent (Figure 6A, a and b). Regarding suppression of intracellular viral replication, H5N1 virus treated with MSNs-NH₂-SH produced relatively few plaques compared to treatment with MSNs-NH₂-SH-QR2. The formation of a small number of plaques compared to the untreated control, especially in the case of MSNs-NH₂-SH, indicates the possibility of inhibiting the virus via this mechanism to some extent (Figure 6A, c and d).
Mechanism of the Nanoformulation's Antiviral Action

The viral-infection stages (direct effect on the virus, during replication after the virus attaches to host cells, or during adsorption to host cells) play an important role in targeting viruses. 16,19,28,85 To investigate the underlying mechanism of action of the effective nanoformulations (MSNs-NH2-SH and MSNs-NH2-SH-QR2) against H5N1 virus, three main possible antiviral mechanisms representing the viral-infection stages were considered: (i) blocking access of the virus to the cells by blocking the host cell receptor to hinder the initial step of infection, thereby inhibiting viral entry (viral adsorption); (ii) interference with intracellular viral replication (viral replication); and (iii) direct inactivation of the viral particle (virucidal activity). The above modes of action could account for antiviral activities either independently or in combination. 85 Experiments based on the plaque reduction assay are a standard method to evaluate the mechanisms of viral action. This assay was performed in three different ways to estimate how the nanoformulations could affect the three viral-infection stages. The results are described in Figure 6A, and a schematic representation of the virus inactivation mechanism is shown in Figure 6B. Concerning the viral adsorption mechanism, when the virus was treated with MSNs-NH2-SH-QR2, a negligible reduction in plaque formation was detected. A small inhibition of plaques was found with MSNs-NH2-SH treatment, indicating that the latter nanoformulation reduces viral adsorption to MDCK cells to a certain extent (Figure 6A, a and b). Regarding the suppression of intracellular viral replication, H5N1 virus treated with MSNs-NH2-SH produced relatively small plaques compared to treatment with MSNs-NH2-SH-QR2. The formation of a small number of plaques compared to the untreated control, especially in the case of MSNs-NH2-SH, indicates the possibility of inhibiting the virus via this mechanism to some extent (Figure 6A, c and d). Concerning the virucidal mechanism (iii), the absence of plaque formation confirms that the main action of both nanoformulations is virucidal activity (Figure 6A, e and f). The virucidal mechanism is probably due to a direct and strong attraction between the nanoformulations and the virus glycoproteins 26 present on the surface of H5N1, thus sequestering it at early infection stages. We assume that the antiviral effect of the nanoformulations against H5N1 is caused by targeting the viruses, which leads to direct contact between the virus and the nanoformulation particle and, as a consequence, its inactivation. Thus, mechanism (iii) from the above is presumably active. Direct interaction is facilitated by the fact that the nanoformulation particle and the virus are of similar size. Attraction and contact are presumably caused by the interaction between the carboxylic groups (leading to the negative charge of SH in the nanoformulations) and the amine groups of the glycoproteins (such as HA) on the virus surface. The SH or QR molecules migrating from the nanoformulation to the virus may effectively inhibit viral proteins and prevent penetration of the virus into the cells. In this regard, shikimic acid is the key intermediate in the synthesis of oseltamivir phosphate, the FDA-approved drug known as Tamiflu®, an efficient NA enzyme inhibitor used in the treatment of influenza infection. 86 Consequently, it prevents the virus from attaching to MDCK cells. 16,20,28,87 In conclusion, the main mechanism, virucidal activity, is probably a direct interaction of the nanoformulations with the H5N1 virus, which leads to its inactivation at early infection stages. This main effect may be accompanied by additional effects on viral replication and adsorption, especially for MSNs-NH2-SH. In future practical applications, such nanoformulations could be administered by oral or injection routes, depending on the intended application of MSNs. This is in line with the practical application of antiviral drugs such as oseltamivir.

Effects of Tested Materials on Lymphocyte Cell Viability

To perform immunological experiments, we initially assessed the effects of nanoparticles, prodrugs, and nanoformulations on lymphocyte cell viability (Figure 7A). At a concentration of 25 µg/mL, we found no differences between cells treated with nanoparticles, prodrugs, and nanoformulations; cell viability was more than 96%. At a concentration of 50 µg/mL, decreased viability was observed for some treatments. The greatest cytotoxic effect was found when cells were treated with 100 µg/mL, compared to 25 µg/mL and 50 µg/mL. Consequently, 25 µg/mL was selected as a safe dose for subsequent investigations. Our results for the nanoparticles agree with results in human peripheral blood lymphocytes treated with silica nanoparticles in vitro. 88,89

Effects of Nanoformulations on Lymphocyte Stimulation Index

Figure 7 In vitro cytotoxicity evaluation in immune cells and in vivo anti-inflammatory activity. Lymphocyte cell viability with nanoparticles, modified nanoparticles, prodrugs and their mixtures, nanoformulations, and combined nanoformulations (A). Lymphocyte stimulation index for nanoparticles, modified nanoparticles, prodrugs and their mixtures, nanoformulations, and combined nanoformulations (B). Anti-inflammatory activity in carrageenan-treated rats as the change in paw thickness (mm) (C). Data are expressed as mean ± SD.
*p<0.05; medium significance (**); high significance (***). Abbreviations: MSNs, mesoporous silica nanoparticles; MSNs-NH2, MSNs modified with amino groups; MSNs-NH2-SH, MSNs-NH2 loaded with SH; MSNs-NH2-QR, MSNs-NH2 loaded with QR; MSNs-NH2-SH-QR1, MSNs-NH2 loaded with SH and QR; MSNs-NH2-SH-QR2, the same formulation used at a doubled amount; SH/QR mix.1, mixture of SH and QR; SH/QR mix.2, mixture of SH and QR used at a doubled amount; SH pure, shikimic acid; QR pure, quercetin; IC50, the half-maximal inhibitory concentration; SD, standard deviation.

The lymphocyte proliferative response to mitogen, expressed as the lymphocyte stimulation index (LSI), reflects the potential to enhance or inhibit cellular immunity in response to treatment. An LSI ≤ 2.5 indicates suppression of lymphocyte proliferation, whereas an LSI ≥ 2.5 indicates enhanced lymphocyte cell division. When lymphocytes are exposed to PHA as a stimulus (PHA-stimulated cells), it triggers high production of cytokines and other immunomodulators. 90 We performed tests to explore the effects of the samples on the LSI (Figure 7B). At 12.5 µg/mL, the nanoparticles did not show any significant differences from the untreated control, with an LSI close to 2.5. The prodrugs showed different responses: SH and SH/QR mix.2 significantly suppressed the LSI to 2.2 ± 0.04 and 2.3 ± 0.04, respectively, whereas QR and SH/QR mix.1 had results similar to the control. The nanoformulations significantly (p<0.05) suppressed the LSI (MSNs-NH2-SH, 2.3 ± 0.07; MSNs-NH2-QR, 2.4 ± 0.08; MSNs-NH2-SH-QR1, 1.9 ± 0.03; and MSNs-NH2-SH-QR2, 1.7 ± 0.10). Interestingly, as the concentration increased to 50 µg/mL, significantly greater suppression was observed when nanoformulations and prodrugs were used. The strongest suppression effect was found for the nanoformulations, followed by the prodrugs. The lowest LSI was seen when cells were treated with MSNs-NH2-SH-QR2 (1.29 ± 0.28). These findings demonstrate that the LSI is concentration dependent. Our results are in line with previous studies in which lymphocytes activated by PHA were used (eg, resveratrol 91 and Yerba mate Ilex paraguariensis 92). As nanoformulations containing a concentration equivalent to 25 µg/mL of prodrug inhibited immune T lymphocyte proliferation, they may have a beneficial effect as anti-inflammatory or immunomodulation factors, modulating the molecular signaling pathways activated in the development of diseases such as autoimmune diseases, bacterial and viral infections, and cancers. This effect may be due to the drug-carrier nanostructure of MSNs mediating the inhibition of lymphocytes 93 and may be associated with the prodrug activities, such as antioxidant activity. 94
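For readers unfamiliar with the LSI readout, a minimal sketch of the ratio and the 2.5 cutoff described above follows; the absorbance values and the MTT-style readout are assumptions for illustration, not necessarily the assay actually used.

```python
def stimulation_index(stimulated_od, unstimulated_od):
    """LSI: proliferation of PHA-stimulated cells relative to unstimulated cells."""
    return stimulated_od / unstimulated_od

# Hypothetical absorbance readings (stimulated, unstimulated); illustrative only.
samples = {
    "untreated control": (1.25, 0.50),
    "MSNs-NH2-SH-QR2 (50 ug/mL)": (0.65, 0.50),
}

# The 2.5 cutoff follows the interpretation given in the text above.
for name, (stim, unstim) in samples.items():
    lsi = stimulation_index(stim, unstim)
    verdict = "suppression" if lsi <= 2.5 else "enhanced proliferation"
    print(f"{name}: LSI = {lsi:.2f} -> {verdict}")
```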
Effect of Nanoformulations on LPS-Induced Nitric Oxide, TNF-α, and IL-1β Levels in vitro

The innate immune system includes phagocyte compartments (macrophages, neutrophils, and dendritic cells), proteins, and natural killer cells. 95 Neutrophils produce the first defensive action against an attack in innate immunity, but macrophages are most important for the activation of inflammatory responses. 96 High amounts of various kinds of cytokines (eg, TNF-α, IL-1β, IL-6) and NO are secreted by macrophages. Thus, they play a crucial role in regulating immunopathological phenomena throughout the inflammatory reactions of various diseases. These findings confirm that the treatment of macrophages with nanoparticles does not modulate cytokine (TNF-α and IL-1β) or NO production compared to LPS-stimulated macrophages; they have an effect similar to that of LPS-stimulated cells. The prodrugs had a moderate inhibitory effect, especially SH/QR mix.2. SH seems to be a stronger inhibitor than QR. However, they are still not adequate to modulate the three examined parameters. The nanoformulations, especially MSNs-NH2-SH-QR2 and MSNs-NH2-SH-QR1, exert a strong inhibitory effect on the secretion of TNF-α, IL-1β, and NO in LPS-stimulated macrophages. Consequently, this desirable effect depends on SH and QR being carried by the nanoparticles. The mechanism by which the nanoformulations modulate inflammatory mediators within immune cells appears to be complex. As the nanoformulations consist of prodrugs and nanoparticles, one can expect two possible mechanisms, one related to the prodrug effect and one related to the nanoparticle effect. Regarding the prodrug effect, in SH-treated LPS-induced macrophages in vitro and in vivo, suppression of NO, TNF-α, and IL-1β occurs through the ERK 1/2 and p38 phosphorylation pathway. 102 Their down-regulation by SH can occur through inhibition of nuclear factor-kappa B (NF-κB) via the phosphorylation of signaling proteins in the mitogen-activated protein kinase (MAPK) family. 103 As regulatory signaling, the MAPK cascade (including ERK, c-Jun N-terminal kinases (JNKs), and p38 phosphorylation) is accountable for various functions in macrophages. 104 Inhibition of the MAPK pathway by different inhibitors blocks the release and prevents the action of inflammatory cytokines, including TNF-α and IL-1β, 105 as well as the synthesis of NO. Several studies have indicated the capacity of QR to modulate pro-inflammatory cytokines and NO in LPS-stimulated macrophages through the MAPK and NF-κB pathways, among others. 106,107 Concerning the nanoparticle effect, depending on their unique physicochemical characteristics, nanoparticles interact with immune cells and proteins, causing stimulation or suppression of innate immune responses. 108 In this context, some studies have indicated that nanoformulations largely enhance the suppression of pro-inflammatory cytokines and NO, comparable to drugs evaluated in LPS-stimulated macrophages. 105,109 We think the superior results for our nanoformulations provide evidence that their immunomodulation/anti-inflammatory actions occur through the MAPK signaling pathway; consequently, they could play critical roles in targeting inflammatory diseases. At the same time, we propose an indirect link between the immunomodulatory effect and the results of the inhibition of H5N1 virus. Several previous studies have shown the relationship between the immunomodulation effect and virus inhibition. [42][43][44][45][46]

Anti-Inflammatory Effect of Combined Nanoformulations in vivo

We confirmed the anti-inflammatory effects described above using an animal model of inflammation induced by carrageenan in the paws of rats, a well-known model used in the development of anti-inflammatory drugs. We selected the effective nanoformulation (MSNs-NH2-SH-QR2) and compared it to MSNs, the mixed pure prodrugs, and a standard drug used in therapy (indomethacin). As shown in Figure 7C, treatment with SH/QR mix.2, MSNs-NH2-SH-QR2, and indomethacin significantly (p<0.05) reduced the paw thickness at 1 h to 1.8 ± 0.17 mm, 1.6 ± 0.07 mm, and 1.10 ± 0.1 mm, respectively, compared to 2.2 ± 0.1 mm and 2.27 ± 0.06 mm for the controls and MSNs. In addition, SH/QR mix.2 and MSNs-NH2-SH-QR2 had similar efficacy in reducing paw edema at 1 h, but MSNs-NH2-SH-QR2 had an enhanced effect at 4 h and 24 h compared to SH/QR mix.2.
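Before comparing the later time points, the sketch below illustrates the kind of two-sample test that typically underlies such p<0.05 statements on paw thickness; the per-animal measurements are invented so that the group means echo the reported ones, and the study's actual statistical procedure may differ.

```python
import numpy as np
from scipy import stats

# Hypothetical paw-thickness measurements (mm) at 1 h, n=5 rats per group;
# group means mirror the values quoted above, but the raw points are invented.
control = np.array([2.1, 2.2, 2.3, 2.2, 2.2])
nanoformulation = np.array([1.5, 1.6, 1.7, 1.6, 1.6])  # MSNs-NH2-SH-QR2

# Independent two-sample t-test between the two groups.
t_stat, p_value = stats.ttest_ind(control, nanoformulation)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("significant (p < 0.05)" if p_value < 0.05 else "not significant")
```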
Furthermore, indomethacin had a superior capacity to suppress inflammation-induced paw swelling compared to MSNs-NH2-SH-QR2 at early time points (1 h and 4 h) but had almost similar anti-inflammatory properties to MSNs-NH2-SH-QR2 at 24 h (Figure 7C). The observed delayed reduction in paw edema by MSNs-NH2-SH-QR2 could be due to the slow release of SH and QR from the MSNs. Another possibility is the enhancement of the solubility and bioavailability of the prodrugs in the nanoformulations compared to the pure mixed compounds. These findings indicate that MSNs-NH2-SH-QR2 is more effective over a long period of time than the pure mixed prodrugs of SH/QR mix.2 and indomethacin. The enhanced efficacy of MSNs-NH2-SH-QR2 in vivo is in agreement with the results obtained from the in vitro studies, which suggests that the inhibition of inflammation-induced paw edema could be attributed to the down-regulation of pro-inflammatory cytokine and NO levels. MSNs-NH2-SH-QR2 modulates immune responses and may find application as an anti-inflammatory agent to replace synthetic drugs, such as indomethacin, in the near future. Our results are consistent with previous studies investigating nanoscale preparations and pure forms of drugs/bioactive compounds in carrageenan-induced inflammation in Wistar rats treated with α-tocopheryl polyethylene glycol 1000 succinate-stabilized curcumin nanoparticles 110 or with dexamethasone loaded in a nanostructured lipid carrier. 111

Conclusions

For the first time, we propose an effective antiviral-targeted nanoformulation in which natural prodrugs inactivate avian influenza virus H5N1. The nanoformulations consist of mesoporous silica nanospheres (MSNs) functionalized with amine groups (NH2) and loaded with shikimic acid (SH) or quercetin (QR) prodrugs. The nanoformulations displayed both antiviral and anti-inflammatory effects. MSNs-NH2-SH and MSNs-NH2-SH-QR2 caused full inhibition of both virus titers and plaque formation 48 hpi. The nanoparticles and prodrugs alone are less efficient antiviral formulations than the drug-loaded mesoporous nanospheres, and the effect depends on the concentration of nanoparticles in the nanoformulation. SH was particularly efficient in fighting the virus compared to QR, even at low loading into the MSNs; MSNs-NH2-SH and MSNs-NH2-SH-QR2 displayed the strongest antiviral effects. The main mechanism of antiviral action of the nanoformulations against H5N1 was direct interaction between the virus and the nanoparticles, which inactivates the virus at early stages. MSNs-NH2-SH-QR2, followed by MSNs-NH2-SH, had the strongest inhibitory effect on the inflammatory modulators NO, TNF-α, and IL-1β, reducing them by more than 50% of their concentration in immune macrophages. This efficient modulatory effect results in superior anti-inflammatory activity in an in vivo animal model. Loading two prodrugs into MSNs opens the way towards multidrug-carrying nanoformulations for novel nanotherapeutic applications. The proposed nanoformulations against avian H5N1 virus could effectively act as novel antiviral agents through direct (virucidal mechanism) and indirect (immunomodulatory) pathways. These first preliminary results are a step towards the development of a novel therapy for influenza infections, as well as anti-inflammatory nanotherapy.

Funding

This work was supported by the National Research Centre (NRC, Egypt) under NRC internal research project fund 11010310. We thank the Egyptian Science and Technology Development Fund (STDF, Egypt) for funding under grant 5175.
Many thanks to the National Center for Research and Development, Poland (STRATEGMED3/306888/3/NCBR/2017, project iTE, Poland). Part of the research was carried out using equipment funded by the CePT project, Poland (reference: POIG.02.02.00-14-024/08), European Regional Development Fund for Poland, Operational Programme "Innovative Economy" for 2007-2013. Special acknowledgement: the authors dedicate this work to the spirit of our co-author Dr. Asmaa M.M. Salman from the National Research Centre (NRC, Egypt), who passed away as a result of her long struggle with disease during the preparation of the manuscript for resubmission to the journal.
The regulatory effects of second-generation antipsychotics on lipid metabolism: Potential mechanisms mediated by the gut microbiota and therapeutic implications

Second-generation antipsychotics (SGAs) are the mainstay of treatment for schizophrenia and other neuropsychiatric diseases but carry a high risk of disruption to lipid metabolism, which is an intractable therapeutic challenge worldwide. Although the exact mechanisms underlying this lipid disturbance are complex, an increasing body of evidence has suggested the involvement of the gut microbiota in SGA-induced lipid dysregulation, since SGA treatment may alter the abundance and composition of the intestinal microflora. The subsequent effects involve the generation of different categories of signaling molecules by gut microbes, such as endogenous cannabinoids, cholesterol, short-chain fatty acids (SCFAs), bile acids (BAs), and gut hormones that regulate lipid metabolism. On the one hand, these signaling molecules can directly activate the vagus nerve or be transported into the brain to influence appetite via the gut–brain axis. On the other hand, these molecules can also regulate related lipid metabolism via peripheral signaling pathways. Interestingly, therapeutic strategies directly targeting the gut microbiota and related metabolites seem to have promising efficacy in the treatment of SGA-induced lipid disturbances. Thus, this review provides a comprehensive understanding of how SGAs can induce disturbances in lipid metabolism by altering the gut microbiota.

Introduction

The use of antipsychotic medications as a treatment for patients with schizophrenia is surging, and the incidence of schizophrenia is also rising dramatically worldwide (Hert et al., 2011; Gonçalves et al., 2015). However, long-term use of these drugs can cause numerous adverse effects in patients, especially the disruption of lipid levels, including high-density lipoprotein (HDL), low-density lipoprotein (LDL), triglyceride (TG), and total cholesterol (TC) levels (Jaberi et al., 2020). Although individuals with schizophrenia may exhibit dyslipidemia before the initiation of treatment, mounting evidence has shown that antipsychotics can independently induce further abnormalities. It has been noted that patients with first-episode schizophrenia have abnormal lipid profiles and that those with multiple-episode schizophrenia are more likely to have dyslipidemia (Mackin et al., 2007; Vancampfort et al., 2015; Mhalla et al., 2018; Pillinger et al., 2019; Yang et al., 2022). Notably, second-generation antipsychotics (SGAs) have stronger associations with lipid abnormalities than first-generation antipsychotics (FGAs) (Buhagiar and Jabbar, 2019). Metabolic abnormalities, especially lipid metabolism disorders, are major risk factors contributing to cardiovascular events (Fan et al., 2013). They also play a role in the pathophysiological process of systemic organ damage and are causative factors in the development and progression of atherosclerotic cardiovascular disease. According to previous reports, patients with schizophrenia have a life expectancy that could be 15 years shorter than that of the general population. Additionally, more than two-thirds of patients with schizophrenia die from coronary heart disease, a rate significantly higher than the mortality rate in the general population (Hennekens et al., 2005).
A growing body of research has indicated that SGA-induced disturbances of lipid metabolism and other metabolic abnormalities are key factors linked to the increased risk of cardiovascular disease in patients with schizophrenia, in addition to confounding risk factors such as smoking, physical inactivity, an unhealthy lifestyle, and poor dietary habits (Arias et al., 2018). Although the particular processes by which SGAs cause dysfunctional lipid metabolism are complex, a growing body of evidence suggests that the gut microbiota is involved in SGA-induced defects in lipid metabolism. From birth, humans have microbes in their digestive tract (Yatsunenko et al., 2012). Hundreds of millions of microorganisms, including bacteria, fungi, and viruses, exist in the healthy human gastrointestinal system, forming a microbial community that has a major impact on the body (Mirzaei and Maurice, 2017). A large number of these bacteria make up the collective intestinal flora. The intestinal flora contains 1000 to 1500 species of bacteria, which outnumber the body's cells by more than 10 times (Kim and Jazwinski, 2018) and carry more than 100 times as many genes as the human genome (Cox et al., 2019). These bacteria play important roles in host metabolism, digestion, the immune system, and the central nervous system (John and Mullin, 2016; Rogers et al., 2016; Dinan and Cryan, 2017; Ipci et al., 2017; Kanji et al., 2018). The theory that the gut microbiota affects lipid metabolism has been extensively studied in mice. For example, germ-free (GF) mice on a chow diet showed lower fasting systemic TG, TC, HDL cholesterol, and portal vein TG (Martinez-Guryn et al., 2018), as well as higher liver cholesterol and lower TG levels, than conventionally raised (Conv-R) mice (Rabot et al., 2010). Rabot et al. found that Conv-R mice had increased blood TG, HDL, and TC levels after consuming a high-fat diet (Rabot et al., 2010). To maintain the same weight as Conv-R mice, GF mice had to increase their caloric intake by at least 30% (Hsiao et al., 2008). Further evidence that the intestinal flora affects lipid metabolism has been observed in fecal transplantation experiments. Turnbaugh et al. showed for the first time that the ability of the gut microbiota to harvest energy from the diet is a transmissible trait: GF mice colonized with an "obesity microbiota" had a much greater increase in total fat than GF mice colonized with a "lean microbiota" (Turnbaugh et al., 2006). Similarly, obese patients with reduced microbial gene abundance (40%) showed more pronounced metabolic disturbances and had increased total serum cholesterol and serum TG levels (Cotillard et al., 2013). The sex and age of the host, as well as the site in the gastrointestinal tract, influence the makeup and variety of the intestinal flora (Kim and Jazwinski, 2018; Cox et al., 2019). Independent of host variables, diet, lifestyle, and medicine can alter the composition of the gut flora (Kanji et al., 2018). Studies in recent years have shown that SGAs have some antibacterial activity and can alter the gut microbiota of patients with psychosis (Nehme et al., 2018; Ait Chait et al., 2020). Olanzapine has direct in vitro antibacterial effects against the mammalian gut bacteria Escherichia coli and Enterococcus faecalis, which are among the most common species in the intestine (E. coli: Proteobacteria; E. faecalis: Firmicutes) (Morgan et al., 2014).
Similarly, chlorpromazine (Kristiansen, 1979) has shown antibacterial effects against Mycobacterium tuberculosis in vitro, and thioridazine (Thorsing et al., 2013) acts against methicillin-resistant Staphylococcus aureus. These medications targeted a more similar pattern of species than their degree of chemical similarity would suggest (Maier et al., 2018). This raises the possibility that direct bacterial inhibition by SGAs is not merely a side effect but also part of their molecular mechanism. The effect of gut microbes on lipid metabolism has been supported by many in vivo and in vitro studies, and evidence of the effect of SGAs on gut microbes is gradually emerging with the advancement of microbiological research techniques. Furthermore, positive results have been achieved with therapeutic strategies that directly target the gut microbiota and related metabolites, thereby ameliorating antipsychotic-induced disorders of lipid metabolism. This certainly identifies the gut microbiome as a potential target and establishes that the potential mechanism underlying the lipid metabolism disturbances associated with antipsychotics is worthy of further investigation. However, the role of the gut microbiome in antipsychotic-induced disorders of lipid metabolism has not been systematically explained. This review aims to provide a comprehensive understanding of the potential for SGAs to alter the gut microbiota and promote adverse lipid metabolism events. The keyword search on PubMed, conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, is detailed in Figure 1.

The critical impact of SGAs on the gut microbiota: Evidence from animals and humans

Research on the microbiota in schizophrenia patients treated with SGAs is very scarce within the general microbiota literature. Several studies have investigated the effect of SGAs on the gut microbiome in animal and human models (Table 1; Table 2). Mice treated with risperidone (Ridaura et al., 2013; Bahr et al., 2015b; Riedl et al., 2021) and olanzapine (Morgan et al., 2014) have increased ratios of Firmicutes to Bacteroidetes, which is one of the distinguishing features of the microbiota of obese individuals (Turnbaugh et al., 2006; Schwiertz et al., 2010; Ferrer et al., 2013). Firmicutes and Bacteroidetes are the two major phyla of the human intestinal microbiome, constituting the majority of intestinal bacteria (approximately 90%) (Human Microbiome Project Consortium, 2012). These findings have been well replicated in humans (Bahr et al., 2015a; Yuan et al., 2018; Ma et al., 2020). Exceptions to this rule are Kao et al. (Kao et al., 2018) and Pełka-Wysiecka et al. (Pełka-Wysiecka et al., 2019), whose studies showed no significant effects of olanzapine on the gut microbiota in female rats or in women with schizophrenia, respectively. The results of studies on changes in the phylum Actinobacteria are also inconsistent: Bahr et al. (Bahr et al., 2015b) found an increase in the relative abundance of Actinobacteria in the feces of mice treated with risperidone, whereas Davey et al. (Davey et al., 2012) showed that the relative abundance of Actinobacteria was decreased in mice administered olanzapine. During olanzapine treatment, the relative abundance of Erysipelotrichi and Gammaproteobacteria increased, while the relative abundance of Bacteroidia decreased (Morgan et al., 2014).
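Since several of the findings above are summarized through the Firmicutes-to-Bacteroidetes ratio, a minimal sketch of how this ratio is derived from phylum-level relative abundances may be useful; the abundance values below are hypothetical and are not taken from any of the cited studies.

```python
# Hypothetical phylum-level relative abundances (fractions summing to ~1)
# before and after treatment; illustrative values only.
before = {"Firmicutes": 0.55, "Bacteroidetes": 0.35,
          "Proteobacteria": 0.06, "Actinobacteria": 0.04}
after = {"Firmicutes": 0.68, "Bacteroidetes": 0.22,
         "Proteobacteria": 0.06, "Actinobacteria": 0.04}

def fb_ratio(abundances):
    """Firmicutes-to-Bacteroidetes ratio from phylum-level relative abundances."""
    return abundances["Firmicutes"] / abundances["Bacteroidetes"]

print(f"F/B before treatment: {fb_ratio(before):.2f}")
print(f"F/B after treatment:  {fb_ratio(after):.2f}")  # higher ratio suggests an obesity-like shift
```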
Both Erysipelotrichi and Gammaproteobacteria are associated with non-alcoholic fatty liver disease (NAFLD) independent of weight gain (Spencer et al., 2011; Henao-Mejia et al., 2012). Risperidone treatment increased the relative abundance of Allobaculum spp., Bacteroides spp., Bifidobacterium spp., and E. coli and decreased the relative abundance of Lactobacillus spp., Alistipes spp., Akkermansia spp., and the Clostridium coccoides group (Ridaura et al., 2013; Bahr et al., 2015b; Yuan et al., 2018). It should be noted that the results of studies on the change in the relative abundance of Bacteroides spp. are inconsistent: one study showed an increase, while another showed no significant change (Bahr et al., 2015b; Yuan et al., 2018). Furthermore, the abundance of Bifidobacterium spp. in the feces of mice treated with risperidone was negatively correlated with serum LDL levels, and that of E. coli was negatively correlated with serum TG levels (Yuan et al., 2018). However, there is little evidence regarding the relationship between changes in lipid metabolism and alterations in the gut microbiota in humans following SGA treatment. Of particular interest is the search for potential probiotic bacteria such as Akkermansia muciniphila, a previously reported 'lean gut microbiota' species. A. muciniphila, a member of the phylum Verrucomicrobia, is the only species of the genus Akkermansia. It is a mucin degrader in the intestine, and its abundance is significantly and negatively associated with altered fat metabolism and obesity (Henao-Mejia et al., 2012; Schneeberger et al., 2015). A significantly reduced abundance of fecal A. muciniphila was found in patients with bipolar disorder who were treated with a range of SGAs, such as clozapine, olanzapine, and risperidone, compared to controls (Flowers et al., 2017).

SGA-induced lipid disorders: An intimate involvement with microbiota

To describe the relationship between the intestinal flora and SGA-induced lipid metabolism disturbances, Davey et al. investigated the effect of antibiotic-induced alterations in the gut microbiota on the metabolism of female rats treated with olanzapine (Davey et al., 2013). They found that clinically relevant doses of olanzapine induced metabolic disturbances and weight gain, and that co-treatment with a cocktail of broad-spectrum antibiotics, including oral neomycin, metronidazole, and polymyxin, reversed the increase in the proportion of Firmicutes relative to Bacteroidetes as well as the olanzapine-induced metabolic disturbances and weight gain. Morgan et al. subsequently found, consistent with these observations, that germ-free C57BL/6J mice fed a high-fat diet were initially protected, but that olanzapine-induced metabolic disturbances and weight gain emerged soon after gut microbial colonization (Morgan et al., 2014).
Further experimental work has been conducted in mice treated with prebiotics in combination with SGAs. Coadministration of olanzapine and the prebiotic B-GOS led to a significant increase in circulating levels of TNF-α (which has been reported to affect lipid metabolism), elevated fecal Bifidobacterium spp., and reduced body weight in mice, and these effects were not seen in response to olanzapine or B-GOS treatment alone (Kao et al., 2018). The probiotic A. muciniphila was observed to have a similar effect. These studies suggest that intestinal microbes are necessary and sufficient for SGA-induced disruption of lipid metabolism. It is worth noting that none of these experiments have been replicated in humans.

Sex differences in SGA-induced lipid disorders: A potential role of microbiota?

Accumulating evidence shows that female patients who take SGAs seem to have poorer lipid profiles than male patients, as well as a higher prevalence of metabolic syndrome and cardiovascular risk factors, including weight gain and dyslipidemia. It is noteworthy that sex-dependent differences in host metabolism may be associated with the gut microbiota (Wu et al., 2007; Lange et al., 2017). For example, in a population-based cross-sectional investigation, women usually had considerably higher Firmicutes:Bacteroidetes ratios than men (Koliada et al., 2021). Given that SGA-induced lipid disturbances are frequently associated with an increased ratio of Firmicutes to Bacteroidetes, this finding raises the possibility that women are more susceptible than men to abnormal lipid metabolism (Morgan et al., 2014). Another substantial indication that men and women have different microbiota is the fact that sex hormones can affect the composition of the host microbiome. Significant changes in the host gut microbiota, such as a drop in the abundance of butyrate-producing bacteria and a decline in alpha diversity, are linked to the elevated levels of estrogen in pregnant women (Koren et al., 2012). These differences could substantially shape sex differences in SGA-induced changes in lipid metabolism. Unfortunately, the available studies are not sufficient to systematically explain the link between sex differences in gut microbes and sex differences in antipsychotic-induced disorders of lipid metabolism. However, this phenomenon might offer some guidance for future studies on sex differences in the side effects of SGAs.

Mechanisms of SGA-induced disorders of lipid metabolism mediated by the intestinal microbiota

Microorganisms and their metabolites are crucial for understanding how the gut microbiome is implicated in SGA-induced systemic lipid disorders (Skonieczna-Żydecka et al., 2019). Short-chain fatty acids (SCFAs), bile acids (BAs), and neurotransmitters are among the metabolites that the intestinal microbiota can produce. Bacteroidetes and Firmicutes, which account for approximately 20% and 60% of the total intestinal flora, respectively, can produce butyric acid, while Proteobacteria and Actinobacteria (approximately 5%-10% and 3% of the flora, respectively) produce very small amounts of SCFAs. Sulfate-reducing bacteria may use lactic acid to make acetic acid and hydrogen sulfide, while Veillonellaceae can convert it to propionic acid. Bacteroidetes is a phylum that can convert succinic acid to propionic acid, and its population density is related to the amount of propionic acid in the intestine (Karlsson et al., 2013).
The dominant genera for BA production are Lactobacillus, Bifidobacterium, Enterobacter, Anaplasma, and Clostridium (Krautkramer et al., 2021). In addition, Candida, Streptococcus, and Escherichia can produce 5-hydroxytryptamine (5-HT; serotonin) (Krautkramer et al., 2021). Approximately 36% of the small molecules in human blood are produced or modified by microbial metabolism. The total SCFA concentration in the colon of conventionally raised animals is roughly 100 times higher than that of GF mice. Acetic acid is the most concentrated SCFA in organisms and is central to carbohydrate and lipid metabolic pathways (Kimura et al., 2020). Miller and Wolin used radioisotope analysis to show that the main pathway for bacterial production of acetate is the Wood-Ljungdahl pathway (Miller and Wolin, 1996). Moreover, olanzapine treatment of patients with schizophrenia significantly increased plasma acetate concentrations (Kao et al., 2018). Increased levels of the Kyoto Encyclopedia of Genes and Genomes (KEGG) metabolic pathways for butyric acid and propionic acid were found in a group of schizophrenia patients treated with risperidone (Bahr et al., 2015a). Hepatocytes create primary BAs, which are then 7-dehydroxylated by intestinal bacteria to produce secondary BAs. The gut microbiome affects the composition of the BA pool, such as the ratio of primary to secondary BAs, and thus affects the function of BAs, especially in the metabolism of lipids (Wahlström et al., 2016). SGAs can cause an increase in total serum BAs. Specific antipsychotics, including chlorpromazine (Breuer, 1965), olanzapine (Lui et al., 2009), haloperidol (Fuller et al., 1977), risperidone (Wright and Vandenberg, 2007), and quetiapine (Shpaner et al., 2008), have been reported to cause cholestasis in a small number of patients taking the medication, unpredictably and with no significant correlation with dose or duration of administration. The intestinal microbiota signals to enteroendocrine (EE) cells through metabolites in multiple ways, resulting in the secretion of a range of gut hormones, such as glucagon-like peptide 1 (GLP-1), 5-HT, gastrin, leptin, cholecystokinin (CCK), and peptide tyrosine-tyrosine (PYY) (Martin et al., 2019). First, the microbiota produces SCFAs, which signal to EE cells through free fatty acid receptor 2 or 3 (FFAR2/3) (Offermanns, 2014) or by inhibiting nuclear histone deacetylases (HDACs) (Waldecker et al., 2008; Fellows et al., 2018; Larraufie et al., 2018). Second, secondary BAs signal to EE cells via the Takeda G-protein-coupled BA receptor TGR5 or the nuclear receptor known as the farnesoid X receptor (FXR) (Wahlström et al., 2016). Numerous human and animal studies have demonstrated that leptin, ghrelin (Sentissi et al., 2008), and 5-HT levels (Bahr et al., 2015a) have a substantial positive link to aberrant lipid profiles and body mass index before and after SGA treatment in schizophrenia patients. This supports the idea that the intestinal flora and its metabolites play an important role in SGA-induced metabolic abnormalities.

Specific microorganisms can synthesize specific lipids

There appear to be distinct bacteria that are more or less related to specific classes of lipids. Gut commensal microorganisms (Bacteroides, Prevotella, and Porphyromonas) are significantly altered by SGAs, and they can produce sphingolipids, including ceramide phospholipids and deoxysphingolipids (Brown et al., 2019).
Acute SGA treatment dramatically alters the homeostasis of central and peripheral sphingolipids (Castillo et al., 2016; Weston-Green et al., 2018). Notably, sphingolipids from bacteria can be incorporated into the mammalian sphingolipid pathway (Johnson et al., 2020). The probiotic Bacteroides has also been shown to produce the endocannabinoid-like molecule N-acyl-3-hydroxypalmitoyl-glycine (commendamide) (Cohen et al., 2015; Lynch et al., 2017). Furthermore, olanzapine-induced metabolic effects have been shown to be dependent on the endogenous cannabinoid system (Abolghasemi et al., 2021). Everard et al. showed that treating obese mice with A. muciniphila increased intestinal 2-oleoylglycerol (2-OG), 2-arachidonoylglycerol (2-AG), and 2-palmitoylglycerol (2-PG) levels (Everard et al., 2013). However, a recent study reported that A. muciniphila exerted its beneficial effects on metabolism independent of general changes in plasma endocannabinoidome mediators (Depommier et al., 2021). The gut microbiota-endocannabinoid axis is a key topic in the studies listed above, and it is likely to be a new target for SGA-induced lipid metabolism disorders.

Central mechanism

Current evidence suggests that hyperphagic effects are responsible for a large percentage of the observed aberrations in lipid profiles, and a lack of satiety is seen in both human and animal models in the presence of SGAs (Hartfield et al., 2003; Huang et al., 2020). The diversity of the gut flora is vital for appetite and metabolism regulation. Different gut bacteria and metabolites influence the gut's ability to perceive nutrients, thereby influencing the host's appetite and energy metabolism (Oliphant and Allen-Vercoe, 2019). This is where the gut-brain axis comes into play. The gut-brain axis is a fundamental mechanism that links biochemical signals from the gastrointestinal tract to brain function (Carabotti et al., 2015). A sophisticated network of neurons regulates energy homeostasis in the host. Two types of neurons are particularly important for appetite control: neurons that express proopiomelanocortin (POMC) (Baldini and Phelan, 2019) and those that express neuropeptide Y/agouti-related peptide (NPY/AgRP) (Han et al., 2018). These neurons interact with each other to form a switch that instantly adjusts appetite (Quarta et al., 2021); POMC neurons promote satiety, while AgRP neurons increase appetite. On the one hand, gastrointestinal hormones affect the balance of the POMC/AgRP system, which controls appetite. Leptin (Endomba et al., 2020), CCK (Fan et al., 2004), PYY (Loh et al., 2015), GLP-1 (Teff et al., 2013), and 5-HT (Sohn et al., 2011; Bonn et al., 2013) activate POMC neurons via receptors in the hypothalamus and inhibit NPY in AgRP neurons, sending appetite-suppressant signals and regulating energy homeostasis and metabolism. Ghrelin is the only known gut hormone that promotes appetite, acting by directly activating AgRP neurons and increasing the inhibitory effect of AgRP neurons on POMC neurons (Lage et al., 2010; Varela et al., 2011). On the other hand, the intestinal flora metabolites SCFAs and BAs can also influence appetite via the gut-brain axis. An increase in acetate production activates the parasympathetic nervous system, leading to an increase in ghrelin secretion, which promotes host appetite (Perry et al., 2016). SCFAs also have the potential to enter the circulation, cross the blood-brain barrier, and directly affect the central nervous system (Morrison and Preston, 2016).
In addition, circulating BAs can reach the hypothalamus via passive diffusion, and hypothalamic BA levels are highly correlated with circulating BA levels; this causes a brief increase in hypothalamic BA concentrations that triggers the membrane receptor TGR5 expressed on AgRP/NPY neurons, which in turn regulates appetite (Perino et al., 2021). It is worth noting that an imbalance in the amount of proinflammatory pathogenic bacteria can compromise intestinal wall integrity, affecting transmission along the brain-gut axis (Küme et al., 2017; Tilg et al., 2020). One study showed that the mRNA levels of NPY and AgRP were significantly increased in the hypothalamus of olanzapine-administered rats compared with normal animals (Zhu Z et al., 2022). Several of these mechanisms affect neuronal function through the gut-brain axis, leading to hyperphagia and resulting in abnormal lipid profiles. Interestingly, this study showed that olanzapine-induced increases in weight gain percentage (WG%) occurred only when the vagus nerve was intact, while the negative effects of olanzapine, namely increases in white adipose tissue percentage (WAT%) and decreases in brown adipose tissue percentage (BAT%), were reversed by disruption of the gut microbiota-brain axis (vagotomy), suggesting that an intact gut microbiota-brain axis may be necessary for olanzapine-induced disruption of lipid metabolism. Lipopolysaccharide (LPS), a component of the outer membrane of most Gram-negative bacteria, is released upon bacterial cell death and enters the circulation through a "leaky gut", resulting in increased levels of LPS in the blood. This state, known as endotoxemia, is a leading cause of metabolic disorders such as insulin resistance and is promoted by increased IL-6 and tumor necrosis factor (TNF) (Tilg et al., 2020); LPS acts as a powerful stimulator of host immunity (Park and Lee, 2013). LPS is detected by Toll-like receptor 4 (TLR4) on the immune cell surface, resulting in the release of numerous cytokines and chemokines (Rhee, 2014). LPS can also interact directly with lipid molecules: all lipoproteins can bind to LPS and neutralize its toxicity in vitro and in vivo (Barcia and Harris, 2005).

Peripheral tissue

SCFAs

SCFAs are used as a carbon source for the production of important endogenous host metabolites, such as fat and cholesterol (Besten et al., 2013). SCFAs produced by the intestinal flora are rapidly absorbed by colonic cells, due in part to monocarboxylate transporters, including the proton-coupled monocarboxylate transporter 1 (MCT1) and the sodium-coupled monocarboxylate transporter 1 (SMCT1) (Dalile et al., 2019). The principal substrates for lipid synthesis in rat colonic epithelial cells, which convert SCFAs to acetyl coenzyme A (acetyl-CoA), are acetate and butyrate (Zambell et al., 2003). Acetyl-CoA generates energy through the tricarboxylic acid cycle and yields palmitic acid under the action of the cytoplasmic enzyme system; palmitic acid can be transferred to mitochondria for carbon-chain elongation and, together with other substrates, forms the triglycerides stored in adipose tissue. In contrast, SCFAs that are not consumed in colonocytes cross the basolateral membrane into the hepatic portal circulation and provide substrates for hepatocyte energy metabolism. Carbohydrate-responsive element-binding protein (ChREBP) plays a key role in this process (Iizuka et al., 2020).
A member of the acyl-CoA synthetase short-chain family, encoded by Acss2, is induced by ChREBP and converts acetate to acetyl-CoA, which is used as a substrate for lipogenesis (Berg, 1956). Thus, inhibiting hepatic ChREBP, and thereby lipogenic gene expression and hepatic acetyl-CoA production from gut microbial acetate, is expected to prevent SGA-induced TG accumulation. SCFAs are also involved in the biosynthesis of cholesterol and fatty acids in hepatocytes (Dalile et al., 2019). Radiolabeling studies have shown that acetate contributes to increased de novo fat synthesis, and antibiotic-treated mice show reduced de novo fat synthesis (Kindt et al., 2018). In addition, SCFAs are signaling molecules that regulate host-related functions mainly through two routes: HDAC inhibition and G-protein-coupled receptor signaling. SCFAs have been shown to bind to the G-protein-coupled receptors GPR43/FFAR2 and GPR41/FFAR3 (Kimura et al., 2020), leading to further activation of downstream signaling cascades, including the phospholipase C (PLC), mitogen-activated protein kinase (MAPK), phospholipase A2 (PLA2), and nuclear factor-κB (NF-κB) pathways. Acetate inhibits insulin-mediated fat accumulation and improves lipid and glucose metabolism via GPR43. Mice lacking GPR43 were obese on a normal diet, whereas mice specifically overexpressing GPR43 in adipose tissue remained lean even when fed a high-fat diet; both phenotypes were abolished under germ-free conditions or after treatment with antibiotics (Kimura et al., 2013). GPR41 has been shown to regulate host energy homeostasis in a gut microbiota-dependent manner: mice with knockout of the GPR41 gene exhibited a leaner body weight, but this difference was not observed in GF mice (Samuel et al., 2008). SCFAs also activate AMP-activated protein kinase (AMPK), a downstream signal of G-protein-coupled receptor signaling, and AMPK activation increases peroxisome proliferator-activated receptor-γ coactivator 1α (PGC-1α) expression in adipose tissue and skeletal muscle (Taylor et al., 2005; Wan et al., 2014; Yan et al., 2016). In addition, PGC-1α regulates the transcriptional activity of peroxisome proliferator-activated receptor α (PPARα) and peroxisome proliferator-activated receptor γ (PPARγ) (Muoio et al., 2002; Lin et al., 2005). Butyrate and propionate can activate PPARγ (Alex et al., 2013). Activation of liver and adipose tissue PPARγ by SCFAs regulates lipid metabolism by increasing energy expenditure, reducing inflammation in adipose tissue, improving insulin sensitivity, reducing body weight, and decreasing hepatic TG accumulation (Besten et al., 2015). A study in fish showed that the effects of olanzapine on lipid metabolism may be related to regulation of the gut microbiota-SCFA-PPAR signaling pathway (Chang et al., 2022). The gut microbiome was significantly altered in carp administered olanzapine, as evidenced by an increase in the abundance of SCFA-producing bacteria, which led to increased SCFA production. In addition, many genes in the PPAR signaling pathway were significantly altered; specifically, the mRNA levels of genes related to lipid synthesis (including PPARγ, fatty acid synthase (FAS), and SREBP1) were significantly increased, and those of lipolysis-related genes (such as hormone-sensitive lipase (HSL) and PPARα) were significantly decreased.
The activated AMPK signaling pathway can also promote the expression of HSL and adipose triglyceride lipase (ATGL), which promote lipolysis (Cantó and Auwerx, 2010; Deng et al., 2020; Guo et al., 2020; Tang et al., 2020). Jocken et al. performed in vitro experiments with a human white adipocyte model (human multipotent adipose tissue-derived stem (hMADS) cells). Acetate was found to be the main driver of the antilipolytic effect of SCFAs and attenuated HSL phosphorylation in hMADS adipocytes in a Gi-coupled manner (Jocken et al., 2017). This suggests that the effect of SGAs on AMPK may also be an indirect consequence of the activation of AMPK by SCFAs in peripheral tissues. Indeed, olanzapine can reduce AMPK phosphorylation and activation in hepatocytes and 3T3-L1 cells, accompanied by a concomitant increase in SREBP-dependent lipid synthesis (Oh et al., 2011; Li et al., 2016). Interestingly, acetate supplementation did not attenuate olanzapine-induced weight gain in mice but appeared to increase it (Kao et al., 2019a). This concept of SCFA-induced weight gain appears to be consistent with the olanzapine-induced increase in plasma acetate (Kao et al., 2018).

BAs

BAs bind to FXR and TGR5 in the host and regulate lipid and energy metabolism (Chiang and Ferrell, 2019). FXR is a transcription factor that binds to promoter regions and induces the expression of multiple target genes; it is expressed in the liver, ileum, kidney, and other tissues (Lefebvre et al., 2009; Teodoro et al., 2011). The most potent ligand for FXR is chenodeoxycholic acid (CDCA), followed by cholic acid (CA), deoxycholic acid (DCA), and lithocholic acid (LCA), all of which are FXR agonists. In humans, CDCA is converted to ursodeoxycholic acid through a sequence of processes; ursodeoxycholic acid does not activate FXR but rather inhibits FXR activity (Wang et al., 1999; Mueller et al., 2015). In addition, Sayin et al. identified two natural FXR antagonists: the taurine-conjugated murine BAs tauro-α-muricholic acid (TαMCA) and tauro-β-muricholic acid (TβMCA) (Sayin et al., 2013). TGR5 is a membrane G-protein-coupled receptor expressed in tissues such as the intestine, liver, and brown and white adipose tissue. TGR5 is mainly activated by the secondary BAs LCA and DCA (Maruyama et al., 2002; Kawamata et al., 2003). FXR-deficient animals have increased hepatic and serum TG and cholesterol levels (Sinal et al., 2000), indicating that FXR is required for lipid metabolism and energy homeostasis (Trauner et al., 2010). Reduced sterol-response element-binding protein-1c (SREBP-1c) expression caused by natural or synthetic FXR agonists via the FXR-SHP (small heterodimer partner) pathway could explain the inhibitory effect of BAs on TG production (Watanabe et al., 2004). In addition, Caron et al. used immortalized human hepatocyte (IHH) and HepaRG cell lines, which are glucose-responsive human hepatocyte lines, to show that activation of FXR inhibits the transcriptional activity of ChREBP in human hepatocytes (Caron et al., 2013). BAs can also induce, via FXR, the expression of the human PPARα gene, which encodes a nuclear receptor that controls lipid and glucose metabolism and exerts anti-inflammatory effects (Pineda Torra et al., 2003).
The activation of FXR has been shown to induce a decrease in serum apolipoprotein (Apo) CIII concentrations, leading to improved metabolism of TG-rich remnant lipoproteins, reduced serum TG levels, and a better cardiovascular risk profile (Claudel et al., 2003). The FXR signaling pathway in mice and humans is significantly affected by SGAs. At present, pharmacological therapies that target FXR in combination with SGAs are still needed to translate the positive findings of these studies into practical outcomes. Exposure of a mouse precision-cut liver slice (PCLS) model to chlorpromazine significantly altered FXR-regulated cellular transport of cholesterol and BAs, as well as the BA-mediated regulation of glucose and lipid metabolism via FXR (Szalowska et al., 2013). In addition, a study also showed downregulation of FXR targets such as Bsep, Mdr3, Ntcp, and Cyp8b1, consistent with observations in chlorpromazine-treated HepaRG cells (Anthérieu et al., 2013). As a next step, more experiments on the effect of SGAs on FXR are needed to further understand the effects of chlorpromazine. TGR5 has also been shown to be a BA-responsive receptor involved in host lipid metabolism. In muscle and brown adipose tissue, TGR5 may play a role in energy homeostasis by promoting intracellular thyroid hormone activity and thereby increasing energy expenditure (Watanabe et al., 2006). In addition, TGR5 has been shown to activate PPARα and PGC-1α to increase mitochondrial oxidative phosphorylation and energy metabolism (Chiang and Ferrell, 2020). However, there are limited data on changes in TGR5 receptor activity in schizophrenia patients during the use of SGAs.

GLP-1

Ishøy et al. published the first clinical data supporting the use of the GLP-1 agonist liraglutide to treat clozapine-induced lipid profile disturbances and weight gain in schizophrenia (Ishøy et al., 2013). Consistent with this study, Larsen et al. and Siskind et al. demonstrated that GLP-1 agonists can be effective in reducing clozapine- or olanzapine-induced lipid metabolism disorders (Kouidrat and Amad, 2019). GLP-1, a glucose-dependent incretin, plays a crucial role in lipid metabolism and body weight maintenance by binding to the GLP-1 receptor (GLP-1R). Many human tissues, including the pancreas, liver, muscle, fat, gastrointestinal tract, heart, and brain, express GLP-1R (Campbell and Drucker, 2013). When bound to GLP-1, GLP-1R acts through its coupled G protein (Gαs) in pancreatic β-cells, activating adenylyl cyclase and increasing intracellular levels of cyclic adenosine monophosphate (cAMP) (Doyle and Egan, 2007); this increase in cAMP exerts a series of effects. The activation of its downstream effectors, protein kinase A (PKA) (Béguin et al., 1999) and the exchange protein directly activated by cAMP (EPAC) (Kang et al., 2008), leads to calcium influx, increased transcription of the proinsulin gene, and the stimulation of insulin secretion. In addition, GLP-1R may regulate pancreatic β-cell metabolism by activating the phosphoinositide 3-kinase (PI3K)/AKT (protein kinase B)/mTOR (mammalian target of rapamycin) and MAPK signaling pathways (Rowlands et al., 2018).
The binding of GLP-1 to GLP-1R in adipocytes activates the adenylyl cyclase (AC)/cAMP signaling pathway, regulates the apoptosis and proliferation of preadipocytes through various cellular signaling pathways, such as extracellular signal-regulated kinase (ERK), protein kinase C (PKC), and AKT, and alters the expression of PPARγ and its target genes (Challa et al., 2012; Chen et al., 2017). Furthermore, by reducing macrophage infiltration in adipose tissue, GLP-1 can directly block inflammatory signaling pathways, improving insulin resistance, lowering liver fat levels, and considerably alleviating NAFLD (Blaslov et al., 2014). This explains how GLP-1R signaling might decrease the hepatic substrate supply (e.g., glucose and non-esterified fatty acids (NEFAs)) by acting on adipose tissue, which may be partially responsible for the overall effect. GLP-1R-based treatment of metabolic diseases has been reported to act on hepatocyte lipid metabolism through PI3K, type 1 protein phosphatase (PP-1), and PKC (Redondo et al., 2003). Interestingly, one study showed that liraglutide ameliorated hepatocyte steatosis by inducing autophagy through the AMPK/mTOR pathway (He et al., 2016). Additionally, GLP-1 may promote hepatocyte survival by downregulating microRNA-23, resulting in increased expression of PGC-1α and uncoupling protein 2 (UCP2). Recent studies have shown that GLP-1/GLP-1R signaling is involved in the disorders of glucose and lipid metabolism induced by brexpiprazole, a new multitarget antipsychotic drug (APD) approved by the US FDA in 2015. Brexpiprazole administration significantly reduced the protein and mRNA levels of GLP-1 in the pancreas and small intestine by inhibiting Ca2+/calmodulin-dependent kinase IIα (CaMKIIα), AMPK, and β-catenin. Brexpiprazole administration also caused islet dysfunction and decreased GLP-1R, PI3K, and IRβ expression in the pancreas. Cotreatment with liraglutide and brexpiprazole is an effective strategy against certain of these metabolic aberrations.

Leptin

Leptin is involved in lipid metabolism and energy balance through several signaling pathways. Leptin inhibits acetyl-CoA carboxylase (ACC) activity by activating AMPK in skeletal muscle, thereby stimulating the oxidation of fatty acids (Minokoshi et al., 2002). Consistently, another study revealed that the activation of AMPK can have a therapeutic effect on metabolic syndrome only if leptin is present and active (Stockebrand et al., 2013). In addition, p38 MAPK may also contribute to the effect of leptin on fatty acid oxidation (Dardeno et al., 2010). In non-adipose tissues, leptin may promote fatty acid oxidation by activating PPARα-induced CoA expression via signal transducer and activator of transcription 3 (STAT3) (Unger et al., 1999). Maya-Monteiro and Bozza found that leptin can regulate lipid metabolism and inflammation by modulating the PI3K/Akt/mTOR pathway (Maya-Monteiro and Bozza, 2008). Consistent with this report, Schmidt et al. found that olanzapine simultaneously upregulated the mTOR pathway and downstream signaling cascades, including the activation of mTORC1, in mice (Schmidt et al., 2013). mTORC1 activation interferes with lipid and energy metabolism, leading to the upregulation of lipid biosynthesis and the accumulation of TGs. Furthermore, activation of the mTOR pathway inhibits autophagy, thereby increasing intracellular lipid accumulation (Zhuo et al., 2022).
Enhanced mTOR activity disrupts hepatic lipid homeostasis by regulating the expression of the transcription factor SREBP-1c (Takashima et al., 2009).

Strategies for modifying the gut microbiome to ameliorate SGA-induced disorders of lipid metabolism

Pharmacological interventions

To date, the mechanisms of SGA-induced metabolic changes have not been thoroughly investigated. However, in clinical treatment, the side effects of SGAs on lipid metabolism can usually be suppressed by other drugs, and some interventions have yielded significant results. Researchers found that metformin, a biguanide antihyperglycemic agent, had a positive effect on the lipid profile, insulin resistance, and body weight in patients with schizophrenia, which has been supported by animal (Zhu W et al., 2022) and human models (Wu et al., 2016; Vancampfort et al., 2019; Jiang et al., 2020). Interestingly, the intestinal flora plays a vital role in the positive effects of metformin. Luo et al. (Luo et al., 2021) and Wang et al. (Wang et al., 2021) found that metformin not only prevented olanzapine-induced disruption of the lipid profile and hepatic histopathological changes but also partially reversed olanzapine-induced alterations in the gut microbiota and helped correct peripheral and central satiety-related neuropeptide disorders. This finding demonstrated that the gut-brain axis is a mediator by which metformin ameliorates SGA-induced metabolic dysfunction. Statins are also considered a potential preventive and therapeutic approach to reduce SGA-induced weight gain and dyslipidemia in patients with schizophrenia. It has been reported that pravastatin (Vincenzi et al., 2014), atorvastatin (Ojala et al., 2008), lovastatin (Ghanizadeh et al., 2014), rosuvastatin (Hert et al., 2006), or simvastatin (Tajik-Esmaeeli et al., 2017) in combination with SGAs can reduce TC, LDL cholesterol, and TG levels in patients with schizophrenia. Animal studies have shown that statins improve SGA-induced metabolic disturbances partly through statin-mediated modulation of BAT activity and inhibition of the hepatic mTOR signaling pathway (Liu et al., 2019). Interestingly, statins were also recently shown to improve the gut microbiota, which seems to partially explain the associated clinical improvements (Kim et al., 2019; Vieira-Silva et al., 2020).

Non-pharmacological interventions

New biological therapeutic strategies, including probiotics, prebiotics, gut hormones, and fecal microbiota transplantation (FMT), are being explored to directly target the gut microbiota and its metabolic products to improve SGA-induced dyslipidemia. Probiotics have been shown to play a vital role in lipid homeostasis in the host (Table 3). However, there have been few studies on the effects of probiotics and prebiotics on SGA-induced changes in lipid metabolism and energy. Tomasik et al. discovered that probiotics and prebiotics could alleviate SGA-induced gastrointestinal distress (Tomasik et al., 2015). However, the effects of probiotics and prebiotics on SGA-induced changes in lipid metabolism are unclear and controversial because the effects of these factors on lipid metabolism are strain- and population-specific. For example, the probiotic A. muciniphila or the prebiotic B-GOS (Kao et al., 2018) can partially reverse olanzapine-induced disturbances in the gut microbiota and lipid metabolism in rats.
The probiotic mixture VSL#3, a mixture of eight different bacterial probiotic species, was shown to attenuate olanzapine-induced body weight gain, uterine fat deposition, and dyslipidemia (Dhaliwal et al., 2019). Importantly, while their effectiveness has been relatively well documented in animal studies, translation to humans has sometimes yielded conflicting results. Kao et al. found that B-GOS supplementation did not affect SGA-induced weight gain or changes in circulating metabolic markers, contrary to their observations in rats (Kao et al., 2019b). Yang et al. reported that the addition of probiotics, including Bifidobacterium and Lactobacillus, was not sufficient to reduce weight gain in patients with schizophrenia, nor did it significantly improve lipid profiles (Yang et al., 2021). In comparison, the combined use of probiotics and dietary fiber was effective in reducing olanzapine-induced weight gain without any apparent adverse effects while maintaining the desired psychopathological effect (Huang et al., 2022a; Huang et al., 2022b). Therefore, more randomized controlled trials in humans are needed to translate beneficial findings in animals. Indeed, A. muciniphila has been shown to be safe and effective in human trials, and pasteurized A. muciniphila is more effective than live A. muciniphila (Depommier et al., 2019). In addition, the gut hormone GLP-1 has demonstrated the potential to improve SGA-induced disorders of lipid metabolism. The combination of liraglutide, a GLP-1 receptor agonist, and SGAs has potential benefits on body weight and lipid metabolism in patients with schizophrenia, but patients must receive daily subcutaneous injections and have a relatively high rate of adverse events (Whicher et al., 2019). In contrast, FMT, which is being researched as an alternative to SGAs (Settanni et al., 2021), lacks experimental data to demonstrate its potential in SGA-induced metabolic disorders.

Future perspectives

Long-term use of SGAs can cause weight gain and elevate lipid levels, which can lead to an increased risk of metabolic syndrome, thereby increasing the risk that patients will develop hypertension and cardiovascular and cerebrovascular diseases. During this process, the microbiome is both essential and sufficient, and several pathways involved in lipid metabolism have been postulated (Figure 2). First, SGAs directly inhibit the growth of microbial species that produce specific lipids (e.g., endogenous cannabinoids and cholesterol). Second, SCFAs and BAs produced by the gut microbiota can regulate gut hormones such as CCK, PYY, GLP-1, and 5-HT. On the one hand, these signaling molecules can stimulate the vagus nerve or be carried into the brain to affect appetite via the gut-brain axis; on the other hand, they can regulate lipid metabolism via peripheral signaling pathways (Figure 3). However, many unanswered questions remain. Which components of the gut microbiota and host metabolism are chiefly associated with schizophrenia? Are the microbiota changes observed in schizophrenia treated with SGAs secondary to SGA treatment? What are the metabolic side effects of SGAs and their impact on the microbiota? Do changes in the gut microbiota affect the efficacy of SGAs? To answer these questions, further experimental data are needed. This will lead to improved schizophrenia treatment options, individualized therapy, and the prediction and mitigation of side effects.
FIGURE 2. Schematic presentation of the potential mechanism of lipid metabolism disorders secondary to SGA treatment based on the gut microbiota. Treatment with SGAs may increase the relative ratio of Firmicutes to Bacteroidetes bacteria, as well as decrease the relative abundance of Bifidobacterium and Akkermansia muciniphila. The products of the gut microbiota (lipids, LPS, SCFAs, and BAs) change as a result of this transformation. SCFAs activate FFAR2/3 or HDAC. BAs send signals to EE cells through TGR5 or nuclear FXR, allowing EE cells to synthesize and secrete various gut hormones. LPS, SCFAs, BAs, and gut hormones are important players in interorgan crosstalk by affecting appetite, regulating gut integrity, and improving liver, pancreas, and adipose tissue function and lipid metabolism. POMC, Proopiomelanocortin; CART, Cocaine- and amphetamine-regulated transcript; NPY, Neuropeptide Y; AgRP, Agouti-related peptide; FXR, Farnesoid X receptor; HDAC, Histone deacetylase; ATP, Adenosine triphosphate; TGR5, Takeda G protein-coupled receptor 5; FFAR2/3, Free fatty acid receptors 2/3; MCT1, Proton-coupled monocarboxylate transporter 1; SMCT1, Sodium-coupled monocarboxylate transporter 1; BAs, Bile acids; SCFAs, Short-chain fatty acids; LPS, Lipopolysaccharide; SGAs, Second-generation antipsychotics.

Current strategies to modulate the gut microbiome to improve SGA-induced lipid metabolism disturbances are particularly promising. Prebiotics, probiotics, and FMT have achieved certain curative effects in animal experiments. As a next step, more randomized controlled trials in humans are needed to translate the beneficial findings in animals. It is important to observe changes in host lipid metabolism after concurrent administration of SGAs and the abovementioned treatments. Compared with prebiotic therapy and other drug interventions, probiotic treatment offers superior specificity and safety. To develop this specific microbial therapeutic approach, a better understanding of the precise role of microbes in SGA-related lipid metabolism and elucidation of the linkages between specific microbiota and lipid profiles of the gastrointestinal tract will be needed. Furthermore, proper exercise and diet must not be overlooked.

Author contributions

HC collected the literature and wrote the article. TC conducted the preliminary research. BZ and HC supervised the study. All authors reviewed and approved the manuscript.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Regulation of gene expression by photosynthetic signals triggered through modified CO2 availability

Background: To coordinate metabolite fluxes and energy availability, plants adjust metabolism and gene expression to environmental changes through employment of interacting signalling pathways.

Results: Comparing the response of Arabidopsis wild-type plants with that of the mutants adg1, pgr1 and vtc1 upon altered CO2 availability, the regulatory role of the cellular energy status, photosynthetic electron transport, the redox state and concentration of ascorbate and glutathione and the assimilatory force was analyzed in relation to the transcript abundance of stress-responsive nuclear encoded genes and of psaA and psbA, encoding the reaction centre proteins of photosystem I and II, respectively. Transcript abundance of Bap1, Stp1, psaA and psbA was coupled with seven metabolic parameters. Especially for psaA and psbA, the complex analysis demonstrated that the assumed PQ-dependent redox control is subordinate to signals linked to the relative availability of 3-PGA and DHAP, which define the assimilatory force. For the transcripts of sAPx and Csd2, high correlations with the calculated redox state of NADPH were observed in pgr1, but not in wild-type, suggesting that in wild-type plants signals depending on thylakoid acidification overlay a predominant redox signal. The strongest correlation with the redox state of ascorbate was observed for 2CPA, whose transcript abundance regulation, however, was almost insensitive to the ascorbate content, demonstrating dominance of redox regulation over metabolite sensing.

Conclusion: In the mutants, signalling pathways are partially uncoupled, demonstrating dominance of metabolic control of photoreaction centre expression over sensing the redox state of the PQ-pool. The balance between the cellular redox poise and the energy signature regulates sAPx and Csd2 transcript abundance, while 2CPA expression is primarily redox-controlled.

Plants acclimate to environmental changes on time scales ranging from milliseconds to days. In addition to biochemical regulation of enzyme activities, control of gene expression is essential for long-term adjustment of metabolic capacities. However, the photosynthetic electron transport chain and chloroplast metabolism have evolved as patchworks composed of nuclear and plastid-encoded proteins and depend on coordination of gene expression between compartments. In recent years, physiological and mutant approaches addressed the regulatory function of metabolite signals [1], energy status [2] and chloroplast redox signals [3]. For example, carbohydrates induce expression of genes involved in starch biosynthesis and suppress those involved in starch degradation and carbon assimilation [4]. In parallel, in the plastids, expression of e.g. core subunits of the photoreaction centres and the large subunit of RuBisCO is suppressed either by coregulation of transcription or epistatically. Photosynthates further inhibit the Calvin cycle biochemically [5] and decrease the consumption of NADPH and ATP, which results in an increased reduction state of the NAD(P) system and a higher phosphorylation state of adenylates [6,7]. The lack of NADP+ as electron acceptor and the generation of a high trans-thylakoid ΔpH by decreased photophosphorylation reduce the carriers of intersystem electron transport, like the plastoquinone pool (PQ), and stimulate ROS formation [8-10]. PQH2 and ROS are redox signals controlling nuclear and plastid gene expression [11,12].
The combination of decreased NADP+ regeneration and high thylakoid acidification promotes the violaxanthin cycle, which can support biosynthesis of the plant hormone abscisic acid [13] if ascorbate availability is limiting [14]. The plant signalling networks evolved in the context of a strong interference of photosynthates and redox and energy signals. Therefore, in wt, distinction of particular signals is often difficult. Here, for differentiation, gene expression regulation was analyzed in wt, in the thylakoid acidification mutant pgr1 [15], in the starch-biosynthetic mutant adg1 [16] and in the ascorbate-deficient mutant vtc1 [17] in response to the CO2 availability. pgr1 is mutated in the Rieske subunit of the cytochrome b6f complex (PetC; [15]) that is involved in electron and proton transfer processes between photosystem II and I as a plastohydroquinol-plastocyanin reductase. Due to a shift in the pK or the redox potential of the Rieske protein, thylakoid lumen acidification is restricted in the mutant to pH 6 even in high light [18]. Consequently, the mutant has altered capacities for ADP photophosphorylation [19] and PsbS protonation [20], and may be limited in violaxanthin de-epoxidation [21] and cyclic electron transport [22]. In pgr1, high light intensities increase the electron pressure on the PQ pool and release the electron pressure downstream of the cyt b6f complex. Pgr1 was selected in this communication for its limitation in membrane energization and photosynthetic electron transport. adg1 carries a mutation in the small subunit of ADP-glucose pyrophosphorylase [16]. It was selected for its altered carbohydrate metabolism. In contrast to pgr1, in adg1 photosynthetic electron transport is affected only indirectly by carbohydrate-induced feedback inhibition [23]. The third selected mutant, vtc1, carries a point mutation in GDP-mannose pyrophosphorylase [17], which catalyzes precursor formation in the biosynthesis of the low-molecular-weight antioxidant ascorbate. vtc1 was included in this analysis to investigate the importance of ascorbate availability and ascorbate-related redox processes for nuclear and plastid gene expression. Arabidopsis wt and the mutants were compared under conditions of limiting, ambient and saturating CO2 concentrations. Below the CO2 compensation point, RuBisCO preferentially catalyzes oxygenation of ribulose-1,5-bisphosphate and activates photorespiration, which triggers an extra demand for ATP. Per reaction cycle, chloroplastic NADPH consumption is low under photorespiratory as compared to assimilatory conditions. In addition, at very low CO2 concentrations, depletion of the carbohydrate pool may suppress photorespiration and limit photoprotection. Concomitantly, the ROS load increases due to high peroxisomal H2O2 production and chloroplast superoxide generation. At ambient conditions, 30-40% of the carbon flux is diverted to photorespiratory CO2 release, while saturating CO2 suppresses photorespiration and the requirement for ATP relative to NADPH approaches a ratio of 3:2. In parallel, energization and reduction state of the cells are decreased [24]. The hypothesis of the work was that the combination of working with defined mutants and altering CO2 availability for modulation of photosynthesis allows addressing the question of how selected nuclear and plastidic genes are regulated in Arabidopsis in response to redox, metabolite and energy signals.

Results

The work aimed at differentiating signals involved in the control of gene expression.
To this end, Arabidopsis wt and mutants defective in chloroplast starch biosynthesis (adg1; [16]), thylakoid acidification (pgr1; [15]) and the biosynthesis of the major low-molecular-weight antioxidant ascorbate (vtc1; [17]) were compared in relation to their metabolite patterns, photosynthetic performance and transcript amount regulation. Following growth at ambient conditions, the CO2 availability was decreased for 6 h to levels below the CO2 compensation point or increased to saturation of RuBisCO to establish contrasting conditions for photosynthesis with high and low acceptor availability and concomitant high and low reduction pressure.

Metabolic regulation

The energy status

In wt, during the 6 h fumigation period the ADP content increased 1.5-2-fold irrespective of the treatment (Table 1). In parallel, ATP accumulated only in 0 ppm CO2, but decreased in 350 and 2000 ppm CO2. The ATP/ADP ratio was nearly unchanged around 6 in 0 ppm, but decreased to 2.2 and 1.8, respectively, in 350 and 2000 ppm CO2 (Table 1). Mutant metabolism resulted in specific modifications of the wt pattern: in adg1, ATP and ADP accumulated 2.2- and 5.5-fold in 0 ppm CO2 and 1.3- and 1.9-fold in 350 ppm CO2, indicating an increase in the total adenylate concentration, but a decrease in the ATP/ADP ratio. In 2000 ppm CO2 the ATP content increased insignificantly, while the ADP content decreased, resulting in an ATP/ADP ratio similar to the initial values obtained before the treatment (Table 1). The ATP content of vtc1 slightly but steadily decreased with increasing CO2 availability, while the ADP content increased from the lowest levels observed prior to the fumigation experiment 6.2-fold in 0 ppm CO2, 2.8-fold in 350 ppm CO2 and 3.3-fold in 2000 ppm CO2 (Table 1). The high ATP and especially the low ADP contents after 6 h resulted in a strong decrease in the ATP/ADP ratio from 9.8 to around 2 (Table 1). pgr1 is limited in thylakoid acidification [18]. Like in vtc1, the ATP/ADP ratio decreased to values around two during the experiment independent of the CO2 concentration applied (Table 1). However, the relative decrease was much less in pgr1 due to an already much lower initial ATP/ADP ratio (vtc1: 9.8; pgr1: 2.9). In addition to the generally low ATP/ADP ratio, the total adenylate concentration was also low in pgr1, suggesting coupling between adenylate biosynthesis and ADP-phosphorylation efficiency. The ADP contents were similar to those in wt and did not increase in 0 ppm like in vtc1 and adg1 (Table 1), while the ATP content slightly decreased, demonstrating a mutation-specific difference in regulation of the adenylate concentration in parallel to the ATP/ADP ratio (Table 1).

Assimilatory force and the reduction state of the NADP system

The assimilatory force FA, i.e. the product of the phosphorylation potential and the redox ratio of the NADP system, was derived from the measured 3-PGA and DHAP contents (Dietz & Heber 1989). At high CO2, the corresponding ratios were little increased in wt (175%), adg1 (155%) and pgr1 (123%), but strongly elevated in vtc1 (366%) compared to the values at onset of fumigation (Table 1). The calculated reduction states of NADP at high CO2 were increased in wt (241%), adg1 (152%) and pgr1 (153%), but hardly changed in vtc1 (108%) relative to ambient conditions (Table 1). The strong decline in the 3-PGA content in CO2-free air indicated an increased assimilatory force in low CO2 (wt: 685%; adg1: 511%; pgr1: 301%; vtc1: 326% relative to ambient air).
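The closed form of the assimilatory force is not retained in this text. A plausible reconstruction following Dietz & Heber, assuming near-equilibrium of the reactions connecting 3-PGA and DHAP (phosphoglycerate kinase, NADP-GAPDH and triose phosphate isomerase) so that the triose phosphate/3-PGA ratio reports on the combined energy and redox state of the stroma, is:

\[
F_A \;=\; \frac{[\mathrm{ATP}]}{[\mathrm{ADP}]\,[\mathrm{P_i}]}\cdot\frac{[\mathrm{NADPH}]\,[\mathrm{H^+}]}{[\mathrm{NADP^+}]}\;\propto\;\frac{[\mathrm{DHAP}]}{[\text{3-PGA}]}
\]

On this reading, a decline of 3-PGA relative to DHAP, as observed in CO2-free air, translates directly into the increased assimilatory force reported above.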
3-PGA contents slightly increased in 350 ppm CO2 during the 6 h of treatment, remained unchanged in wt in 2000 ppm CO2 and decreased in the mutants.

Antioxidant protection

Antioxidant enzymes and low-molecular-weight redox metabolites constitute the antioxidant defence system of the plants. Modifications of antioxidant enzyme activities often reflect general changes in the redox status and in ROS generation of the tissue, or compensatory responses to specific redox changes [26].

Ascorbate contents and redox states

At the beginning of the fumigation period, the ascorbate content was lowest in vtc1 with 1.44 ± 0.31 μmol Asc/g FW, reflecting the mutational defect in ascorbate biosynthesis [17] (Fig. 1). The highest ascorbate levels were observed in wt, and intermediate contents in adg1 and pgr1. In wt the ascorbate content increased approximately 1.3-fold in ambient air, but hardly changed in low and high CO2. In adg1, starting with a lower content than wt, ascorbate levels increased to high values at 350 ppm and were unchanged in 0 and 2000 ppm CO2. In vtc1, with its low ascorbate contents due to the mutation in one of the main ascorbate biosynthetic enzymes, ascorbate increased 1.3-fold in CO2-free air and 2.1- and 1.8-fold in 350 and 2000 ppm CO2. It maximally reached 50% of the wt level in 350 ppm CO2. In pgr1, the ascorbate content increased to 4.5 ± 0.75 μmol Asc/g FW in 0 ppm and to similar levels around 6 μmol Asc/g FW in 350 ppm and 2000 ppm CO2. The CO2 availability affected the redox state of ascorbate to a minor extent. From an almost fully reduced level at the beginning of the fumigation period, the mean values after 6 h fumigation indicate slightly higher oxidation levels (72-91%) in 0 ppm and 350 ppm CO2 than in 2000 ppm CO2 (Figure 1).

Glutathione contents and redox states

Compensating for the low ascorbate content, the glutathione content was highest in vtc1 at the beginning of the fumigation period. The lowest glutathione contents were observed in adg1, while the glutathione content was similar to wt in pgr1. During the 6 h fumigation the glutathione content increased in all samples to similar levels and redox states (Figure 1). Although not significantly different, the mean values indicate a trend towards slightly higher reduction at the end of the fumigation period (Figure 1).

Activities of antioxidant enzymes

Compared to wt, the three mutants revealed increased ascorbate peroxidase activities in high and low CO2. The APx activity of adg1 was twice that observed in wt at ambient CO2 concentrations (Figure 1). In parallel, SOD activity was 1.5-fold induced in adg1. As for APx, the CO2 availability scarcely affected the SOD activity of vtc1. SOD and APx activities were only slightly higher in pgr1 than in wt, while that mutation increased glutathione reductase activity in 0 and 350 ppm CO2 (Figure 1). Surprisingly, SOD and GR activities were lower in 0 ppm than in 350 ppm CO2 (Figure 1).

Contents of soluble and hydrolysable sugars

In wt, pgr1 and vtc1 the CO2 availability barely changed the availability of soluble sugars, while in adg1, due to the limitation in chloroplast starch biosynthesis, the sugar concentration was higher in 350 ppm CO2 than in wt. In parallel, hydrolysable sugars were generally very low in adg1 and did not increase in high CO2. pgr1 and vtc1 had similar and slightly higher soluble sugar levels than wt at all CO2 concentrations applied.
In contrast, the concentration of hydrolysable sugars was lower in vtc1 than in pgr1, which may reflect the effect of limited GDP-mannose pyrophosphorylase activity on cell wall biosynthesis [27]. Generally, in wt, vtc1 and pgr1 the sugar levels were only elevated at ambient CO2, but not in saturating CO2, demonstrating that in high CO2, in which Calvin cycle activity should be stimulated, carbon assimilation was limited or carbohydrate consumption activated, leading to similar carbohydrate pool sizes as in 0 ppm CO2 (Figure 2A and 2B).

ABA levels in ambient air

Consistent with previous observations [14], the ABA content was increased 2.1-fold in vtc1 compared to wt. adg1 and pgr1 showed insignificant increases in the range of 1.2- to 1.3-fold, respectively (Figure 2C).

Redox stabilization and antioxidant defense in response to differing CO2 availability

Photosynthetic performance of wild-type and mutants

To test wt and mutants for limitations in photosynthetic electron transport, in the final 2 hours of the fumigation period the photosynthetic response of the plants to a 3.6-fold increase in the light intensity from 80 to 285 μmol quanta m-2 s-1 was tested by monitoring chlorophyll-a fluorescence parameters (Table 2). In ambient air, the quantum yield of photosystem II (ΦPSII, (FM' - FS)/FM') was similar at standard light conditions in wt, adg1 and vtc1 and slightly higher in pgr1. In response to increased light, it decreased most in pgr1, with a steady-state level of 0.31. In vtc1, the steady-state ΦPSII was highest, while in wt and adg1 it was slightly lower. In 0 ppm, ΦPSII strongly decreased in all plants prior to the increase in light intensity (Table 2). In response to the 3.6-fold increase in light intensity, ΦPSII further decreased to levels between 0.032 (adg1) and 0.093 (wt), indicating severe reduction of photosynthetic electron transport efficiency. At 2000 ppm CO2 (Table 2), ΦPSII of all lines was highest at standard light conditions (FV/FM between 0.663 and 0.734) and dropped in response to increased light to levels around 0.5 in wt, adg1 and vtc1 and as low as 0.35 in pgr1.

Transcript level regulation in response to high and low CO2

Semi-quantitative RT-PCR analyses, at least in triplicate, were performed to assess the transcript regulation of selected genes in leaves 6 h after the beginning of fumigation with the respective CO2 concentration (Fig. 3). Compared to wt, in 350 ppm CO2 the transcript levels of the plastidic genes psaA and psbA, which encode core subunits of photoreaction centre I and II, respectively, and that of the nuclear encoded small subunit of RuBisCO were not significantly changed in the mutants adg1, pgr1 and vtc1, reflecting acclimation. The transcript level for ferritin-1, which has been suggested as a marker gene for hydrogen peroxide [28], decreased in all mutants. In contrast, transcripts for Bap1, which is a marker gene for singlet oxygen signalling [28], were specifically increased in the ascorbate-deficient mutant vtc1. The transcript levels of the three tested nuclear encoded chloroplast antioxidant enzymes Csd2, sAPx and 2CPA showed mutant-specific patterns. Csd2 and sAPx transcript levels were wt-like in vtc1, increased in adg1 and decreased in pgr1, while the 2CPA transcript levels were specifically decreased in adg1. Stp1 transcript amounts, which are suppressed by the cellular carbohydrate availability [29], were doubled in pgr1 and increased 3.5-fold in vtc1, but were unchanged in adg1.
In adg1 and pgr1, like in wt, the transcript levels of the two plastome-encoded genes psaA and psbA and the nuclear encoded RbcS, Stp1, Bap1 and Ferritin-1 increased in response to 0 and 2000 ppm. Fig. 3 gives the transcript modifications in response to depleting or saturating CO2 for wt and mutants, each normalized to the respective level in 350 ppm. In vtc1, the response of Bap1, which was already strongly increased at 350 ppm, was weakest. In vtc1 treated with 2000 ppm CO2, psbA transcripts were hardly increased and psaA transcripts were even slightly decreased, indicating a specific response pattern of vtc1 to high CO2. Again the three transcripts for nuclear encoded chloroplast antioxidant enzymes Csd2, sAPx and 2CPA showed specific response patterns. Csd2 hardly responded to changes in the CO2 availability. Only in pgr1, where the transcript levels were decreased at 350 ppm CO2, an increase in 0 and 2000 ppm CO2 was observed. sAPx transcripts, which were decreased in wt Arabidopsis in response to increased and also decreased CO2 availability, strongly decreased in adg1 treated with 2000 ppm CO2 and increased in pgr1 at 0 ppm CO2. In vtc1, the transcript level responded wt-like in 0 ppm CO2 and atypically increased in 2000 ppm CO2. 2CPA transcripts showed the previously reported CO2 dependency [30]. The regulation amplitude increased in adg1. The high-CO2 response was abolished in pgr1 and reversed to an increased transcript level in 2000 ppm CO2 in vtc1.

Discussion

Photosynthetic activity and metabolism depend on the stoichiometric assembly and regulated interaction of nuclear and chloroplast encoded proteins [31]. In addition, changing environmental conditions are continuously sensed and used to adjust the photosynthetic apparatus for balanced supply of energy, reductive power and assimilates. The basic mechanism of regulation involves coordination of gene expression. Although various studies on this topic have identified candidate signals, the complexity of interaction and the multiplicity of signals are far from being understood. Here, a set of biochemical and physiological data and the transcripts were analyzed upon variation of the CO2 availability in order to tentatively identify potential signalling dependencies (Table 3). Approaching CO2 saturation releases the electron pressure within the photosynthetic electron transport chain, increases the acceptor availability at PSI and decreases photorespiration intensity. High quantum yields of PSII (Table 2) indicated efficient electron consumption. Despite metabolic activation, the cellular reduction state of NADP(H) increased in wt under saturating CO2 (Table 1). In parallel, the ATP availability decreased (Table 1) [6].

Regulation of ROS-responsive genes

Six hours of illumination in low, ambient or high CO2 elicited significant changes in transcript abundance. The increase of ferritin-1 and Bap1 transcript amounts in 0 and 2000 ppm CO2 indicates redox imbalances and stimulation of ROS signalling in high as well as low CO2 [28]. This is surprising since relaxation of electron pressure should be maximal at saturating CO2, with concomitantly low rates of ROS generation. However, a high activation state of the Calvin cycle needed for efficient carbon fixation at saturating CO2 depends on a highly reduced thioredoxin system that in turn activates the redox-regulated enzymes [32].
The increase of ferritin-1 and Bap1 transcript levels suggests that regulated electron drainage maintains sufficient electron pressure and is involved in the up-regulation of the ROS-related marker genes. The CO2-dependent response was altered in the mutant genetic backgrounds. In adg1, the Bap1 transcript levels were less induced in 0 ppm and more in 2000 ppm (Figure 3), demonstrating that limitations in chloroplast carbohydrate storage affect the responsiveness of singlet oxygen signalling. It is tempting to assume that in adg1 increased APx and SOD activities (Figure 1) antagonized Bap1 induction in low CO2 (Figure 3) due to higher antioxidant protection, while the high transcript accumulation in 2000 ppm CO2 results from carbohydrate inhibition of photosynthetic electron transport due to insufficient capacities for starch biosynthesis. Transcript amount co-regulation analysis shows, among all genes tested, the strongest correlation of Bap1 with the ascorbate content (K = -0.74; Table 3). Consistent with the hypothesis by op den Camp et al. [28] that Bap1 is regulated by singlet oxygen, the antagonistic effect of ascorbate availability and the negative correlation with the activity of antioxidant enzymes support the conclusion that the overall cellular antioxidant capacity controls Bap1 induction. Because the total glutathione content was significantly decreased in adg1 in 350 ppm at the beginning and end of the fumigation period (Figure 1A), a special regulatory function in Bap1 regulation is indicated for the ascorbate-specific components of the Halliwell-Asada cycle [33]. In contrast, for ferritin-1, which is supposed to be induced specifically by H2O2 [28], no such strict correlation was observed with the availability of low-molecular-weight antioxidants or the activity of antioxidant enzymes. The highest correlation was observed with the availability of hydrolysable sugars (K = -0.62; Table 3), suggesting regulation by carbohydrate metabolism. A similar negative correlation with the contents of hydrolysable sugars was observed for Stp1, which encodes a plasma-membrane monosaccharide transporter [29]. Even more strongly than for ferritin-1 and Bap1 transcript abundance, a positive correlation with the glutathione content and the redox state of the glutathione pool was observed for Stp1 (Table 3), suggesting glutathione-dependent regulation. However, excluding the adg1 data from the correlation analysis demonstrated that, at least for Bap1 and Stp1, the high correlation with the glutathione data results from the adg1-specific regulation of the glutathione pool and, therefore, may also result from disturbed chloroplast carbohydrate metabolism.

Figure 3. Semi-quantitative RT-PCR data of selected plastome and nuclear encoded genes. The PCR products were separated on ethidium bromide-containing agarose gels, documented electronically in UV light and analyzed densitometrically. The values are given as the logarithm of the induction factor, which is calculated from the ratio of the transcript levels of mutants and wt at ambient 350 ppm CO2 (hatched bars). The black bars represent logarithmic values calculated after fumigation with 0 or 2000 ppm CO2, respectively, in relation to the corresponding control at 350 ppm CO2 for wt, adg1, pgr1 and vtc1. The data are means of n = 3; (♦) indicates that the SD was less than 30% of the mean value.
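As a concrete illustration of the normalization described in the Figure 3 legend, a minimal Python sketch is given below; the band intensities and line names are hypothetical placeholders, not measured values:

```python
import math

def log_induction_factor(treated: float, control: float) -> float:
    """Log10 of the induction factor, i.e. the ratio of densitometric
    band intensities of a sample to its reference."""
    return math.log10(treated / control)

# Hypothetical densitometric band intensities (arbitrary units).
wt_350, pgr1_350, pgr1_0ppm = 1.00, 0.55, 0.80

# Hatched bars: mutant relative to wt, both at ambient 350 ppm CO2.
print(log_induction_factor(pgr1_350, wt_350))    # < 0: transcript decreased

# Black bars: 0 (or 2000) ppm CO2 relative to the line's own 350 ppm control.
print(log_induction_factor(pgr1_0ppm, pgr1_350)) # > 0: transcript induced
```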
Ascorbate-dependent regulation

In a genome-wide transcript analysis of vtc1 plants, Pastori et al. [14] identified 171 genes with altered expression, among which defence genes constituted a significant subgroup. Here, in the steady state, in 350 ppm CO2, to which the plants were adapted, the transcript levels of 2CPA, Csd2 and sAPx were balanced. The transcript level of ferritin-1 was even decreased. The transcript level of Bap1 was increased 3.73-fold and constitutively high upon variation of the CO2 availability, demonstrating that the transcript abundance is dominantly regulated by the mutational defect in ascorbate biosynthesis. The transcript levels of Bap1, Fer1, RbcS and Stp1, although to different extents, correlated negatively with the ascorbate content and positively with the reduction state. Apparently, metabolically induced alterations in ascorbate levels caused similar transcriptional changes as genetic mutations in ascorbate biosynthesis [14]. In our analysis the correlation between ascorbate and defence gene expression was stronger when the data set of the vtc1 mutant was not included (K = -0.67), demonstrating that in vtc1 hardening responses may mask signalling induced by the variation of CO2 availability. The strongest relationship between the reduction state of ascorbate and transcript level regulation was observed for 2CPA. In previous experiments strong suppression of 2CPA transcription was observed upon ascorbate application [29,34,35]. Here, responses upon internal variation of the ascorbate content were analyzed. A strong negative correlation (K = -0.86; Table 3) with the reduction state of ascorbate, but absence of correlation with the ascorbate content (K = 0.23; Table 3), excludes sensing of the ascorbate availability. It is postulated that either specifically dehydroascorbate or, more likely, the electron consumption in dehydroascorbate reduction regulates 2CPA transcript abundance.

Coupling of transcript abundance regulation

A set of four genes, i.e. Bap1, psaA, psbA and Stp1, showed a similar transcript pattern in response to the seven metabolic parameters, i.e. glutathione contents and reduction state, 3-PGA and DHAP contents, the calculated NADPH reduction state and assimilatory force, and the hydrolysable sugars (Table 3). It should be noted that transcript levels of psaA and psbA changed in parallel in response to the imposed metabolic and mutational strains. This contrasts with the anti-parallel responses of psaAB and psbA observed upon transfer to photosystem I- and II-specific light regimes previously described by Pfannschmidt et al. [11]. It is concluded that a variation between 0 and 2000 ppm CO2 in the mutant background elicits more severe changes in metabolism and signalling than altering the light quality from red to far red at very low photon flux density [11]. However, the photosystem I transcript psaA decreased relative to the photosystem II transcript psbA when the electron pressure was reduced by increasing the CO2 availability (Figure 3), e.g. the psaA/psbA ratio changed from 1.07 (0 ppm) to 1 (350 ppm) to 0.88 (2000 ppm) in wt, indicating a stronger transcription of the gene for the PS-I reaction centre protein in 0 ppm CO2 and of those encoding PS-II reaction centre proteins in 2000 ppm CO2. In adg1, the gradual response was unaffected; however, the transcript abundance of psbA was generally higher than that of psaA (Figure 3).
In pgr1 the relative transcript abundance normalized to wt at 350 ppm was also higher under all three CO2 conditions, suggesting that the higher psbA transcript levels were caused by photoinhibitory high carbohydrate availability or a limitation in the Rieske activity. It is tempting to suggest that the signal is transmitted by PQ-dependent redox signals. However, in vtc1 more psaA than psbA was observed at 350 ppm CO2, a balanced psaA/psbA ratio in 0 ppm CO2 and an inverted ratio in 2000 ppm, demonstrating that the ascorbate availability affects the stoichiometry of transcripts for the photoreaction centres upon variation of the CO2 availability. After 6 h fumigation with 350 ppm CO2, the steady-state quantum yield of PS-II ((FM' - FS)/FM') was wt-like (Table 2), while the quantum yield was adjusted to 1.23-fold higher levels within 4.5 min of illumination upon doubling of the light intensity (Table 2). Against the background of decreased ascorbate availability (Figure 2A), this demonstrates that the photosynthetic electron transport chain was otherwise protected. Chlorophyll fluorescence analysis showed that the acclimation was insufficient to protect the photosynthetic membrane in low CO2. The low quantum yield of PS-II (Table 2) demonstrates even stronger photosynthetic impairment than in wt. That the regulation of the quantum yield of PS-II does not correlate with the psaA/psbA ratios supports the hypothesis that the regulation of the transcript abundance of the photoreaction centre proteins is more dependent on ascorbate-specific signals than on the redox state of the PQ-pool. The PQ-dependent long-term acclimation response postulated by Pfannschmidt et al. [11] appears further subordinate to signals linked to the metabolic state, i.e. the PGA and DHAP concentrations, the NADPH/NADP+ ratio, the assimilatory force and hydrolysable sugars (Table 3). In adg1 a perfect positive correlation (K = 1) was observed between the concentration of soluble sugars and the psbA transcript levels, whereas the other lines showed a high negative correlation (wt: K = -1; pgr1: K = -0.96; vtc1: K = -1), highlighting the importance of carbohydrate-dependent signals in psaA and psbA regulation.

Correlation with the adenylate status

In animals, the energy status, sensed for instance via insulin-like growth factor 1 or by AMP-dependent kinases, plays an important role in regulation of gene expression [36]. However, here, except for a weak correlation for 2CPA (K = 0.41), no correlations between the adenylate status and transcript levels were indicated. Due to their photoautotrophic nature, plants rarely encounter energy deprivation. Furthermore, the chloroplast and cellular adenylate status directly coordinates metabolic pathways via feed-back and feed-forward mechanisms [37]. The lack of strong energy-linked regulation demonstrates that the adenylate phosphorylation state may not be a major signal directly linked to the regulation of nuclear gene expression in the context of photosynthesis. However, the analysis of pgr1 showed a mutant-specific regulation of Csd2, sAPx and 2CPA upon altered CO2 availability (Figure 3). Because pgr1 is unable to acidify the thylakoid lumen below pH 6 due to a mutation in the Rieske protein [18], the ATP/ADP ratio was very low in the morning and generally decreased irrespective of the CO2 concentration (Table 1).
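The co-regulation coefficients K quoted here and compiled in Table 3 can in principle be reproduced as pairwise correlations between a transcript series and a metabolic series across lines and CO2 treatments. The text does not state the exact estimator; the sketch below assumes a Pearson coefficient, and all values are illustrative rather than data from Table 3:

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical paired observations across CO2 treatments:
transcript = [1.2, 1.0, 0.7, 0.9]   # e.g. relative psbA transcript level
metabolite = [0.8, 1.0, 1.4, 1.1]   # e.g. soluble sugar content
print(f"K = {pearson(transcript, metabolite):.2f}")
```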
The concentrations of most metabolites were not significantly changed in response to altered CO2 availability compared to wt, indicating efficient metabolic compensation (Figure 3). The transcript levels of Csd2, sAPx and 2CPA were regulated in a mutant-specific manner, while RbcS, Bap1, Fer1 and Stp1 responded wt-like, demonstrating that ROS- and carbohydrate-signalling pathways were unaffected [28,29,38]. Upon the different treatments of pgr1 the transcript abundance of sAPx highly correlated with the calculated reduction state of NADPH (K = 0.99), while no correlation was observed in the other plant lines. It is therefore concluded that the mutation in the Rieske protein limits the fine control of sAPx expression and makes it more dependent on stromal redox signals. The difference in sAPx regulation between pgr1 and wt also demonstrates that in wt the Rieske protein influences sAPx expression. Excluding ROS signalling, because the Bap1 control was wt-like, either the redox state of the PQ pool or thylakoid acidification/the adenylate status may modulate sAPx transcript abundance. The Csd2 transcript levels positively correlated with APx activity in all lines under all treatments (Table 3). Concomitantly, in pgr1, an almost perfect correlation was also observed with the redox state of NADPH. Compared to sAPx, the Csd2 transcript level was regulated with higher amplitudes. sAPx and Csd2 transcripts encode two prominent chloroplast antioxidant enzymes, which act successively in superoxide and H2O2 detoxification in the stromal part of the Halliwell-Asada cycle (the ascorbate-dependent water-water cycle) [33]. Coordinated regulation of transcript abundance points to expressional co-regulation.

Conclusion

In the cell, metabolite, redox and energy signals are tightly linked, which makes differentiation of signalling cascades difficult. This study demonstrates that comparison of mutants with specific limitations in the coordination of plant acclimation at least transiently uncouples signalling branches. Comparison of adg1, pgr1 and vtc1 with Arabidopsis wild-type plants showed coordinated expression of Bap1, psaA, psbA and Stp1, which have been discussed previously to respond specifically to singlet oxygen [28], the redox state of the PQ pool [11] and monosaccharide availability [29], respectively. Like Ferritin-1, Bap1 and Stp1 correlated strongest with the ascorbate contents, while for psaA and psbA a stronger link to the assimilatory force and the NADPH/NADP+ ratio was indicated. Correlation of 2CPA expression, whose transcription is controlled by the acceptor availability at photosystem I in wt [30], with the redox state of the ascorbate pool further strengthens the links between the antioxidant system and photosynthetic electron transport and, more specifically, chloroplast-to-nucleus signalling. It is postulated that during evolution a stabilized network has evolved which links photosynthetic metabolism to nuclear gene expression. Mutants might be well balanced under standard conditions, but application of environmental changes leads to an altered acclimation in comparison to wt, which allows tentative differentiation of signalling cascades induced in parallel.

Metabolite analysis

Total and reduced ascorbate was quantified spectrophotometrically according to [39] from plant material frozen in liquid nitrogen by recording the decrease of absorption at 265 nm following addition of ascorbate oxidase.
Glutathione contents were determined fluorimetrically after derivatization with monobromobimane and HPLC separation on a reverse-phase Hypersil BDS-C15 5 μm column from tissue extracted in 0.1 M HCl and 5 mM diethylenetriamine pentaacetic acid [40]. 3-PGA and DHAP contents of perchloric acid extracts were quantified according to Dietz & Heber [41], and ATP and ADP with the firefly enzyme according to the luminometric method described by Kaiser & Urbach [42]. Assimilatory force and NADPH reduction state were calculated as described in Dietz & Heber [24]. Reducible hydrolysable and soluble sugars were determined according to Yemm & Willis [43] with anthrone reagent. ABA contents were quantified according to Weiler [44] from freeze-dried leaf material.

Chlorophyll-a fluorescence measurements

Between 4-6 h after onset of the CO2 fumigation, the response of the mutants to an increase in light intensity to 285 μmol quanta m-2 s-1 was monitored for 4 min using a Mini-PAM fluorometer (Walz, Effeltrich, Germany). 30 s before and during the illumination with 285 μmol quanta m-2 s-1, the fluorescence parameters FS and FM' [45] were determined every 30 s with saturating light pulses (1 s; >3000 μmol quanta m-2 s-1). The quantum yield of PS-II (ΦPSII) was calculated as (FM' - FS)/FM'.
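For a worked example of this calculation (with illustrative fluorescence values rather than data from Table 2):

```python
def phi_psii(fm_prime: float, fs: float) -> float:
    """Quantum yield of PS-II from light-adapted maximal (Fm') and
    steady-state (Fs) chlorophyll-a fluorescence."""
    return (fm_prime - fs) / fm_prime

# Hypothetical readings: Fm' = 1.5 and Fs = 1.0 give a quantum yield of ~0.33.
print(round(phi_psii(1.5, 1.0), 3))  # 0.333
```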
The parallelism motifs of genomic data analysis

Genomic datasets are growing dramatically as the cost of sequencing continues to decline and small sequencing devices become available. Enormous community databases store and share these data with the research community, but some of these genomic data analysis problems require large-scale computational platforms to meet both the memory and computational requirements. These applications differ from the scientific simulations that dominate the workload on high-end parallel systems today and place different requirements on programming support, software libraries and parallel architectural design. For example, they involve irregular communication patterns such as asynchronous updates to shared data structures. We consider several problems in high-performance genomics analysis, including alignment, profiling, clustering and assembly for both single genomes and metagenomes. We identify some of the common computational patterns or 'motifs' that help inform parallelization strategies and compare our motifs to some of the established lists, arguing that at least two key patterns, sorting and hashing, are missing. This article is part of a discussion meeting issue 'Numerical algorithms for high-performance computational science'.

INTRODUCTION

The future of scientific computing will be increasingly data intensive due to the growth of data from sequencers, telescopes, microscopes, light sources, particle detectors, and embedded environmental sensors. Open data policies for scientific research are leading to large community data sets of both raw and derived data. Some of the resulting data analysis problems involve massive numbers of independent computations, while others require irregular computations in which the objective of the analysis is to discover the underlying structure of the data. Many genomics problems fall into this latter category, where the structure and relationship between different sequences or entire genomes is unknown. These problems require data structures like hash tables, histograms, graphs, and very sparse unstructured matrices. They have dynamic sources of load imbalance and little locality, leading to unpredictable communication that is both irregular in space, with arbitrary connections between processors, and irregular in time, where one process may need data on another at any point in time.
In contrast, the applications that dominate HPC workloads are scientific simulations that have a natural degree of locality from the underlying physical laws. These simulations often lend themselves to domain decomposition, where the physical domain is partitioned across processors, and while communication may be both global and to nearest neighbors, the presence of timesteps and iterative methods leads to natural phases of communication and computation separated by global synchronization. Figure 1 shows a notional spectrum of simulation and analysis problems and the level of irregularity, which tends to correlate with the difficulty of parallelization. On the left are independent parallel jobs, whether from simulation or analysis. These are easily parallelized on a cluster or cloud platform using programming systems like Spark [1], or even geographically distributed computing as in the grid [2]. Simulation problems with physical structure fall in the middle two categories, depending on whether they have global patterns of communication and synchronization, which often stress the global network bandwidth but are simpler to reason about, or involve pairwise exchange of data using synchronous or asynchronous two-sided message passing. These boundaries are neither strict nor precise, with many applications having a mixture of styles, and deep learning landing alongside simulation. However, the spectrum highlights that the genomics applications will provide an interesting perspective for the design of parallel hardware and software systems. In addition to summarizing parallelization techniques for genomics analysis, we identify a relatively small set of computational motifs that appear multiple times across applications, illustrated in Figure 2. While we do not presume that these motifs are sufficient for all such applications, we believe they can substantively inform the design of libraries, programming systems, benchmarks, and hardware. Following a brief overview of the ExaBiome project in Section 2, Section 3 gives an overview of several genomics data analysis problems and the computational motifs that lead to a particular parallelization strategy. Section 4 summarizes our genomics motifs and compares them to other lists of motifs, showing strong similarity to another list for data analysis problems more broadly, although we argue that two key motifs, sorting and hashing, are missing. Section 5 describes how the motifs lead to different types of programming support and hardware requirements, and Section 6 makes some concluding remarks on the value and limitations of our motifs.
EXABIOME OVERVIEW

The ExaBiome project is developing scalable parallel tools to analyze microbial species, such as bacteria, fungi or viruses, which typically live in communities with hundreds of different species mixed together. The genome-level analysis of these communities, metagenomics, is key to understanding the makeup of these communities, how they change based on external factors like temperature, moisture, or chemicals, to understanding their functional behavior, and to comparing them across communities. An estimated 99% of microbial species are not culturable in isolation, making metagenome analysis the preferred technique for understanding these communities. The human microbiome has been linked to a wide range of health issues including diabetes, cancer and mental health, while environmental microbiomes can have both positive and negative impacts on everything from oxygen production and remediation of chemical spills to formation of toxic algal blooms.

ExaBiome, which is part of the Exascale Computing Project [3], is developing HPC solutions for problems that were predominantly computed on shared memory or serial machines, and taking advantage of processor accelerators such as Graphics Processing Units (GPUs) that are key to future exascale system designs. The project is developing assemblers for both short and long read sequencing data (MetaHipMer and diBELLA), taking the fragmented output of sequencers and constructing long sequences from which genes, corresponding proteins, and taxonomic information can be derived. Working across large protein data sets, PISA and HipMCL extract clusters of related proteins that are useful in understanding ancestry and functional behavior. The team is also exploring deep learning techniques to relate proteins to 3D structure and function, and a set of methods to compute signatures for metagenomes that can be used for comparisons across microbial samples in space or time and for database search.

The project has already demonstrated unprecedented scales in terms of data set size and performance, with the goal of growing the data set capability by more than an order of magnitude. The largest metagenome assembly to date used wetland soil samples from a time series data set across several physical sites in the Twitchell Wetland in the San Francisco Bay-Delta. These samples consisted of 2.6 terabytes of data comprising 7.5 billion reads, which are the DNA fragments output from the sequencers, and the assembly computation required 5.1 hours on 1024 nodes of the Cori supercomputer at NERSC. We believe it is the largest assembly of any kind done as a single co-assembled computation, i.e., without pre-filtering the data in some way or assembling pieces of the data separately. Separate analysis shows the value of such co-assemblies, especially in extracting information about the low-abundance species in a sample. The largest protein clustering computation used assembled metagenomes and metatranscriptomes from two community data sets (IMG and NCBI). This unprecedentedly large data set contained 383 million proteins and 37 billion connections, and required about one hour on 729 nodes of the Summit system at OLCF.
A SAMPLING OF GENOMIC ANALYSES

We describe at a high level some of the algorithms and parallelization approaches used in genomic data analysis, selecting a set of problems that represent a diverse set of computational patterns and are prevalent across multiple applications. Our primary focus is on distributed memory parallelization techniques, so we describe the data distribution and communication approaches as well as any load balancing issues, or limitations to scaling when they exist.

K-mer Analysis

Given a set of variable-length strings, a common approach to analysing those strings is to break them into fixed-length substrings called k-mers. For example, the string CCTAAAGCCTA contains the following list of 4-mers: CCTA, CTAA, TAAA, AAAG, AAGC, AGCC, GCCT, CCTA. Several bioinformatics analyses involve counting the number of occurrences of each distinct k-mer, e.g., to filter low-frequency k-mers that are likely errors, to find high-frequency k-mers that indicate repetitive regions of the genome, or to use the k-mer histogram as a signature for a set of genomics data. K-mers also serve as seeds in determining whether two DNA fragments are likely to align with one another and may also be used on protein data with its 21-character amino acid alphabet in addition to DNA. The most common approach to k-mer counting is to build a hash table of k-mers, possibly using a Bloom filter, an approximate and space-efficient data structure that answers queries about set membership, to eliminate singletons. If the k-mer length is small, a direct map may be practical, and sorting is also possible, although to keep memory use in check, k-mers are generated incrementally and identical ones merged while they are sorted to avoid having all of them in memory at once. Manekar and Sathe give an overview of the various approaches and benchmark some of the popular shared memory tools [4]. Distributed memory parallelism for k-mer counting becomes increasingly important to address large sets of environmental microbial genomes and to handle cross-genome comparisons. Raw sequencing data may be several times larger than the final genome, e.g., the data set may be sequenced repeatedly, giving it a sequence depth of 10-50x, to ensure that every location is sequenced multiple times so that sequencing errors can be eliminated. Large environmental data sets may therefore run to multiple terabytes, resulting in input data that does not fit on a single large-memory compute node. The list of k-mers, prior to removing duplicates, requires storage nearly k times larger than the input, and given that typical k-mer lengths may run from 10-50 characters, the raw k-mers may not fit even in the aggregate memory of a multi-node system. The ExaBiome project uses a hash-based approach to k-mer counting, with a Bloom filter used to avoid storing most of the singleton k-mers. The resulting hash table stores the count of each unique k-mer that occurs more than once in the original data.
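The serial core of this scheme can be sketched in a few lines of Python; here a plain set stands in for the Bloom filter (a real Bloom filter answers the same first-seen/seen-before question probabilistically in a few bits per k-mer), and a dict stands in for what is a distributed hash table in the real codes:

```python
from collections import defaultdict

def kmers(seq: str, k: int):
    """Yield all overlapping k-mers of a sequence."""
    for i in range(len(seq) - k + 1):
        yield seq[i:i + k]

def count_kmers(reads, k):
    """Count k-mers, storing counts only for those seen more than once."""
    seen = set()                 # stand-in for a Bloom filter
    counts = defaultdict(int)    # distributed hash table in the real codes
    for read in reads:
        for km in kmers(read, k):
            if km in seen:
                counts[km] += 1  # second and later occurrences
            else:
                seen.add(km)     # first occurrence: remember, don't store
    # add back the single occurrence consumed by the filter
    return {km: c + 1 for km, c in counts.items()}

print(list(kmers("CCTAAAGCCTA", 4)))   # the 4-mer example from the text
print(count_kmers(["CCTAAAGCCTA"], 4)) # {'CCTA': 2}
```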
In general, there is no communication locality when building the distributed hash table, so on p processing nodes, each k-mer will be communicated remotely with probability (p-1)/p. This creates an irregular many-to-many communication pattern without any predetermined structure and without natural points for bulk-synchronous communication. A Bloom filter is useful to avoid storing singleton k-mers, but requires the same irregular many-to-many communication as the hash table: all k-mers are communicated, but the Bloom filter requires only a few bits for each unique k-mer. A good hash function can ensure even load balance of the unique k-mers, but significant communication load imbalance still results when the frequency distribution is skewed, as is often the case in real data sets where there are some very high-frequency k-mers. Local aggregation of such "heavy hitters" can reduce communication bottlenecks for high-frequency k-mers, but the effectiveness depends on having a small number of such k-mers so that a local table can collect and combine them. Within the ExaBiome project we have multiple instances of k-mer analysis, which include a basic count/histogram operation and indexing to collect the information about the position of each k-mer in the set of input sequences (reads). Memory utilization is a key factor in the design, and to avoid having the full list of k-mers (with duplicates) in memory at any given point in time, one version of the code performs all-to-all exchanges in phases [5]-[8] and counts k-mers for use in the short read assembly, and another version keeps indexing information for use in computing long read overlaps [9]. Other distributed memory k-mer analysis tools include Bloomfish, which uses a similar MPI all-to-all collective approach but has only a single phase [10], thus limiting data set size due to memory constraints, and Kmerind, which has demonstrated scaling to over 20 TB data sets by using multiple phases and various memory-saving optimizations [11]. The most recent ExaBiome k-mer counting tool is entirely without global synchronization and uses one-sided communication to continually send k-mers while combining and storing local ones. Not only does it avoid global synchronization, but it also hides latency by using non-blocking communication.

Pairwise Alignment

Alignment is performed on both DNA and proteins to find approximate matches between strings, allowing for a limited number of insertions, deletions, and substitutions. Pairwise alignment is typically done with some form of dynamic programming, i.e., Needleman-Wunsch [12] for the best overall alignment or Smith-Waterman [13] for the best local substring alignment. Both algorithms find an optimal match based on a given scoring scheme that rewards matches and penalizes mismatches, insertions, and deletions. The algorithms operate by filling in an n × m scoring matrix based on strings of length n and m and compute the optimal score at each position, with an overall sequential cost of O(nm). The resulting dependence pattern leads to parallelism along an anti-diagonal wavefront. A popular heuristic algorithm, called X-drop [14], searches only for high-quality alignments by tracking the running highest score and not exploring neighborhoods of cells in the matrix whose score drops by more than a given threshold below the maximum. It gets its performance benefits from dynamically resizing the anti-diagonal wavefront (i.e., its band), thereby reducing the search space, and may stop early when there is no high-quality match.
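A minimal Python sketch of the Smith-Waterman fill illustrates the O(nm) cost and the data dependencies behind the anti-diagonal wavefront; it uses linear gap penalties and illustrative score values, whereas production aligners add affine gaps, banding/X-drop and SIMD batching:

```python
def smith_waterman(a: str, b: str, match=2, mismatch=-1, gap=-2) -> int:
    """Return the best local alignment score between strings a and b."""
    n, m = len(a), len(b)
    # (n+1) x (m+1) scoring matrix; the first row and column stay zero
    H = [[0] * (m + 1) for _ in range(n + 1)]
    best = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # each cell depends on its upper, left and upper-left neighbors,
            # so cells on the same anti-diagonal are independent
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("CCTAAAGCCTA", "CCTAGAGCCTA"))  # tolerates one mismatch
```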
Pairwise alignment appears throughout genomic data analysis because both errors in data from sequencers and variations in genomes across individuals lead to imperfect string matches. The ExaBiome project has multiple instances of alignment, which include aligning short reads to partially assembled sequence data (called contigs), aligning long reads to each other, and aligning proteins to each other. Typical lengths of DNA from sequencers run from 100-250 characters for short reads to over 10,000 characters for long-read technologies, while proteins are typically a few hundred characters long. Even if one is aligning against a full genome, e.g., the 3-billion-character reference human genome or a large database of genomes, the alignment will start from a predetermined location or seed, as described in the next section. At the scale of a few hundred to a few thousand characters, pairwise alignment is amenable to SIMD [15]-[17], multicore, GPU [18]-[21], and even FPGA [22,23] parallelism, and it can take advantage of narrow data types to represent the four nucleotides in DNA, the twenty-one amino acids in proteins, or the limited range of values in the scoring matrix. Recent work also shows how dynamic programming problems exhibit essentially linear speedups using the concept of rank convergence [24], in which the pairwise alignment is computed via a series of dense matrix multiplications on the tropical semiring, where scalar addition is replaced with the maximum operator and scalar multiplication becomes integer addition.

Alignment dominates the local on-node computation in ExaBiome applications, as well as in other genome analysis tools across scales. However, there is not sufficient work for distributed memory parallelism within a single pairwise alignment, and even GPU offload requires batch alignment, where a set of pairs is aligned as a single operation, to amortize the startup and data movement overhead.

All-to-All Alignment

Alignment is often done across a set of strings, such as alignment against a database of reference genomes or proteins, a set of patient genomes against a single (large) reference, or a set of reads from a sequencer against each other or against partially constructed genome fragments as part of genome assembly. The ExaBiome project performs all-to-all alignments as part of short read assembly (merAligner within MetaHipMer), in which case the input reads are aligned against all partially assembled contigs [25], and as the first step in long-read assembly, where reads are aligned against each other in BELLA and diBELLA [9,26].

The all-to-all computational pattern is familiar from n-body simulations, and, as in that case, computation on all O(n^2) pairs of strings/particles is prohibitively expensive. To tackle this, particle simulations rely on hierarchical tree-based approaches that exploit the physical layout of particles in space, which is not applicable to alignment. Instead, when aligning a set of sequences, one can pre-filter the pairs to find those that are likely to have a good alignment. Our approach therefore looks for sequences that share at least one short identical substring, e.g., a k-mer, which can also be used to seed the alignment. For example, to align a set S against another set T, store all k-mers from strings in T in a hash table and look up all the k-mers from strings in S to find matching pairs, starting each pairwise alignment from the position of the common k-mer.
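A minimal sketch of this seed-index scheme, assuming plain in-memory Python dictionaries in place of a distributed hash table:

from collections import defaultdict

def build_seed_index(T, k):
    """Map each k-mer to the (sequence id, offset) pairs where it occurs in T."""
    index = defaultdict(list)
    for tid, seq in enumerate(T):
        for off in range(len(seq) - k + 1):
            index[seq[off:off + k]].append((tid, off))
    return index

def candidate_pairs(S, index, k):
    """Yield (S id, S offset, T id, T offset) seeds for pairwise alignment."""
    for sid, seq in enumerate(S):
        for off in range(len(seq) - k + 1):
            for tid, toff in index.get(seq[off:off + k], ()):
                yield sid, off, tid, toff

index = build_seed_index(["CCTAAAGCCTA"], 4)
print(sorted(set(candidate_pairs(["GGCTAAAG"], index, 4))))
# [(0, 2, 0, 1), (0, 3, 0, 2), (0, 4, 0, 3)]: three shared 4-mers seed alignments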
In distributed memory, the k-mer hash table has an irregular many-to-many communication pattern familiar from k-mer counting, but each k-mer now retains the list of sequences containing that k-mer. The hash table may be viewed as a sparse k-mer×sequence matrix over the sequences of set T. To compute the set of sequences from S that have a matching k-mer, we can take either a linear algebra or a database hash-join view of the problem.

In the former case, we construct a k-mer×sequence matrix for each set, transpose one, and multiply them to obtain a sparse sequence×sequence matrix in which each nonzero at position (i, j) represents a pair (S_i, T_j) that shares a common k-mer. The sparse matrix primitive that performs this operation is known as SpGEMM, for Sparse GEneralized Matrix-Matrix multiplication [27]. It is generalized in the sense that the multiplication can operate on an arbitrary algebraic structure, also known as a semiring, and not just the real field. The single-node shared memory BELLA code uses this approach [26] to align a set of long reads to itself, so S = T. Both the input and output matrices are sparse in BELLA's case.

The second approach constructs the same k-mer×sequence table for T but does not explicitly compute the sequence×sequence matrix. Instead, as it computes the set of k-mers in S, it looks them up in T's table to find sequences in T with a common k-mer. The distributed memory diBELLA uses this approach in a bulk-synchronous series of many-to-many exchanges, while merAligner performs alignments on the fly as the read sequences are processed, typically fetching the contig that contains a matching k-mer from a remote processor. merAligner also caches these contigs, as there is enough likely reuse to save repeated communication of contigs.
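The linear-algebra view is easy to prototype with SciPy's sparse matrices as a single-node stand-in for distributed SpGEMM; over the ordinary (+, x) semiring, the product below counts shared k-mer occurrences for each candidate pair. Building a shared k-mer vocabulary up front is a toy simplification; distributed codes hash k-mers to owner processes instead.

import numpy as np
from scipy import sparse

def kmer_seq_matrix(seqs, kmer_ids, k):
    """Build a sparse k-mer x sequence matrix of occurrence counts."""
    rows, cols = [], []
    for j, s in enumerate(seqs):
        for off in range(len(s) - k + 1):
            rows.append(kmer_ids[s[off:off + k]])
            cols.append(j)
    data = np.ones(len(rows), dtype=np.int32)
    return sparse.coo_matrix((data, (rows, cols)),
                             shape=(len(kmer_ids), len(seqs))).tocsr()

k = 4
S = ["GGCTAAAG"]
T = ["CCTAAAGCCTA", "TTTTTTTT"]
vocab = {s[i:i + k] for s in S + T for i in range(len(s) - k + 1)}
kmer_ids = {km: i for i, km in enumerate(sorted(vocab))}

A = kmer_seq_matrix(S, kmer_ids, k)   # k-mer x |S|
B = kmer_seq_matrix(T, kmer_ids, k)   # k-mer x |T|
C = A.T @ B                           # |S| x |T|: nonzeros mark candidate pairs
print(C.toarray())                    # [[3 0]]: S0 shares k-mers only with T0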
All of these distributed memory alignment algorithms involve irregular many-to-many communication, done either asynchronously as one-sided remote look-ups or in batches. The asynchronous approach has more messages, each of which is small, so communication software overhead and latency can limit performance. It has the advantage of overlapping computation and communication, which makes good use of both networking and computing resources. The bulk-synchronous approach leads to better message aggregation between pairs of processors, but it can suffer from high load imbalance costs due to the implied barriers at each exchange, and separating communication from computation prevents overlap and is more likely to hit bisection bandwidth limits in the network. The pairwise alignments that follow communication can be done either one pair at a time or in batches, with the bulk-synchronous version likely having larger batches available.

K-mer-based matching is not the only method used to index large genomic data sets. In particular, suffix trees and their more practical sibling, suffix arrays, provide an alternative way of indexing large data sets. Rather than hashing, these methods use sorting and search on a compact representation of the suffixes, and they then build a hierarchical index representation of the data. Suffix arrays are significantly more flexible than direct k-mer based approaches because they effectively index all possible k-mer lengths at once. However, they are harder to implement and often come with increased computational costs. Recent work on distributed suffix array construction [28] as well as querying [29] has shown scaling to eight nodes, with the potential to make these data structures more popular in HPC approaches.

Other applications arise in comparing which genomes or metagenomes align to each other. In this scenario, one is often interested in some sort of "distance" metric between pairs of genomes or metagenomes, as opposed to merely identifying the candidate pairs that might align. The output is often dense, because almost all pairs of (meta)genomes will contain conserved regions that provide a match using shared k-mers. Using an approach similar to BELLA, Besta et al. use parallel sparse matrix computations to compute the Jaccard similarity between all pairs of genomes. They also utilize the aforementioned SpGEMM primitive, with the difference that the software is optimized for the case where the output genomes×genomes matrix is dense, because it holds the Jaccard similarities.

The bioinformatics community has been developing alternative space-efficient data structures to compute (meta)genome-to-(meta)genome distances for the scenario where a distributed-memory computer is unavailable. MASH [31], perhaps the most popular of such tools, uses the MinHash sketch technique [32] to summarize each (meta)genome and computes the Jaccard similarity on those sketches, as opposed to finding explicit shared k-mers. Recently, Baker and Langmead [33] took the sketching approach one step further and used the HyperLogLog (HLL) algorithm for further compression. While we are not aware of any distributed-memory approaches to sketch-based genomic distance calculations, the HLL data structure itself is trivially mergeable, and HLL has been utilized in distributed genome assembly for efficient k-mer counting in the past [5]. We therefore expect forthcoming developments in distributed-memory sketch-based genome comparison.
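To illustrate the sketching idea, here is a toy MinHash Jaccard estimator in Python. It is not MASH's implementation: MASH uses a fast non-cryptographic hash and a bottom-k sketch, whereas this sketch keeps one minimum per salted hash function, and the sketch size below is an arbitrary example.

import hashlib

def kmer_set(seq, k):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def minhash_sketch(kmers, num_hashes=64):
    """For each of num_hashes salted hash functions, keep the minimum value."""
    sketch = []
    for salt in range(num_hashes):
        sketch.append(min(
            int.from_bytes(
                hashlib.blake2b(km.encode(), digest_size=8,
                                salt=salt.to_bytes(2, "little")).digest(),
                "little")
            for km in kmers))
    return sketch

def jaccard_estimate(s1, s2):
    """The fraction of agreeing minima estimates |A intersect B| / |A union B|."""
    return sum(a == b for a, b in zip(s1, s2)) / len(s1)

a = kmer_set("CCTAAAGCCTA", 4)
b = kmer_set("CCTAAAGGCTA", 4)
print(jaccard_estimate(minhash_sketch(a), minhash_sketch(b)))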
Graph Traversal for Genome Assembly

Genome assembly involves the analysis of reads from sequencers to produce longer contiguous sequences of the genome with errors corrected. For short reads, with their low error rate (<0.1%), the MetaHipMer software performs k-mer analysis and eliminates low-frequency k-mers, which are presumably errors. Along with each k-mer in the final hash table, it stores left and right high-quality extensions, i.e., the characters that frequently appeared to the left and right of the k-mer in the original input. This table is then viewed as a de Bruijn graph [34] in which one k-mer vertex is connected to another if their k-mers overlap in k-1 contiguous positions. The left and right extensions stored with each k-mer make it straightforward to find neighboring vertices. A depth-first traversal starting from arbitrary k-mers computes the connected components of the graph, which are linear sequences called contigs. For metagenomes, the same basic method is used but with increasing values of k, with contigs from the earlier steps added as reads to the later ones. This iterative process helps to improve coverage of low-depth, highly fragmented genomes in the earlier phases and to resolve repeated regions and obtain longer contigs in the later phases. Once the contigs are formed, the assembler builds a graph with contig vertices and uses alignment to find reads that align to multiple contigs and thus form an edge in the contig graph. There are several other graph traversals performed on both the k-mer and contig graphs, which are omitted here. We focus on parallelization of contig construction on the k-mer hash table. A more detailed description of contig generation and other graph traversals during assembly is available in the HipMer and MetaHipMer papers [5]-[8], [35].

MetaHipMer takes advantage of the memory and computing performance of distributed memory supercomputers to support large-scale assemblies. The hash tables involved in our algorithms can be up to tens of terabytes and do not fit in a typical shared memory node, and contig generation is written in UPC [36,37] so that hash table buckets can be directly accessed by any processor using one-sided communication. During construction, we aggregate multiple insert operations intended for the same remote processor to amortize communication overhead. This is done dynamically and asynchronously: once a particular buffer for a remote node is full, it is sent using one-sided memory operations with atomics to the memory of a remote processor. Hash table inserts and lookups are done in two separate phases, so the delayed inserts from aggregation are not semantically visible: all of the inserts are complete at the end of the phase, and the order is not important.

During graph traversal the hash table remains fixed, although multiple traversals happen in parallel from different starting vertices, and individual k-mer vertices are marked as visited to avoid duplicate traversals. This is done with fine-grained remote atomics rather than locking to minimize the number of communication round trips, although this stage is latency-limited, since each processor performs a single-threaded traversal of the graph and needs to wait for a remote vertex before continuing. In later stages of assembly, the hash table of contigs is truly read-only, and each contig may be used multiple times by a single processor, so caching remote contigs is efficient and preserves correctness. Caching is not performed during contig generation because there is limited reuse.
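A minimal shared-memory sketch of the contig walk described above, with a hypothetical table layout mapping each k-mer to its (left, right) extensions, where 'X' means no confident extension; MetaHipMer's UPC implementation distributes this table and marks visits with remote atomics.

def walk_contig(table, start, visited):
    """Extend a contig left and right from a seed k-mer using stored extensions."""
    contig = start
    visited.add(start)
    # Walk right: append the right extension, shift the k-mer window by one.
    km = start
    while (ext := table.get(km, ("X", "X"))[1]) != "X":
        km = km[1:] + ext
        if km in visited:
            break          # already claimed by another traversal
        visited.add(km)
        contig += ext
    # Walk left symmetrically.
    km = start
    while (ext := table.get(km, ("X", "X"))[0]) != "X":
        km = ext + km[:-1]
        if km in visited:
            break
        visited.add(km)
        contig = ext + contig
    return contig

table = {"CTA": ("X", "A"), "TAA": ("C", "A"), "AAA": ("T", "X")}
print(walk_contig(table, "TAA", set()))  # CTAAA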
Sparse Matrix Operations for Protein Clustering

Proteins of the same evolutionary origin are said to be homologous. Homologous proteins often perform similar functions; hence homology finding facilitates protein annotation and the discovery of novel protein families. One often infers homology from excess sequence similarity, with "excess" referring to higher similarity than would be encountered by chance. Even then, a simple pairwise similarity metric is just a proxy for homology and can lead to both false positives and false negatives, depending on the parameters used in the sequence similarity calculations. A clustering step that takes the similarity matrix as input and exploits topology information (i.e., the transitivity of neighboring proteins) finds more robust and accurate protein families. This helps eliminate a significant portion of spurious homology connections and recovers many missing links while computing a globally consistent view of the clusters.

A typical pipeline for protein clustering therefore involves first finding highly similar sequences using many-to-many alignments among proteins, using one of the popular tools such as MMseqs2 [38], DIAMOND [39], or LAST [40]. K-mer based indexing, similar in spirit to the approaches described in Section 3.3, is often used to reduce the number of comparisons. The ExaBiome project is currently working on a novel many-to-many protein similarity search tool, tentatively called Protein Sequence Aligner (PISA), that is scalable to exascale architectures. The resulting similarity matrix/graph is then fed into a clustering algorithm that discovers the ultimate protein families. Since this two-step process is often very expensive, single-step clustering algorithms [41] have gained popularity among those who do not have access to high-end computing equipment, despite often producing fragmented clusters. We do not focus on those methods here, because one of the goals of the ExaBiome project is to improve accuracy by utilizing exascale computers.

The Markov Cluster (MCL) algorithm [42] is arguably the canonical graph-based algorithm for clustering protein similarity matrices. The MCL algorithm treats the similarity matrix as the adjacency matrix of a graph whose vertices are proteins and whose edges are similarities. The graph is sparse because only those similarities that are above a certain threshold are retained. MCL performs simultaneous random walks from every vertex (protein) in the graph. It exploits the fact that most of these walks will be trapped within tightly connected clusters, hence driving up the probability mass accumulated within each cluster. To avoid densifying the intermediate matrices and making the computation infeasible, MCL applies various pruning strategies that have been shown not to hurt the quality of the final clusters [43].
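A dense toy version of the MCL iteration in Python/NumPy, showing expansion (the matrix product that HipMCL computes as a distributed SpGEMM), inflation, pruning, and renormalization; the inflation exponent and pruning threshold below are illustrative choices, not MCL's defaults.

import numpy as np

def mcl(adj, inflation=2.0, prune=1e-4, iters=50):
    """Toy Markov clustering on a dense adjacency matrix."""
    M = adj + np.eye(len(adj))          # self-loops stabilize the walks
    M = M / M.sum(axis=0)               # column-stochastic random-walk matrix
    for _ in range(iters):
        M = M @ M                       # expansion: two-step walks (SpGEMM)
        M = M ** inflation              # inflation: boost intra-cluster mass
        M[M < prune] = 0.0              # pruning keeps intermediates sparse
        M = M / M.sum(axis=0)           # renormalize columns
    # Attractors (nonzero rows) define the clusters.
    return [np.nonzero(row)[0].tolist() for row in M if row.sum() > 0]

# Two triangles joined by a single weak edge should split into two clusters.
A = np.array([[0,1,1,0,0,0],
              [1,0,1,0,0,0],
              [1,1,0,1,0,0],
              [0,0,1,0,1,1],
              [0,0,0,1,0,1],
              [0,0,0,1,1,0]], dtype=float)
print(mcl(A))  # e.g., [[0, 1, 2], [3, 4, 5]]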
The simultaneous random walks map directly to the SpGEMM primitive introduced earlier, which computes the product of two sparse matrices. The high-performance distributed re-implementation of the Markov Cluster algorithm, known as HipMCL [44], utilizes some of the most general and scalable sparse matrix algorithms implemented within the Combinatorial BLAS [45]. These include a 2D SpGEMM algorithm known as Sparse SUMMA [27], several shared memory SpGEMM algorithms [46] optimized for different iterations of HipMCL, a fast memory estimator based on sparse matrix-dense matrix multiplication for memory-efficient SpGEMM [47], as well as a very fast distributed memory connected components algorithm [48] used for extracting the final clusters from the result of the HipMCL iterations. The integration of GPU support into HipMCL, as well as other performance improvements for pre-exascale architectures, has recently been published [49]. Using faster communication-avoiding SpGEMM algorithms [50] for HipMCL is ongoing work.

Machine Learning for Genomics and Proteomics

A comprehensive coverage of machine learning (ML) applications in genomics and proteomics is both too large and too fast-growing to address here. Instead, we touch on the computational building blocks of the machine learning algorithms commonly applied to genomic and proteomic data. A large class of machine learning methods is built on top of basic linear algebraic subroutines found in the modern dense BLAS [51], Sparse BLAS [52], or the GraphBLAS [53]. This relationship is illustrated in Figure 3.

Machine learning has been applied to metagenome assembly in various contexts. For example, MetaVelvet-SL [54] uses Support Vector Machines (SVMs) to identify potentially chimeric nodes in a metagenomic de Bruijn graph. A chimeric node is shared by the genomes of two closely related species and needs to be split into multiple nodes for an accurate assembly. A popular application of ML to proteomics data is the discovery of ancestral relationships (i.e., homology) between proteins. Kernel-based methods, such as SVMs, have traditionally been applied to this problem [55]. Other fundamental problems in this domain are protein folding [56], especially the prediction of the 3D structure of a protein [57], and protein function prediction. The function of a protein can be predicted using the sequence, the 3D structure of the protein, or both [58].

Recently, there has been a growing set of deep learning (DL) approaches to these problems. The particular DL machinery is input- and problem-specific, but it includes the transformers used in language modeling [59], convolutional neural networks (CNNs), and graph neural networks (GNNs). These different DL machines have different computational footprints: transformers rely heavily on relatively large dense matrix computations, CNNs can be trained using a series of small matrix multiplications [60], and GNNs are bottlenecked by large sparse matrix-dense matrix multiplications [61].
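For instance, the sparse-times-dense product that dominates GNN training can be sketched in a few lines with SciPy; this toy single-node example stands in for the distributed SpMM kernels cited above, and the graph and feature sizes are arbitrary.

import numpy as np
from scipy import sparse

# Toy graph: adjacency of a 4-vertex path, with self-loops added.
A = sparse.csr_matrix(np.array([[1,1,0,0],
                                [1,1,1,0],
                                [0,1,1,1],
                                [0,0,1,1]], dtype=np.float64))
H = np.random.default_rng(0).standard_normal((4, 8))   # dense feature matrix

# One round of neighborhood aggregation: each vertex averages the
# features of its neighbors (row-normalized adjacency times features).
deg_inv = sparse.diags(1.0 / np.asarray(A.sum(axis=1)).ravel())
H_next = deg_inv @ A @ H    # SpMM: the bandwidth-bound kernel in GNN training
print(H_next.shape)         # (4, 8)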
MOTIFS OF GENOMIC DATA ANALYSIS

Several computational patterns arise in the ExaBiome application and are common to other genomics applications and to data analysis problems more broadly. These are displayed in Figure 2 and include:

1) Hash tables: These are used throughout the genome assembly applications, MetaHipMer and diBELLA, to store k-mers for the purposes of counting (histogramming) and for quickly finding pairs of sequences with a common substring.
2) Sorting: While used less frequently than hashing in our examples, it is another technique for counting k-mers, and it is used in suffix arrays and to prioritize graph operations, e.g., finding the longest contig as a starting point for a graph traversal.
3) Graph traversals: Used to connect k-mers into contigs and in other analyses on the contig graph to resolve ambiguities and increase assembly length.
4) Alignment: The problem of finding the minimum edits required to make two strings match is used on raw read data, assembled genomes, genes, and proteins.
5) Generalized n-body: The problem of comparing or aligning all sequences in one set to another set (or the same set), using a method such as limiting to pairs with a common k-mer to avoid all O(n^2) comparisons.
6) Sparse matrices: Sparse matrix products are used within the generalized n-body problem to find pairs, for protein clustering, and more.
7) Dense matrices: There is ongoing work by ourselves and others to apply machine learning methods to genomic data, which often make use of dense matrix multiplication as described in Section 3.6. Further, pairwise alignment can in principle also be computed using dense matrix computations on semirings.

In addition to these seven motifs of genomic data analysis, local computations such as parsing reads into k-mers and other basic string operations, arithmetic operations, logical operations, and more occur in all of our applications. When these can be performed independently on separate data, they can be invaluable in obtaining high performance parallel implementations, but if the operations are each performed serially they are not instrumental in understanding parallelization. In comparing to other lists of motifs, we include these as "Basic Operations" in Table 1, although they tend to be linear-time operations on the input that are almost trivial to parallelize, and thus less useful as a parallelism motif.

While our selection of problems informing our genomics motifs is naturally biased, we note that independent HPC researchers have been focusing on similar problems. For example, Darwin [62] is a co-processor specifically designed to perform fast all-to-all long read overlapping and alignment in the context of assembly. The body of work from Aluru's group at Georgia Tech similarly encompasses k-mer analysis, alignment, assembly, and clustering and uses some of the same patterns, albeit pushing in the direction of bulk-synchronous computation [63]. SARVAVID [64] provides a Domain-Specific Language (DSL) with language constructs for k-mer extraction, index generation and look-up, clustering, all-to-all similarity computation, graph construction and traversal for genome assembly, and filtering of error-prone reads. Mahadik et al. identify this list of "kernels" as common to a broad variety of genomics applications. We remark that these kernels can be mapped to our own list of motifs.
There are other proposals for parallelism motifs that cover many applications in scientific simulation, data analysis, and beyond. The original set of "Seven Dwarfs" due to Phil Colella [65] was meant to capture the most important computational patterns in scientific simulations and is shown in the first column of Table 1. The Berkeley View report [66] on multicore parallelism, in the second column, generalized these patterns to capture a broader set of applications, including some data analysis problems. A report by the National Academies [67] then defined a set of "Seven Giants" of Big Data, shown in the third column, which combined sparse and dense matrices into a single motif. Ogres [68] is another, multidimensional, classification of both HPC and Big Data applications, based on 51 well-studied NIST applications. Our own genomics motifs in the last column are quite similar to those in the "Seven Giants" set, but in our view the ideas of hashing and sorting are so essential to understanding data analysis for genomic data, and for other large-scale database analyses involving joins, that they deserve to be separate categories. They are also standard in other large-scale database operations. On the other hand, optimization and integration are very general techniques that can lead to a variety of parallelism patterns depending on the data and method being used; e.g., they may be dominated by dense or sparse matrix operations, as well as other independent computations. Each list takes a somewhat different approach to characterizing independent operations, which in our view is such a general notion that it does not belong as an algorithmic motif. Colella's Monte Carlo class is a more specific class of problems that does lead to a style of parallelism, albeit one dominated by independent calculations.

HARDWARE AND SOFTWARE SUPPORT FOR PARALLEL GENOME ANALYSIS

Although some analysis problems can be done independently or with traditional bulk-synchronous parallelism, we argue that the irregular and asynchronous nature of some of these problems [7,69,70] places different requirements on the programming systems, libraries, and network than most simulation problems. In addition, communication optimizations have a somewhat different character than in more structured and regular computations.

Roughly speaking, there are four programming styles for distributed memory communication:

• Bulk-synchronous collectives, such as broadcast, reductions, and all-to-all exchanges. For example, MPI has a rich set of collective operations [71].
• Two-sided point-to-point communication, i.e., send and receive, which need not be synchronous but requires matching calls on both the sending and the receiving side [72].
• One-sided communication, i.e., remote put and get operations that involve only the initiating process.
• Remote procedure calls (RPC), which ship a function invocation to a remote process, possibly without waiting for the result.
The majority of simulation codes are written in some combination of the first two styles, while data analytics problems written in a map-reduce framework use collectives. But for analytics problems involving hash tables with random-in-time and random-in-location access, we argue that the latter two styles are a better fit. Sparse matrix computations such as iterative methods can be programmed elegantly using bulk-synchronous parallelism, as can sorting and generalized all-to-all problems, although the data exchanges are often irregular and unbalanced, with the volume of data between processors varying considerably. Communication imbalance can affect sparse matrix multiplication when the distribution of nonzeros is nonuniform, e.g., when a k-mer appears in many of the input sequences, or parallel sorting when the distribution of values being sorted is nonuniform, e.g., when a single value appears with very high frequency. These imbalance factors may encourage designs that avoid global communication and synchronization in favor of overlapped point-to-point or one-sided communication.

While numerical libraries form the basis of many computational simulations, we see distributed data abstractions for hash tables, Bloom filters, histograms, and various types of queues for rebalancing data and computational load as keys to our analysis problems. For example, the Berkeley Container Library [73] provides such data structures, and CombBLAS provides the distributed memory sparse matrix primitives designed for graph algorithms [45]. These libraries can capture some of the more important communication optimizations, which are familiar ideas but see somewhat different usage here:

• Asynchronous communication avoids both global and pairwise synchronization, allowing each thread to progress without waiting to resolve load imbalance from communication or computation that may vary over time.
• Non-blocking communication provides overlap for both computation and other communication events, and is especially important for fine-grained communication to avoid paying full latency costs for each message. In a one-sided model this means non-blocking put and get operations, or fire-and-forget in an RPC model.
• Communication aggregation is a standard technique in bulk-synchronous applications, but in asynchronous ones it involves dynamic buffering of data destined for a single core or node, shipping a buffer when it is full or based on some other trigger (see the sketch after this list). In practice the management of the message buffers creates a critical trade-off between memory footprint and number of messages, and the uncertainty of communication volume and destination makes this particularly challenging.
• Improving spatial locality is not always possible for irregular data, e.g., hash table construction on unknown data, but when insight into the data is available, a carefully constructed hash function can provide significant benefit by reducing the percentage of remote accesses [25].
• Caching remote data is useful when there is sufficient temporal locality, e.g., in looking up contigs during alignment of reads to contigs during assembly.
• Iteration space tiling, used in communication-avoiding algorithms for dense matrix multiplication [74,75] and n-body calculations [76,77], provides provable advantages in reducing communication volume and number of messages at the cost of additional memory. These methods do not simply partition the result matrix or the particles/sequences over processors, but instead replicate them to the extent allowed by available memory. For sparse matrices and sparse interactions the benefits depend more on the sparsity patterns [27,46,47,78], but they are useful in clustering [44] and possibly alignment.
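A minimal sketch of such per-destination aggregation buffers; the send callback is a hypothetical stand-in for the underlying communication primitive, e.g., a non-blocking one-sided put in a real implementation.

class Aggregator:
    """Buffer small items per destination rank; flush when a buffer fills."""

    def __init__(self, nranks, capacity, send):
        self.buffers = [[] for _ in range(nranks)]
        self.capacity = capacity
        self.send = send    # send(dest, items): the communication primitive

    def push(self, dest, item):
        buf = self.buffers[dest]
        buf.append(item)
        if len(buf) >= self.capacity:      # trigger: buffer full
            self.send(dest, buf)
            self.buffers[dest] = []

    def flush(self):
        # Drain all partially filled buffers, e.g., at the end of a phase.
        for dest, buf in enumerate(self.buffers):
            if buf:
                self.send(dest, buf)
                self.buffers[dest] = []

agg = Aggregator(nranks=4, capacity=3,
                 send=lambda dest, items: print(f"-> rank {dest}: {items}"))
for km in ["CCTA", "CTAA", "TAAA", "AAAG", "AAGC"]:
    agg.push(hash(km) % 4, km)
agg.flush()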
From an architectural perspective, these highly irregular applications stress message injection rate, communication latency, and in some cases bisection bandwidth [7,69]. They may never saturate link bandwidth if a multi-core node cannot inject small messages into the network fast enough. While message aggregation is used in our implementations to maximize bandwidth utilization, it tends to put significant pressure on the memory per node due to the nearly random pattern of remote processors with which a single node communicates. Communication overlap can also be critical in these applications, including overlapping multiple communication events with each other. Perhaps the most obvious difference between genome analysis and simulation is that floating point numbers are essentially nonexistent in the lower-level analyses and arise only in machine learning applications such as clustering and deep learning.

SUMMARY

This paper provides an overview of some of the computational patterns that arise in genomics data analysis, using examples from the ExaBiome project. These represent problems like genome assembly and protein clustering that until recently were done only on shared memory machines. They can now be performed orders of magnitude faster and on data sets that were previously intractable, revealing new species and species families. We see a growing number of multi-terabyte data sets, but we also recognize that many biologists feel constrained in their experimental design by the daunting task of computational analysis. As the demand for better performance and larger data sets continues to grow, a distributed memory approach will be increasingly important.

Our goal in writing this paper is to summarize the work in high performance data analysis for genomics, to help experts outside biology understand the stress placed on parallel hardware and software systems by these applications. These patterns are captured in a set of motifs, closely related to the previous "Seven Giants" of data analysis, but with the critical additions of hashing and sorting. We believe this list and the overview of application examples and parallelization techniques will help in designing benchmark suites, ensuring that they capture some of the most important characteristics of this application space. The described methods can drive requirements analysis for hardware and software, representing problems with fine-grained, asynchronous, non-blocking, one-sided communication, irregular memory accesses, and narrow data types for both integers and characters. Our experience also makes the case for reusable software libraries that go beyond algorithms to data structures that are distributed across processors but can be updated by a single process with limited synchronization.

Fig. 1. A spectrum of regularity with different patterns of communication and synchronization. Data analysis is often at the two extremes.
Fig. 3. Dependencies of various machine learning methods upon linear algebraic primitives. Orange boxes are unsupervised methods, whereas green boxes are supervised methods. NMF stands for nonnegative matrix factorization, PCA for principal component analysis, and MCL for the Markov cluster algorithm. CONCORD stands for CONvex CORrelation selection methoD. CX refers to a low-rank matrix factorization in which a subset of the columns (C) is one of the factors.

TABLE 1. A comparison of motifs for parallel computing, including our own set for genomic data analysis. *Basic operations include string parsing, string identity, and 2-bit encoding of DNA sequences.
Palaeoecological data as a tool to predict possible future vegetation changes in the boreal forest zone of European Russia: a case study from the Central Forest Biosphere Reserve

New multi-proxy records (pollen, testate amoebae, and charcoal) were applied to reconstruct the vegetation dynamics in the boreal forest area of the southern part of the Valdai Hills (the Central Forest Biosphere Reserve) during the Holocene. The reconstructions of the mean annual temperature and precipitation, the climate moisture index (CMI), peatland surface moisture, and fire activity have shown that climate change has a significant impact on the boreal forests of European Russia. Temperature growth and decreased moistening during the warmest phases of the Holocene Thermal Maximum in 7.0-6.2 ka BP and 6.0-5.5 ka BP, and in the relatively warm phase in 3.4-2.5 ka BP, led to structural changes in plant communities, specifically an increase in the abundance of broadleaf tree species in forest stands and the suppression of Picea. The frequency of forest fires was higher in those periods, and it resulted in the replacement of spruce forests by secondary stands with Betula and Pinus. Despite the significant changes in climatic parameters projected for the 21st century even under the optimistic RCP2.6 scenario, the time lag between climate changes and vegetation responses makes any catastrophic vegetation disturbances (due to natural causes) in the area in the 21st century unlikely.

Introduction

Modern climate changes, manifested as an increase in air temperature, changes in precipitation patterns, and more frequent anomalous weather events such as heat waves and windstorms, have a significant impact on the growth and functioning of boreal forests in Northern Eurasia [1]. To describe future climate variability, it is very important to know how different forest communities will respond to future changes in climate conditions [2,3,4,5]. Palaeoenvironmental reconstruction is one of the most effective approaches to describing possible scenarios of landscape state and vegetation change in the 21st century [6,7]. Within the last decades, climate and landscape reconstructions for the Holocene Thermal Maximum (8. BP) and the optimum of the last interglacial (Eemian, MIS 5e, about 125 ka BP) have been applied as palaeo-analogues of the vegetation response to increases in global temperature of 0.7-1.0°C and 1.7-2.0°C, respectively [7]. Loutre and Berger [8] have suggested that the conditions of orbital climate forcing during the Holsteinian Interglacial (MIS 11, 420-370 ka BP) are the most similar to those of the current interglacial period. Therefore, vegetation dynamics during the Holsteinian Interglacial can be used as an analogue to predict vegetation changes at the end of the 21st century. Multiple Holocene climate reconstructions for Eastern and Northern Europe exhibit rapid temperature increases in the Late Glacial and Early Holocene (an analogue of present-day global warming) and a Holocene Thermal Maximum when mean annual temperatures were 2°C higher than those of the present day [6,9,10,11]. Roughly 5.7-5.5 ka BP, the Holocene Thermal Maximum was followed by gradual climatic cooling that included several warming and cooling phases with temperature fluctuations of 2-3°C.
The key mechanisms of forest-climate interaction in previous epochs remain unclear, and additional research using aggregated multi-proxy data analysis is necessary. The present study focuses on reconstructing vegetation dynamics in the southern part of the Valdai Hills at various time intervals during the Holocene. We make use of new multi-proxy palaeoecological data (pollen, testate amoebae, charcoal, etc.) to interpret past vegetation dynamics. The temporal moistening conditions in the area were derived using the climate moisture index (CMI, [12]). To predict possible regional vegetation changes in the boreal forest zone of central European Russia under 21st-century climate change, several scenarios of future vegetation change are suggested.

Available projections of climate change for the 21st century are provided by an ensemble of global climate models such as CMIP5 (IPCC 2013, Coupled Model Intercomparison Project, Phase 5 [1]) for the scenarios RCP2.6 and RCP8.5 (Representative Concentration Pathways for a possible range of radiative forcing of +2.6 W/m2 and +8.5 W/m2). These projections show that the mean air temperature in the central part of European Russia may increase by the end of the current century by 2.0-2.5 and 6.0-7.0°C, respectively. This temperature increase will be accompanied by an increase in annual precipitation that may range from 7% (RCP2.6) to 15% (RCP8.5). All of these changes may result in significant alterations of the forest cover.

The key region studied in this investigation was the Central Forest State Natural Biosphere Reserve (CFSNBR), situated in the southern part of the Valdai Hills. We opted to focus on the southern boundary of the boreal forest zone because, given the modern trend of increasing global air temperature, forest communities there can be very sensitive to environmental changes. A number of very detailed palaeoecological reconstructions of the Holocene climate and vegetation history of the area have been conducted and proved very useful for this study [6,13,14,15]. For projecting possible vegetation changes in the 21st century, we used the optimistic RCP2.6 climate scenario. The RCP8.5 climatic scenario has no analogues in the Holocene and consequently was not used in the present study.

Materials and Methods

The CFSNBR is situated roughly 360 km northwest of Moscow (the Tver region, 56º35' N, 32º55' E) in an ecological zone transitioning from taiga to broadleaf forests. The vegetation of the CFSNBR consists of primary southern taiga forests, undisturbed by any human activities for at least 86 years. The climate of the study area is temperate and moderately continental, with a mean annual temperature of 4.1°C and annual precipitation of roughly 700 mm [4]. The plant cover includes mixed uneven-aged spruce (Picea abies), birch (Betula pendula) and aspen (Populus tremula) stands with a small admixture of broadleaf trees (Tilia cordata, Ulmus laevis, Fraxinus excelsior, Acer platanoides). Alder (Alnus glutinosa) is abundant in the river valleys.

The Holocene vegetation and climate reconstructions are based on palaeoecological data from the large peat bog Staroselsky moch (617 ha), located in the southeast part of the CFSNBR. The results of the pollen and testate amoebae analyses were published by Novenko et al. [14] and Payne et al. [15]. The estimation of micro-charcoal concentration in a peat core from the small forest peatland (<0.
ha) located 4 km west of the Staroselsky moch peat bog was used for reconstructing fire frequencies. The available palaeoecological data allowed us to reconstruct environmental changes over the last 9000 years, with the exception of the micro-charcoal data, which cover the last 7000 years. The last millennium was excluded from the analyses of vegetation-climate interactions due to increasing human impacts and vegetation disturbance.

The mean annual temperature and precipitation during the Holocene (figure 1) were reconstructed from pollen data using the Modern Analogue Technique (MAT). Details of the MAT have been presented in a previous publication [13]. Using the MAT, we found a statistically significant analogue in the modern pollen datasets (720 sites in Europe and Siberia) for each fossil pollen assemblage. The climatic characteristics (temperature, precipitation) of the area where the analogue spectrum was obtained are accepted as reconstructions of past climate and woodland coverage. Climatic information was derived from BRIDGE Earth System Modelling results [16].

(Figure 1: mean annual temperature and precipitation reconstructions [14]; climatic moisture index and water table depth inferred from pollen and testate amoebae data from the Staroselsky moch peat bog; characteristic pollen taxa from the Staroselsky moch pollen records; micro-charcoal accumulation rate and fire episodes inferred from charcoal data from the small forest peatland in the area of the CFSNBR. The sum of broadleaf trees includes pollen of Quercus, Ulmus, Tilia, Fraxinus, Acer and Carpinus.)

Modern analogue calculations were produced using Polygon 2.2.4. A test of the accuracy of the applied method against a database of surface pollen assemblages reveals that the MAT reproduces the mean annual temperature rather well (R^2 = 0.81; RMSEP = 1.5°C). The accuracy of the reconstruction of annual precipitation is lower (R^2 = 0.41; RMSEP = 100 mm), so changes in humidity can be determined only as general trends.

To describe the temporal patterns of moistening conditions within the study area during the Holocene, we computed the CMI [12] as the ratio of annual precipitation to annual potential evapotranspiration. The annual potential evapotranspiration was calculated using the Priestley-Taylor equation [17]. This numerical algorithm uses the reconstructed annual air temperature, the forest cover, and the pollen proportions of coniferous and deciduous tree species as input parameters.

Peatland surface moisture, a robust proxy for climate humidity, was reconstructed as depth to the water table (WTD, cm) using a testate amoeba-based transfer function (a reversed weighted averaging regression model) developed specifically for the forest zone of European Russia [18]. The transfer function included 80 samples from 18 peatlands located in the taiga, mixed and broadleaf forest, and forest-steppe zones of European Russia. Leave-one-out cross-validation of the models constructed for the training dataset revealed a rather high accuracy (R^2 = 0.74; RMSEP = 5.5 cm). Water-table depths inferred from testate amoebae generally reflect the length and severity of the summer moisture deficit, which in raised bogs is primarily controlled by summer precipitation [19].

Micro-charcoal concentration data were transformed into charcoal accumulation rates (CHAR, particles cm^-2 yr^-1) using the CharAnalysis software [20], which separates the long-term trend (i.e., background CHAR) from positive deviations (i.e., charcoal peaks) in the CHAR time series.
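As an illustration of this decomposition, here is a minimal sketch in Python. It is not the CharAnalysis algorithm itself, which offers several background smoothers and peak-threshold models; this toy uses a moving-median background and a fixed, illustrative threshold.

import numpy as np

def char_peaks(char, window=5, threshold=0.5):
    """Split a CHAR series into background and peak components.

    char: charcoal accumulation rates (particles cm^-2 yr^-1), evenly spaced.
    Returns (background, peak_indices).
    """
    char = np.asarray(char, dtype=float)
    half = window // 2
    padded = np.pad(char, half, mode="edge")
    # Long-term trend: moving median (robust to the peaks themselves).
    background = np.array([np.median(padded[i:i + window])
                           for i in range(len(char))])
    residual = char - background
    peaks = np.nonzero(residual > threshold)[0]   # candidate fire episodes
    return background, peaks

char = [0.1, 0.2, 0.1, 1.5, 0.2, 0.1, 0.3, 2.0, 0.2, 0.1]
bg, peaks = char_peaks(char)
print(peaks)   # [3 7]: indices of candidate fire episodes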
Charcoal peaks represent fire episodes (i.e., one or more fires occurring during the time span of the peak).

Results and discussion

The climate conditions of the study area between 9.0 and 8.0 ka BP were relatively cold (figure 1). The mean annual temperature was 2°C lower than at present at the beginning of this period and increased to modern values roughly 8.0 ka BP. The mean annual precipitation increased from 600 to 700 mm, and a consistent decrease in peatland surface wetness indicated an increase in climate humidity. The CMI decreased from 0.36 (more humid) to 0.26, which corresponds to less humid, drier conditions. During this period the study area was occupied mainly by birch forests, which had persisted in the region since the Early Holocene [13,21]. Plant macrofossils from the lower part of the peat core suggest an abundance of Betula pubescens [14]. Climate warming after 8.0 ka BP encouraged the expansion of Picea, broadleaf trees (Quercus, Tilia, Ulmus) and Corylus over the study area. The moistening conditions were characterized by high variability, with the CMI ranging between 0.17 (low humidity) and 0.30 (moderate humidity). A high water table in the peatland, inferred from testate amoebae data, indicated rather humid conditions during the summer periods between 8.0 and 7.0 ka BP (figure 1).

From 7.0 to 5.5 ka BP, the mean annual temperature was roughly 5-7°C, exceeding modern-day values by 1-3°C (figure 1). These reconstructions agree well with the climatic conditions determined from pollen data for the Holocene Thermal Maximum in the Baltic region and Fennoscandia [10,11], where mean annual temperatures were 2°C higher than at present. Pollen-inferred temperature reconstructions from Estonia have shown that mean annual temperatures reached 8-9°C, exceeding modern-day values by 3.0-3.5°C [22]. The annual precipitation in the CFSNBR was roughly 600 mm during the period 7.0-6.5 ka BP. Between 6.5 and 5.5 ka BP the annual precipitation approached or exceeded modern-day values. The reconstruction of peatland surface moisture reflected two distinct phases with drier climate conditions (7.0-6.2 ka BP and 6.0-5.5 ka BP). The CMI in these periods decreased to its lowest value of 0.24 (for a ratio of annual potential evaporation to precipitation of roughly 0.76), lower than the mean value estimated for the entire Holocene in the area (CMI = 0.285). The WTD in the peat bog during the second phase was extremely low (32 cm), likely due to reduced summer precipitation and higher summer temperatures.

The proportion of Picea pollen decreased significantly between 7.0 and 5.5 ka BP, and Picea almost disappeared from the assemblages during the warm phase of 7.0-6.2 ka BP. At the same time, Pinus, Betula and broadleaf tree species increased in abundance. The pollen productivity of wind-pollinated plants such as Betula, Alnus and Pinus is much higher than that of Acer, Ulmus and Tilia; therefore, the proportions of broadleaf trees in pollen assemblages are often underestimated [23]. In the period 6.0-5.7 ka BP, the sum of broadleaf tree pollen reached 40% of the total pollen sum, indicating a broad expansion of temperate deciduous forests. The warm and relatively dry climatic conditions of the Holocene Thermal Maximum were unfavorable for Picea growth. According to modern ecological studies, Picea abies prefers moist soils with high seasonal water storage [24] and is highly susceptible to drought [25].
The availability of water is therefore an important factor for the growth of this tree species today. Climate reconstructions of the study area based on the pollen data from the Staroselsky moch peatland revealed climate cooling and an increase in humidity after 5.3 ka BP. These reconstructions coincide with a number of palaeoclimatic reconstructions based on various natural archives in Europe that demonstrate gradual climatic cooling since 5.7 ka BP [6,9,10,11,26], possibly caused by a decrease in summer insolation. Roughly 4.5 ka BP, the mean annual temperature in the study area declined to 3°C, and the annual precipitation was close to modern values. At 4.0 ka BP the annual precipitation increased to 800 mm per year. The CMI in this period varied significantly and in wetter years reached 0.35. Higher precipitation and lower temperatures promoted sufficient soil moisture, which resulted in an increasing fraction of Picea in the forest stands of the study area. The share of Picea pollen in assemblages from the peat core at Staroselsky moch increased to 30-40% (figure 1). Broadleaf trees remained relatively abundant in the plant cover until 4.0 ka BP, after which their fraction gradually decreased. A rise in Picea pollen values was traced in pollen diagrams from European Russia and Finland [6,10], suggesting an expansion of boreal coniferous and mixed coniferous-broadleaf forests.

The next warm and drier phase, between 3.4 and 2.5 ka BP, was detected in the climate reconstructions from pollen and testate amoebae data from the Staroselsky moch peat bog. The mean annual temperature exceeded modern-day values by 1-2°C, the WTD in the peatland dropped to 20-25 cm, and the CMI decreased to 0.23. The proportion of Picea fell, and the fractions of Betula and broadleaf trees increased. One can expect that Picea forests were significantly damaged by fires during this phase. Charcoal data from the area of the CFSNBR have shown that Holocene fire activity was low until the last millennium. The conditions of the study area (a flat topography with impeded soil drainage, and an abundance of broadleaf tree species that were replaced by Picea under a relatively wet climate) evidently hampered burning and kept fire activity low [27]. Only two distinct fire episodes were detected during the warm and dry phase of 3.4-2.5 ka BP, and one fire episode occurred roughly 6.6 ka BP, corresponding to the oldest dry phase of the Holocene Thermal Maximum. Archaeological findings and palaeobotanical indicators are inconclusive regarding human occupation of the study area during these periods. Consequently, one can assume that the forest fires were most likely caused by summer drought.

Climate reconstructions for the study area demonstrate relatively humid climate conditions after 2.5 ka BP, with the CMI in some years reaching 0.33. Our data agree well with European palaeoecological records that highlight climate cooling and an increase in humidity roughly 2.5 ka BP [11,26]. After 2.5 ka BP, spruce forests recovered in the study area. The Picea pollen curve forms a conspicuous maximum between 2.5 and 1.6 ka BP (figure 1). The increase in Picea pollen values after 2.5 ka BP was also recorded at a number of sites in Finland, European Russia, Belarus and Estonia [10,21,22,28], suggesting a regional expansion of spruce forests. The last 1000 years have been marked by dramatic changes in vegetation.
The proportion of spruce and broadleaf trees in the forest stands decreased abruptly, coinciding with the expansion of birch forests. The micro-charcoal concentration increased by an order of magnitude, and the frequency of fires increased. Pollen records from the area of the CFSNBR reveal a significant reduction in total forest coverage [13] and the presence of cultivated cereals, weeds and ruderal plants [14]. These findings apparently indicate human impacts on the plant cover.

Projection of future vegetation changes and conclusions

The Holocene climatic reconstructions have shown a very high sensitivity of vegetation to climate changes. To predict possible forest vegetation changes during the 21st century, the RCP2.6 scenario was used. As palaeo-analogues for the vegetation projections, we can use the warm and dry phases observed during the Holocene Thermal Maximum (7.0-6.2 ka BP and 6.0-5.5 ka BP), as well as the climate warming between 3.4 and 2.5 ka BP. The palaeoecological data obtained for these periods have shown that climate warming and a decrease in the CMI led to structural changes in plant communities: an increase in the abundance of broadleaf tree species (Quercus, Tilia, Ulmus, Fraxinus, Acer) in forest stands and the suppression of Picea. The frequency of forest fires during these periods was higher, and it resulted in the replacement of spruce forests by secondary stands containing Betula and Pinus. According to our palaeoenvironmental reconstructions, the vegetation changes caused by climate warming took 500-1000 years, whereas the currently observed temperature growth has spanned only a few decades. One can therefore expect a lag between climate change and the vegetation response, and catastrophic vegetation disturbances are unlikely to occur in the 21st century. Although the palaeoecological data do not fully correspond to the short-term changes expected in the current century, they can be used to assess general trends in vegetation dynamics.
Tau PET following acute TBI: Off-target binding to blood products, tauopathy, or both?

Repeated mild traumatic brain injury (TBI) is a risk factor for chronic traumatic encephalopathy (CTE), characterized pathologically by neurofibrillary tau deposition in the depths of brain sulci and surrounding blood vessels. The mechanism by which TBI leads to CTE remains unknown but has been posited to relate to axonal shear injury leading to release, and possibly deposition, of tau at the time of injury. As part of an IRB-approved study designed to learn how processes occurring acutely after TBI may predict later proteinopathy and neurodegeneration, we performed tau PET using 18F-MK6240 and MRI within 14 days of complicated mild TBI in three subjects. PET radiotracer accumulation was apparent in regions of traumatic hemorrhage in all subjects, with prominent intraparenchymal PET signal in one young subject with a history of repeated sports-related concussions. These results are consistent with off-target tracer binding to blood products as well as possible on-target binding to chronically and/or acutely deposited neurofibrillary tau. Both explanations are highly relevant to applying tau PET to understanding TBI and CTE. Additional study is needed to assess the potential utility of tau PET in understanding how processes occurring acutely after TBI, such as release and deposition of tau and blood from damaged axons and blood vessels, may relate to the development of CTE years later.

Introduction

Chronic traumatic encephalopathy (CTE), a neurodegenerative disorder associated with progressive neuropsychiatric decline in individuals who have suffered repeated mild traumatic brain injury (TBI), is currently diagnosable only at autopsy, based on the presence of neurofibrillary tau tangles in the depths of brain sulci and surrounding blood vessels (McKee et al., 2014). Positron emission tomography (PET) now allows assessment of tauopathy in vivo. A small number of tau PET studies have revealed tau deposition months to years following TBI (repeated mild and single moderate-severe) and in subjects with clinically probable CTE (Stern et al., 2019; Ayubcha et al., 2021; Marklund et al., 2021), though differences between patients and controls are subtle, confounded by non-specific binding of early-generation tracers, and not considered diagnostic at the individual level. The mechanism by which repeated TBI leads to CTE remains unknown but has been posited to relate to release of tau at the time of injury through axonal shearing. Tau levels in blood, CSF and interstitial fluid rise rapidly following TBI and can provide clinically relevant information about TBI severity and prognosis (Zetterberg, 2017). We designed a study to learn whether tau PET performed shortly after TBI can improve understanding of how processes occurring acutely after TBI may predict later neurodegeneration. We report early results from this study, focusing on the first subject enrolled, whose tau PET scan was markedly abnormal with prominent radiotracer uptake in regions of hemorrhage.

Participants and setting

As part of a study approved by the Weill Cornell Medicine IRB, adult subjects with complicated mild TBI [GCS 13-15 with a trauma-related CT abnormality such as intracranial hemorrhage or contusion (Marshall et al., 1991)] or moderate TBI were recruited from the Weill Cornell-New York Presbyterian Emergency Department to undergo PET and MRI neuroimaging within 14 days of injury. All subjects provided informed consent prior to participation.
Neuroimage acquisition

PET scans were acquired from 0 to 60 and 90 to 120 min on a Siemens Biograph PET/CT after rapid IV bolus injection of ~555 MBq of 18F-MK6240.

Neuroimage processing and analysis

After reconstruction and decay correction, PET images were motion corrected and coregistered with T1 MRI using validated methods (Tahmi et al., 2019). SUVr images for the summed 90-120 min acquisition were generated with reference to cerebellar gray matter, defined by FreeSurfer and eroded by 2 voxels to avoid partial volume effects.

Results

TBI subject characteristics are shown in Table 1. All subjects suffered complicated mild TBI with brief (<5 min) loss or alteration of consciousness and normal Glasgow Coma Scale (GCS) scores of 15 by the time of Emergency Department evaluation. As shown in Figure 1, all subjects' 18F-MK6240 PET scans were abnormal. Subject #1, with a history of repeated (10+) sports-related concussions, had two frontal areas of intense 18F-MK6240 uptake in the regions of MR-visible, T1-hyperintense hemorrhagic cerebral contusion and subdural hemorrhage. In this subject only, radiotracer accumulation appeared to extend into brain parenchyma. Subject #2 had probable extra-axial tracer accumulation along a parafalcine subdural hematoma, adjacent to (normal, off-target) tracer accumulation in the skull. Subject #3 showed a small amount of extra-axial tracer accumulation in the region of epidural hematomas.

Discussion

18F-MK6240 tau PET performed shortly after complicated mild TBI in three subjects revealed focal radiotracer accumulation in regions of MR-visible hemorrhagic injury. This has not previously been reported. Prominent increased PET signal in one subject with a history of multiple prior concussions suggests the possibility that these results correspond to acute and/or chronic tauopathy. This would be relevant to understanding the pathogenesis of CTE, which to date has proven difficult to image using PET and which remains impossible to diagnose in vivo. However, alternative explanations for 18F-MK6240 accumulation in regions of traumatic hemorrhage, such as tracer extravasation through a damaged blood-brain barrier (BBB) and off-target radiotracer binding, must be considered.

Tracer extravasation

18F-MK6240 is highly BBB permeable [a general requirement for brain PET radiotracers (Pike, 2009)] and does not require BBB breakdown to enter brain parenchyma. Arterial spin labeling (ASL) MRI showed decreased blood flow to regions of radiotracer accumulation (data not shown), which would be expected to decrease tracer delivery. Other PET studies of acute TBI show decreased radiotracer accumulation in regions of hemorrhagic injury (Langfitt et al., 1986). We therefore do not think the focal radiotracer accumulation can be attributed to tracer leakage or extravasation.

Binding to blood products

While 18F-MK6240 is considered specific for neurofibrillary tau, autoradiography has shown weak binding to blood products (Aguero et al., 2019; Malarte et al., 2021). Consistent with this, all three subjects showed radiotracer accumulation outside the brain, in regions of extra-axial hemorrhage. This cannot represent parenchymal tau deposition, and it confirms that off-target binding to blood products must be considered when interpreting 18F-MK6240 PET. Additional studies, e.g., autoradiography and immunohistochemistry in human tissue obtained at surgery or postmortem and in animal models, are needed to clarify the tissue target of 18F-MK6240 in the setting of hemorrhage.
Tauopathy In Subject #1, PET signal extends into brain parenchyma and well beyond regions of MRI-visible hemorrhage, and we question whether tracer binding to blood products can account fully for these findings. We propose that tau PET signal in this subject with a history of multiple prior concussions, scanned shortly after complicated mild TBI, corresponds, at least in part, to on-target binding of 18F-MK6240 to acutely and/or chronically deposited neurofibrillary tau. While tau tangles are not generally considered to form quickly after TBI, this has been demonstrated in teenaged athletes who died days after suffering concussions, with tau localized to regions of microhemorrhage (McKee et al., 2014; Tagge et al., 2018). This finding in concussion, along with the perivascular location of chronically-deposited tau in CTE (McKee et al., 2014), supports the theory that traumatic microvascular injury and release of blood components into brain parenchyma is important to the pathophysiology of tauopathy after TBI (Michalicova et al., 2017; Sandsmark et al., 2019). Much additional neuroimaging and other work is needed to determine if our PET finding of tau radiotracer accumulation surrounding hemorrhagic contusion also supports this theory (vs. corresponding solely to off-target binding) and to understand relevant factors, e.g., whether this finding was detectable in one subject because of a predisposing history of multiple prior concussions, injury features, genetic and/or other factors, and whether PET abnormalities resolve over time. The absence of similar tau PET findings in subjects scanned later after injury (Ayubcha et al., 2021; Marklund et al., 2021) suggests this apparent lesion may be transient; follow-up scans are planned for 1 year after injury. Conclusion 18F-MK6240 tau PET imaging performed shortly after complicated mild TBI in three subjects revealed areas of focal tracer accumulation in regions of traumatic hemorrhage consistent with 1) off-target radiotracer binding to blood products and/or 2) post-traumatic tau deposition. Both explanations are relevant to advancing understanding of tauopathy in TBI and CTE. It is notable that evidence of possible parenchymal tauopathy was present only in one subject with a history of multiple prior concussions. Additional study is needed to clarify the tissue target of 18F-MK6240 PET in the setting of TBI complicated by intracranial bleeding, and to assess its potential utility in understanding how processes occurring acutely after TBI, such as release and deposition of tau and blood from damaged axons and vessels, may relate to future neurodegeneration.

Figure 1 legend (excerpt). (B) Subject #2: MRI shows parafalcine trace subdural hematoma, diffuse atrophy, and a developmental arachnoid cyst. PET signal in the region of subdural hematoma (white circle) may be present, but is difficult to differentiate from (normal, off-target) signal in adjacent dura. (C) Subject #3: MRI shows R anterior and L lateral temporal epidural hematomas. PET shows extra-axial radiotracer accumulation in these regions (L temporal = small white circle; R temporal = larger white circle).
The Role Of Pasture Agrophytocenoses In The Optimization Of The Ecological Situation In the desert regions of our country there are large farms specializing in astrakhan fur (karakul) production. It is known that karakul farming is one of the most important branches of animal husbandry in the republic. In karakul farming, the natural cover of deserts and hills serves as the main source of feed. Of the 23.1 million hectares of pastures and hayfields in Uzbekistan, 17.5 million hectares are used as desert pastures. Of these, 37.1% are still in crisis, and harmful and poisonous plants occupy an area of more than 0.5 million hectares. Under the influence of these weeds, which are not eaten by livestock, the productivity of pastures is sharply reduced. INTRODUCTION One of the main causes of this pasture crisis is the misuse of pastures and the spread of poisonous and harmful plants. There are about 30 million hectares of pastures in the country, all with different climate and soil cover, so it is necessary to study the characteristics of vegetation, soil and climate in each area. The study of the biological properties of a plant (growth, development, reproduction, self-renewal and, most importantly, suitability for feeding) is the main task in the construction of artificial pastures. To this end, on degraded areas of pastures it is advisable to select heat-resistant plants and carry out phytomeliorative measures, for which it is necessary to conduct experiments with their participation, taking into account the ecological and biological characteristics of promising desert pasture plants. For each desert pasture, the most suitable species of forage plants had to be determined. The creation of artificial pastures from species suited to a given area is important for increasing pasture productivity, allowing intensive use of pastures and providing livestock with vitamin-rich feed throughout the year. This is one of the key issues of today, and it is also important for the prevention of desertification. The best way to improve pastures that are in crisis in the desert and hills, i.e. those covered with poisonous and harmful plants, is the creation of artificial agrophytocenoses from a mixture of shrubs, semi-shrubs and grasses, which also ensures biodiversity and guarantees the sustainability of the environment. All sandy massifs of the republic serve as the main fodder base for livestock. Phytomelioration is an important factor in preventing desertification. Since the Kyzylkum desert is a large and promising karakul region of the republic, scientific research in the field of phytomelioration of these pastures is of particular importance. Although the first studies of pasture productivity in the Kyzylkum were started in the 1930s under the leadership of E.P. Korovin, their development continues. Among the current tasks are: • increasing the productivity of pastures, working on the basis of mutual agreements between karakul farms and the state, with control over their activities; • establishing regular financing of measures to improve pasture productivity at the expense of centralized funds and local budgets, and controlling the use of created agrophytocenoses. The next important task is the correct selection of the optimal composition of pasture agrophytocenoses for a specific ecological type of environment: the species and the alternative ratios of plant life forms in desert areas.
When creating new agrophytocenoses, the following recommendations should be considered. • Testing and selection of different ratios of forage desert species belonging to different life forms in arid regions. • Determination of the characteristics of effective moisture use, based on the study of the formation of the root system. • Identification of species with a high content of nutrients, based on the study of feed yield indicators. Also, the development of the agrotechnical foundations for the creation of optimally structured pasture agrophytocenoses under specific environmental conditions is one of the urgent problems of today and a key condition for increasing biodiversity and developing karakul farming. It can be seen that promising shrub, semi-shrub and transitional life forms in the created agrophytocenoses show higher and more intensive growth and development than annual crops. CONCLUSION In conclusion, we note the enormous importance of creating artificial agrophytocenoses for stabilizing the ecological state of desert pastures and increasing their productivity: they can raise pasture productivity by 2-3 times, ensure biodiversity and prevent desertification. The life span of agrophytocenoses used in winter is 15-20 years, and they guarantee the stability of the biocenosis through self-reproduction at later stages.
The Principles of Sustainable Development as a Form of Structural Transformation Sustainable development today covers not only certain environmental and economic aspects of society. The very structure of the economy, in particular the production sphere, as well as society as a whole, is changing radically. Moreover, structure is not only a systemological category but also a category of sustainable development. It acts as the basis of any material phenomenon or process; that is, it serves as a relatively stable foundation on which more mobile elements function. Therefore, the concept closest to revealing the essence of sustainable development, and to the problem of understanding environmental and economic transformations and their regulation, is structure. It is by treating structure as a fundamental element of the system of sustainable development that one can reveal the factors of deep changes in the economy, under the influence of which industry is transformed towards modern environmental guidelines, and with it employment, investment, innovation and the systems of their state regulation. Introduction The growing need for the transition to sustainable development in the Russian economy is due both to the crisis processes in the field of ecology in recent years and to the constraining role that structural problems play in promoting environmental values, in establishing a post-industrial way of life, and in improving the quality of life. In turn, an analysis of the principles of structural regulation of the transition to sustainable development is important for the formation of its theoretical foundations, taking into account national characteristics and the forms of their implementation. The essence of the concept of "principle" is revealed by its Latin meaning: the beginning, the foundation (principium) on which theory, economic policy, etc. can be built. As a scientific category, principles stand on a par with such components of scientific methodology as approaches, economic categories and laws. Materials and Methods Representatives of classical political economy, in particular A. Smith [1], were among the first to formulate principles of economic regulation: non-interference by the state in the functioning of the national economy, and the action of the "invisible hand" of the market. K. Marx [2] highlighted general methodological principles for the study of the categories of economics, thereby laying the foundation for the study of processes and phenomena in the development of economic relations, their structure and forms of implementation in economic practice. The first principle is the dialectical one, according to which within any economic phenomenon there is an internal contradiction that constitutes the source of its development. The structure of the national economy cannot be an exception here, since it is a clearly defined category, which implies the presence of mutually penetrating and mutually exclusive opposites within it. From this general principle, proposed by K.
Marx for the analysis of all economic phenomena, it is advisable to single out a number of particular principles applicable to the structural regulation of the economy: • the continuity of changes in the structure of the economy and, accordingly, the permanent need for its regulation for sustainable development; • the conditionality of structural changes by objective factors (such as scientific and technological progress, the development of property relations, the emergence of new forms of economic relations). Therefore, structural regulation should affect not only quantitative indicators of economic development (reproduction, macro-dynamics), but also qualitative ones, which are associated with the transformation of economic relations as a result of the accumulation of gradual quantitative changes; • the non-absolute nature of economic dynamics as a criterion for determining the directions, forms and results of the structural regulation of the economy, since its development is not a repetition of cycles but progressive movement along an ascending line. Therefore, in the development of structural regulation, it is necessary to be guided not only by economic growth, but also by the development of economic relations and institutions. The second principle is the correspondence of production relations to the nature and level of the productive forces, which constitute the two sides of each mode of production. Consequently, when highlighting the technological modes in the structure of the economy (as the dominant ways of producing a social product), it is impossible to focus only on the productive forces that form them (according to K. Marx, the means of production and the people who have certain production experience and labor skills and bring these means of production into action). It is necessary to analyze whether the level of development of economic relations corresponds to the desired mode. Detailing the principle of dialectical correspondence of production relations and productive forces in relation to sustainable development, we believe that influencing the structure in order to develop a certain technological order (in our case, the fifth and sixth) requires not only the development of the required high-tech industries, but also the improvement of economic relations. This should affect the areas of ownership, investment and credit, the distribution of the cost of the finished product and income, as well as the distribution of people in the structure of industry. The latter echoes the Keynesian idea of a change in the structure of consumption, which should correspond to structural changes in the production of gross domestic product [3]. Thus, Marx's principles of the analysis of economic phenomena and their regulation are general methodological ones; their value lies in their applicability to sustainable development. At the same time, the principles of neoliberal regulation of the economy, put forward and substantiated by proponents of modern neoliberalism, are more detailed. L. Erhard, as a vivid representative of this area of economic science, continued the development of the classical principles of economic regulation, adapting them to the realities of the transition to sustainable development. Unlike the classical school, neoliberalism is based on the idea of the priority of free competition, not contrary to, but as a consequence of, the state's impact on the economy. Neoliberals see the role of the state in the regulation of the economy in the functions of a "night watchman" or "sports judge".
The credo of neoliberalism was succinctly expressed by L. Erhard: "Competition is wherever possible; regulation is where it is necessary" [4]. Therefore, the basic principles of neoliberal impact on the economy include: -liberalization of the economy, that is, ensuring the minimum necessary intervention in it; -the need for state regulation of the transition to sustainable development, together with the use of the principles of free pricing; -the priority of private property and free markets, state interference in which is permissible only in extreme cases (war, catastrophes, cataclysms). In contrast to the neoliberal principles, the basic principles of Keynesian economic regulation include the following. The first principle is that the main (but certainly not the only) regulator of the modern market economy is not the market, but the state. Consequently, regulatory actions that are significant for the economy are formed not at the micro but at the macro level, and take into account national goals and interests. The impact itself falls not only on inter-company relations, but also on the interconnections between "state-firms" and "firms-households". The state becomes the leading subject of the national economy, endowed with its own property. Therefore, we can distinguish among the Keynesian provisions a principle of structural regulation of the economy that is significant for us, which consists in the interaction of the economy's public and private sectors. This interaction should develop in the following areas: the nationalization and privatization of property in industry and in the infrastructure and financial sectors; the creation of state-owned companies in the newest industries, concentrating the sixth technological structure; and tax incentives for environmental management innovation. The second principle is the planned and predictive nature of regulation, formed on a scientific basis and significantly reducing the impact of market forces on the development of the national economy. Therefore, in relation to the structural regulation of the economy, we emphasize the importance of the principle of indicative planning and of the forecasting impact of the state on the economy. The third principle of Keynesian economic regulation is the implementation by the state of a policy of "effective demand". The central task of the state, according to J.M. Keynes, is to ensure high volumes of "effective demand" (that part of total expenditures which is determined by the tendency of households to increase their consumption as income grows instead of increasing savings) [3]. It is important for us that the implementation of the principle of regulation of "effective demand" makes it possible to influence the structure of investments and innovations, and to change the proportions of the output of consumer goods and means of production.
The institutional approach supplements these with its own basic and derivative principles. Basic principles: the coordination of the economic interests of business entities; ensuring the unity of law enforcement, coercion and state control over the mandatory implementation of institutional regulations; the compulsory fulfillment of contractual obligations; and the organization of the economic order and the responsibility of the state for establishing the general "rules of the game" for business entities. Derivative principles: an informed choice of strategies for the development of market entities that contribute to improving the efficiency of production and distribution of goods; guarantees of institutional conditions for the rapid adaptation of firms to the market environment; a preferential attitude towards efficiently functioning business entities; and the implementation of entrepreneurial activity within the established norms and rules. The principles discussed above are mainly associated with the knowledge of economic laws and phenomena, as well as with the regulation of the entire system of transition to sustainable development as a whole [7][8]. Fundamental issues of the development of the structure of the national economy are considered in the framework of the system-structural and systemic self-organization approaches [9]. In general, their basic principles include: • the principles of consistency and isomorphism of the system of the national economy. These principles reflect a universal view of the economy as a system with all its inherent laws, as well as the conformity of the structure of subsystems to the structure of the "head" system; • the principle of dissipation, which consists in the fact that, of the totality of admissible states of the system, the one realized under a regulatory action is that which corresponds to a minimum increase in entropy (disorder) [10][11]; • the principle of the presence of an attractor (the desired state of the system and its structure, towards which the system should strive in its development); • the presence in the structure of the system of "catalysts" for its development: factors directing the economy towards the most advantageous attractors [12][13][14]. Results and Discussion Thus, we are dealing with a fairly wide range of principles that form the basis of approaches to regulating the economy in the transition to sustainable development. Therefore, the starting point for highlighting the principles of structural regulation of the transition to sustainable development is the division of the existing principles of the functioning and regulation of the economy (as applied to its structure) into the following parts: 1. The principles of analysis of the structure of the economy: the priority of content over form; consideration of environmental factors; the unity of theory and practice; the dialectical principle, which implies development through the movement and resolution of contradictions; and the inequality of economic dynamics as a criterion of structural analysis of the economy. 2.
The principles of functioning of the structure of the national economy: systemicity and isomorphism, synergy and informatization; the conditionality of structural changes by objective factors; the continuity of structural changes and the permanence of the need for their regulation; the correspondence of production relations to the nature and level of the productive forces; proportionality between resources and needs; the compliance of the institutional structure of society with the technological structure of production; and the presence in the structure of the economic system of an attractor and of catalysts for its development. 3. The principles of regulation of the structure of the economy in the transition to sustainable development: -Keynesian principles: the initiating and regulatory role of the state, the priority of the state (and not the market) in regulating the structure of the economy, and the planned and predictive nature of such regulation; -institutional principles: state responsibility for establishing the general "rules of the game"; coordination of the economic interests of business entities and the "cultivation" of the necessary institutions in the structure of the economy; -the neoliberal principle: the transfer of structural changes in the monetary system to the structure of the country's economy; -principles of the systemic self-organization approach: functionality and dissipation. The last principle contradicts the classical principles of regulation (the state's non-interference in the functioning of market entities, the "invisible hand"), the use of which in structural regulation would inevitably increase entropy in the system of the national economy. As a result, structural changes would be unpredictable and disorganized. It is these principles that should become the basis for determining the theoretical content of the structural regulation of the economy in the transition to sustainable development. It is advisable to add to them a number of principles that allow the essence of sustainable development to be revealed more fully. The first of these is multilateralism, the complexity of structural regulation, which should include: • normative regulation (by improving the legal framework and technical, financial and environmental standards, etc.); • indicative target-oriented regulation (the development and adoption of target plans and programs for the development of the green economy, the reduction of consumption of non-renewable resources, the transition to alternative energy, etc.); • market regulation (the creation of conditions for inter-industry capital flow, the development of a competitive environment, the equalization of inter-regional disparities, and the intensification of entrepreneurial activity in the innovation sector). The second principle is the phasing of structural regulation. From the perspective of a systematic understanding, the development of structural regulation of the economy (both in theory and in practice) requires a consistent solution to the following key problems. The first is the identification of various criteria for structuring the national economy and the types of structure, and the identification of the objects, subjects and subject-targeted area of the transition to sustainable development. The second is the determination of the effective directions of the structural regulation of the national economy, based on an analysis of the links between its structure, the forms of economic relations and the characteristics of the transition to sustainable development.
It is important for the Russian economy to take into account the interconnections between its market transformation and the transition to sustainable development, since market reforms impose qualitatively new requirements on the structure of the economy, and environmental problems hamper integration processes with technologically advanced countries. The third principle of structural regulation of the economy that we propose is the correspondence of its forms and tools to the scale of the problems of transition to sustainable development. In particular, the huge technological lag of Russia behind developed countries and the extremely low pace of formation of the newest (sixth) technological structure in the country's economy are due to the lack of a whole range of economic relations, institutions and effectively interacting economic entities. Therefore, their formation requires not only economic incentives, but also direct regulation of R&D and innovation. The regulation of the structure of exports and imports, as a "derivative" of the manufacturing sector, should be carried out primarily by economic methods, avoiding excessive protectionism, which generates raw-material, rent-oriented and dependent trends among domestic enterprises. Conclusion Thus, the study of approaches to determining the principles for revealing the structural problems of the national economy during the transition to sustainable development made it possible to divide them into principles of analysis, functioning and regulation of its structure (Keynesian, institutional, neoliberal, systemic self-organization). In addition to them, we proposed the principles of multilateralism and complexity and the phasing of structural regulation of the economy, as well as the principle of the conformity of its forms and tools to the scale of environmental problems. The implementation of these principles is designed to help accelerate the transition to sustainable development.
Toxin–antitoxin systems and their role in disseminating and maintaining antimicrobial resistance Abstract Toxin–antitoxin systems (TAs) are ubiquitous among bacteria and play a crucial role in the dissemination and evolution of antibiotic resistance, for example by maintaining multi-resistant plasmids and inducing persistence formation. Generally, the activities of the toxins are neutralised by their conjugate antitoxins. In contrast, antitoxins are more liable to degrade under specific conditions such as stress, and free active toxins interfere with essential cellular processes including replication, translation and cell-wall synthesis. TAs have also been shown to be responsible for plasmid maintenance, stress management, bacterial persistence and biofilm formation. We discuss here the recent findings on these multifaceted TAs (types I–VI) and in particular examine the role of TAs in augmenting the dissemination and maintenance of multi-drug resistance in bacteria. INTRODUCTION Antibiotic resistance has been highlighted as one of the most pressing concerns of the 21st century. The rapid spread of 'superbugs', including Enterobacteriaceae with NDM-1 (New Delhi metallo-beta-lactamase-1), KPC-2 (Klebsiella pneumoniae carbapenemase-2) and the most recently reported MCR-1 (mobile colistin resistance-1), has been described as a global crisis and an impending return to the pre-antibiotic era (Moellering 2010; Liu et al. 2015). To rationally combat antibiotic resistance, we require a better understanding of which factors influence the emergence and persistence of antibiotic-resistant clones. Bacterial toxin-antitoxin systems (TAs), originally linked to plasmid maintenance systems (Ogura and Hiraga 1983), exert important activities in the context of bacterial resistance and persistence formation (Harms, Maisonneuve and Gerdes 2016; Patel 2016). TAs are small modules consisting of a stable toxin and its unstable cognate antitoxin. Antitoxins are more labile than toxins and readily degraded under stress conditions; this allows the toxins to exert their detrimental effects, promoting plasmid maintenance, slow growth and dormancy, the latter being linked mainly with chromosomally encoded TAs (Page and Peti 2016). TAs are not essential for normal cell growth but are nonetheless widely present on bacterial plasmids and chromosomes. It has been hypothesised that TAs play a central role that is advantageous for cell survival in their natural habitat, such as switching into a dormant, drug-resistant state to withstand high levels of antibiotic stress (Page and Peti 2016). The toxins inhibit cell growth by targeting a variety of important cellular processes, including DNA replication, transcription and cell-wall synthesis, in a manner similar to antibiotic activities (Davies and Davies 2010; Chan, Balsa and Espinosa 2015). Because of their ubiquity and crucial intracellular targets, the study of bacterial toxins will help us understand their role in the dissemination and evolution of bacterial antibiotic resistance. In this review, we will provide a synopsis of TAs and in particular examine the role of type II TAs in augmenting the dissemination and maintenance of multidrug resistance in Gram-negative bacteria. TAs CLASSIFICATION TAs are small genetic modules found on bacterial mobile genetic scaffolds like plasmids, as well as on bacterial chromosomes. The TA loci encode two-component systems that consist of a stable toxin, whose overexpression either kills the bacterial cell or inhibits cell growth, and an unstable cognate antitoxin.
As a result, when a plasmid encoding the TA is lost from a cell, the toxin is released from the existing TA complex and kills plasmid-free cells. In essence, this is an elegant model for perpetuating plasmid maintenance in a bacterial population (Gerdes, Rasmussen and Molin 1986). This unique system is also called post-segregational killing. The first TA identified (ccdAB) was carried on an F plasmid and was shown to play an important role in plasmid maintenance by coupling host cell division to plasmid proliferation (Ogura and Hiraga 1983). Since this initial discovery, a number of different TAs have been identified that are encoded on bacterial genomes. Based on the nature of their corresponding antitoxin, TAs are currently divided into six distinct classes (Table 1). Type I TAs All type I toxins are small hydrophobic proteins of approximately 60 amino acids, and their gene expression is regulated by an antisense RNA transcribed from the toxin gene but in reverse orientation (Gerdes and Wagner 2007). Type I TAs are arranged either as overlapping, convergently transcribed gene pairs or as divergently transcribed gene pairs located apart. In the first case, the antitoxin is a cis-encoded antisense RNA (e.g. hok-sok, bsrG-SR4); in the latter case, it is a trans-encoded sRNA (e.g. tisB-IstR1, shoB-ohsC) (Brantl 2012). The first and best studied type I system is hok-sok (host killing, hok, and suppressor of killing, sok), which was first discovered on plasmid R1 from Escherichia coli (Gerdes, Rasmussen and Molin 1986; Thisted and Gerdes 1992). All the toxins of type I TAs have an identical secondary structure consisting of one α-helical element and are predicted to be localised in the inner membrane, and thus to form pores in the bacterial cell membrane, resulting in inhibition of ATP synthesis (Fozo, Hemm and Storz 2008). Consequently, replication, transcription and translation may be inhibited, leading to cellular death (Unoson and Wagner 2008). For instance, TisB produces clusters of narrow anion-selective pores in lipid bilayers that significantly disturb the cytoplasmic membrane (Wagner and Unoson 2012). Many toxins are not bactericidal, but interfere with phage propagation, modulate the cell membrane or prevent mature particle formation, and in some cases only overexpression of the toxin genes shows a toxic effect (Yamaguchi and Inouye 2011). Type II TAs Type II TAs have been the most extensively studied among all TAs, and the number of type II TAs varies greatly between different bacterial species, and even among strains of the same species. Hitherto, 12 subgroups of type II TAs have been identified based on toxin amino acid sequence homology (Leplae et al. 2011), including mazEF (Aizenman, Engelberg-Kulka and Glaser 1996), relEB (Takagi et al. 2005), yefM-yoeB (Kamada and Hanaoka 2005), ω-ε-ζ (Zielenkiewicz and Ceglowski 2005) and mqsRA (Gonzalez Barrios et al. 2005; Brown et al. 2009). In type II systems, the antitoxin is a small, unstable protein that sequesters the toxin through protein complex formation. The expression of the two genes is regulated at the level of transcription by the TA complex, which binds a palindromic sequence at the promoter region. Therefore, as the concentration of the TA complex in the cell is reduced as a result of antitoxin degradation, expression of the TA operon is de-repressed to produce more toxin and antitoxin; thus the type II system is also termed the 'addiction module' (Yarmolinsky 1995).
In most cases, the antitoxin genes are located upstream of their cognate toxin genes, so that the antitoxins appear to have an advantage in their production over their cognate toxins. Conversely, there are many exceptions to this genetic arrangement, such as higBA, in which the toxin gene higB is located upstream of its cognate antitoxin gene, higA (Yamaguchi, Park and Inouye 2009; Christensen-Dalsgaard, Jørgensen and Gerdes 2010). Type III TAs The toxIN Pa was first identified on a plasmid from Erwinia carotovora subspecies atrosepticum (Pectobacterium atrosepticum) as an example of the novel type III protein-RNA TAs (Fineran et al. 2009). The toxIN Pa locus consists of a toxin, ToxN Pa, inhibiting bacterial growth, and an RNA antitoxin, ToxI Pa, counteracting the toxicity. The arrangements of type III TAs are unusual, as a toxin gene is preceded by a short palindromic repeat, which separates the toxin from its small RNA antitoxin, composed of several repeats of short nucleotide sequences. The short palindromic repeat acts as a transcriptional terminator, regulating the relative levels of antitoxin and toxin transcript. For example, toxIN Bt, located on pAW63 from Bacillus thuringiensis, encodes a toxic protein ToxN Bt and an antitoxin ToxI Bt containing 2.9 tandem repeats of a 34-nucleotide sequence (Short, Monson and Salmond 2015; Goeders et al. 2016). Currently, type III TAs are divided into three subgroups sharing the same genetic organisation, namely toxIN, cptIN (for Coprococcus type III inhibitor-toxiN) and tenpIN (for type III ENdogenous to Photorhabdus inhibitor-toxiN) (Blower et al. 2012; Goeders et al. 2016). Though these subgroups were identified by shared identity with ToxN, their cognates diverge between and within the subtypes, for example in the number and the length of the repeats (Blower et al. 2012; Goeders et al. 2016). All type III toxins discovered so far serve as endoRNases that cleave mRNAs in adenine-rich regions, and their activities are inhibited by formation of an RNA pseudoknot-toxin complex. Type IV TAs The yeeU-cbtA module has been proposed as the prototype of the new type IV TAs, in which the protein antitoxin interferes with binding of the toxin to its target rather than inhibiting the toxin via direct TA binding (Masuda et al. 2012). Unlike most toxins, which target macromolecular biosynthesis, CbtA is the first toxin of the TAs that affects cellular morphology (Tan, Awano and Inouye 2011). CbtA binds and inhibits the polymerisation of the bacterial cytoskeletal proteins MreB and FtsZ. The antitoxin, YeeU, suppresses CbtA toxicity by stabilising the CbtA target proteins rather than by directly interacting with CbtA (Masuda et al. 2012). Specifically, YeeU directly binds to both MreB and FtsZ and enhances the bundling of their filaments in vitro. Notably, this is a unique feature of the yeeU-cbtA system, distinguishing it from all the other TAs, in that CbtA and YeeU do not form a complex. Nevertheless, YeeU is able to neutralise CbtA toxicity. Thus, yeeU-cbtA constitutes a new type of TA. Type V TAs The ghoTS locus is a new type of TA, in which GhoS (ghost cell suppressor) is the first known antitoxin to neutralise the toxicity of its toxin, the GhoT ghost cell toxin, by specifically cleaving its mRNA (Wang et al. 2012). Based on the strong structural overlap with the catalytic site of the CRISPR-associated protein Cas2 (SSO1404), Wang et al. (2012) suggested that the antitoxin, GhoS, is a sequence-specific endoRNase that cleaves the ghoT transcript and thereby prevents GhoT translation.
GhoT is a membrane-damaging protein, and its production can lyse the cell membrane and alter cell morphology. Ultimately, this causes the formation of ghost cells, a group of dead or dying cells in which the cell outline is still visible but the cytoplasmic area is transparent (Wang et al. 2012). GhoT has also been shown to contribute to biofilm formation: after the deletion of its repressor GhoS, biofilm formation and cell motility increased by approximately 6- and 2-fold, respectively (Wang et al. 2012). Type VI TAs In contrast to typical TAs, in which the toxicity of the toxin is neutralised by the antitoxin, socB is unique in being constitutively controlled by the protease ClpXP, while its cognate socA acts as a proteolytic adaptor, promoting the degradation of SocB by ClpXP. SocB, identified in Caulobacter crescentus, has been proposed to inhibit DNA replication through direct interaction with DnaN, a ring-shaped protein that forms a central component of the DNA elongation machinery (Markovski and Wickner 2013). This interaction disrupts the association of DnaN with Pol III and other DnaN-binding proteins, resulting in the collapse of DNA replication forks. It has also been shown that this DNA damage can cause the accumulation of SocB, suggesting that it may play a regulatory role in the induction of the RecA-mediated SOS response (Aakre et al. 2013; Markovski and Wickner 2013; Page and Peti 2016). Therefore, the socA-socB system may play important roles in promoting Caulobacter adaptation to varying environmental conditions by preventing DNA replication. THE CELLULAR TARGETS OF TAs In the last decade, an increasing number of cellular targets for TAs have been elucidated, and most of them are involved in essential bacterial processes such as DNA replication, RNA transcription and protein translation, as shown in Table 1 and Fig. 1. Interestingly, TAs share many cellular targets with antibiotics. For instance, the zeta toxin phosphorylates the essential nucleotide sugar UDP-N-acetylglucosamine (UNAG) and leads to the inhibition of cell-wall synthesis, much like the activity of penicillin, which inhibits the formation of peptidoglycan cross-links in the bacterial cell wall, or of glycopeptides, which bind cell-wall precursors (Kohanski, Dwyer and Collins 2010). Another example is DNA gyrase, which can introduce and relax supercoils during DNA replication yet is the target of the toxins CcdB and ParE, as well as of quinolone antibiotics, which disrupt DNA replication by binding to DNA-gyrase complexes (Kohanski, Dwyer and Collins 2010). Given this remarkable similarity in cellular targets between TAs and antibiotics, TAs may provide novel insights into the discovery and development of new antimicrobials. Targeting cell-wall synthesis: Zeta toxin The epsilon-zeta systems were originally discovered as plasmid maintenance modules on a 29-kb low-copy plasmid, pSM19035, isolated from Streptococcus pyogenes (Zielenkiewicz and Ceglowski 2005). pSM19035 stability is conferred by two regions (segA and segB), whose corresponding products, SegA and SegB, control plasmid partitioning (Ceglowski et al. 1993; Ana et al. 2000). The segB gene complex consists of four genes (δ and ω-ε-ζ), ensuring a 'better-than-random' plasmid segregation. The gene δ shares significant homology with ATPases involved in active plasmid partitioning, but the stabilisation function is dependent on the ω-ε-ζ operon. Therefore, among TAs, the organisation of the ω-ε-ζ operon is unique.
Within this three-component operon, the ε and ζ genes encode the antitoxin and toxin, respectively, and the transcription of the operon is controlled by the additional gene ω (Ana et al. 2000; Zielenkiewicz and Ceglowski 2005). It has been shown that the product of ω binds to the promoter of the entire operon as a dimer, and in the absence of ω repression the intensity of transcription from the operon promoter is increased about 40-fold (Ana et al. 2000). Plasmid-encoded epsilon-zeta TAs enhance plasmid maintenance by killing the plasmid-free daughter cells (Zielenkiewicz and Ceglowski 2005), whereas the chromosomally encoded epsilon-zeta TA family (pezAT, for pneumococcal epsilon-zeta) identified in S. pneumoniae kills bacteria through the inhibition of cell-wall synthesis. More recently, Mutschler and Meinhart (2011) showed that the toxin PezT inhibits bacterial cell-wall synthesis by phosphorylating UNAG into the toxic molecule UNAG-3-phosphate (UNAG-3P). UNAG-3P accumulates and competitively inhibits MurA, the essential enzyme for peptidoglycan synthesis (Barreteau et al. 2008); freed PezT toxin thus poisons bacteria through inhibition of cell-wall formation, causing the cells to lyse (Mutschler and Meinhart 2011). The zeta toxin systems have been identified as highly abundant modules in multi-resistance plasmids and chromosomes of various Gram-positive pathogens, including S. pyogenes (Zielenkiewicz and Ceglowski 2005) and S. pneumoniae (Khoo et al. 2007). It had long been thought that epsilon-zeta systems can only be found in Gram-positive bacteria; however, a novel zeta homolog has now been identified for the first time in the Gram-negative bacterium Escherichia coli (Rocker and Meinhart 2015). This zeta toxin, designated EzeT for E. coli zeta toxin, is located in a 3.4-kb islet and consists of two domains featuring the EzeT toxin and an epsilon-like antitoxin within a single polypeptide chain. Similar to the toxin PezT, the C-terminal domain of EzeT, containing all catalytic motifs of UNAG kinases, is capable of phosphorylating UNAG in vitro (Rocker and Meinhart 2015). In contrast to conventional type II TAs, the N-terminal domain of EzeT contains an antitoxin; thus, EzeT is an authentic zeta-like UNAG kinase and is also the first auto-inhibited TA system, since it can be inhibited by its own N-terminal cis-acting antitoxin domain (Rocker and Meinhart 2015). Targeting tRNAs: VapC and HipA PIN (N-terminus of the pilin biogenesis protein PilT) domains are small protein domains of approximately 130 amino acids in length, and serve as ribonuclease enzymes that cleave single-stranded RNA in a sequence-dependent manner (Arcus et al. 2011). The TA module vapBC (virulence-associated protein) is associated with PIN-domain proteins. The vapBC (at that time called vagCD) locus, derived from the virulence plasmid of Salmonella Dublin strain G19, was proposed to prevent plasmid loss under nutrient-limiting conditions (Pullinger and Lax 1992). VapC is the PIN-domain ribonuclease, co-expressed with the cognate inhibitor VapB, with which it forms a novel PIN-domain-inhibitor complex (Bunker et al. 2008; Arcus et al. 2011). vapBC loci are surprisingly abundant; for example, the genome of Mycobacterium tuberculosis encodes a large number of them. HipA acts in a similar manner to VapC, but has different binding sites. Rather than phosphorylating EF-Tu, free HipA inactivates GltX by phosphorylation at its ATP-binding site Ser239, and thus GltX is unable to charge tRNA with glutamate (tRNA Glu).
Consequently, this induces amino acid starvation and the activation of RelA, leading to more (p)ppGpp synthesis. Thus, high accumulated levels of (p)ppGpp trigger a stringent response that inhibits global translation processes such as protein synthesis (Kaspy et al. 2013; Germain et al. 2015). Targeting DNA gyrase: CcdB and ParE The ccd (coupled cell division) locus is adjacent to the origin of replication of the F plasmid and promotes the stable maintenance of F plasmids by coupling host cell division to plasmid proliferation (Ogura and Hiraga 1983). ccdB and parE present similar properties to those of the quinolones, but interact at a different site on DNA gyrase (Jiang et al. 2002; Dao-Thi et al. 2005). Under normal growth conditions, the antitoxin CcdA inhibits CcdB toxic activity by forming a tight (CcdA)2-(CcdB)2 complex. Once the bacterium loses its plasmid, unstable CcdA degrades, and CcdB and GyrA form a symmetric complex. After CcdB-GyrA binding, ATP is hydrolysed and the supercoiled DNA is released, resulting in blocked bacterial transcription and immediate cell death (Critchlow et al. 1997; Dao-Thi et al. 2005). More recently, an additional role has been proposed for CcdB, that of a persistence factor (Tripathi et al. 2012). When faced with antibiotic or heat stress, increased levels of the ATP-dependent protease Lon (Kuroda et al. 2001), responsible for the rapid turnover of unstable proteins, lead to degradation of the antitoxin CcdA, freeing the toxin CcdB. Free active toxin CcdB causes DNA damage through formation of a GyrA-CcdB cleavage complex, which triggers the RecA-mediated SOS response. Ultimately, multidrug-tolerant persister cells are formed. Targeting membrane potential: HokB and TisB tisB/istR-1 is the first TA locus shown to be involved in the SOS response. The locus encodes two small RNA molecules: an antisense RNA, IstR-1, that inhibits TisB toxicity, and the mRNA of the toxin, TisB, which is under the control of LexA (Vogel et al. 2004; Darfeuille et al. 2007); the toxin is localised in the inner membrane (Unoson and Wagner 2008) (Fig. 2). The induction of tisB results in membrane damage that entails a rapid decrease in DNA replication, RNA transcription and protein synthesis (Unoson and Wagner 2008). HokB is similar to TisB in that both are small proteins that exert toxicity by damaging the inner membrane. The hokB-sokB locus, derived from the chromosome of E. coli K-12, codes for three genes: sokB, mokB and hokB (Pedersen and Gerdes 1999). sokB encodes a small antisense RNA that controls the translation of the mokB reading frame. hokB translation is under the control of mokB, and thereby sokB can also suppress hokB toxicity. Recent studies have shown that HokB acts as a potential persistence factor (Verstraeten et al. 2015), and its accumulation leads to a loss of membrane electrochemical potential, ultimately resulting in persistence. Targeting ribosomes: Doc, MazF and RelE The toxin doc (death on cure) and its conjugate antidote, phd (prevent host death), are derived from the bacteriophage P1 and play a major role in plasmid stability, programmed cell death and the stress response (Lehnherr et al. 1993; Gazit and Sauer 1999). Doc has been shown to be a representative member of the Fic protein subfamily, which is ubiquitous in bacteria and involved in crucial functions (such as bacterial pathogenesis) (Garcia-Pino et al. 2008; Harms, Maisonneuve and Gerdes 2016). Fic proteins have a central conserved HXFX(D/E)N(K/G)R motif that is present in Doc structures.
Phd dimers are subject to cleavage by the ClpXP protease, an ATP-dependent protease of E. coli (Lehnherr and Yarmolinsky 1995). It has been shown that mRNA is significantly stabilised upon Doc induction, suggesting that Doc does not cleave mRNA. In fact, Doc toxicity has been proposed to act in a similar manner to hygromycin B (HygB), an aminoglycoside antibiotic (Liu et al. 2008). After degradation of Phd by the ClpXP protease, free Doc binds the 30S ribosomal subunit at a site that includes the HygB-binding site and phosphorylates the conserved threonine (Thr382) of the elongation factor EF-Tu. Subsequently, phosphorylated EF-Tu is unable to bind aminoacylated tRNAs, which leads to an accumulation of stalled ribosomes, blocking protein synthesis, and thus a dormant state is formed (Liu et al. 2008; Castro-Roa et al. 2013). The MazF and RelE proteins are also RNases, which inhibit translation by the cleavage of mRNA. Purified MazF is a sequence-specific (ACA) endoribonuclease, which only cleaves single-stranded mRNA (at VUUV' sites in some homologs) independently of the ribosomes, by a mechanism very similar to that of E. coli RelE (Zhang et al. 2003; Donegan and Cheung 2009). In the context of the stringent response, the antitoxin RelB is degraded by the ATP-dependent protease Lon, which leads to activation of RelE. Activated RelE induces a global inhibition of translation by cleavage of the mRNA at the ribosomal A-site, with consequent degradation of the transcript (Christensen and Gerdes 2003; Pedersen et al. 2003). In this way, RelE activation reinforces the stringent response, creating highly tolerant persisters. Targeting bacterial biofilm formation: MqsR Bacterial biofilms are communities in which cells aggregate on a solid surface and are further enveloped in an exopolysaccharide matrix (Mah and O'Toole 2001; Stewart and Costerton 2001). It has been shown that biofilms are closely linked to antibiotic resistance and that a biofilm can form slimy layers that surround the bacteria and act as a barrier to antimicrobial agents, decreasing the penetration of antibiotics to the bacterium's surface (Davies 2003). When cells are embedded in a biofilm, their MIC has been shown to increase from 6.25 to >400 μg/ml, depending on the antimicrobial agent (Evans and Holmes 1987). Besides the failure of antibiotic diffusion, some studies have demonstrated that biofilm-associated multidrug-resistant Pseudomonas aeruginosa cells can show slow growth, lipopolysaccharide modification and antibiotic degradation, ultimately accompanied by an increase in antibiotic resistance (de la Fuente-Núñez et al. 2013). The first TA shown to be involved in biofilm formation was mqsRA (motility quorum-sensing regulator), a typical type II TA in which the toxicity of the protein MqsR is neutralised by its conjugate antitoxin MqsA (Gonzalez Barrios et al. 2005; Brown et al. 2009; Wang and Wood 2011). Gonzalez Barrios et al. (2005) demonstrated that the toxin MqsR is significantly stimulated by biofilm formation and enhances cell motility. It has been suggested that MqsR is an RNase and prevents translation by cleaving RNAs (Brown et al. 2009). In addition, the antitoxin MqsA has been linked to the regulation of general stress responses, such as the oxidative stress response. Wang et al. confirmed that MqsA represses the stress regulator RpoS, leading to a decreased concentration of the messenger 3',5'-cyclic diguanylic acid and thus decreasing biofilm formation.
However, upon stress, for example oxidative stress, MqsA is unstable and is rapidly degraded by the Lon and ClpXP proteases, causing the accumulation of RpoS. As a result, the stringent response is triggered, and the bacterial state is switched from a highly motile (planktonic) state to a sessile (biofilm) state. BIOLOGICAL ROLE OF TAs IN ANTIMICROBIAL RESISTANCE Initially, TAs were identified on plasmids and were long considered selfish genes with little or no physiological benefit to the host cells: if a plasmid encoding a TA is absent in a daughter cell, the stable toxin is released through rapid degradation of the antitoxin and kills the plasmid-free cell, thereby increasing plasmid maintenance in host populations. Since their discovery, the role of TAs has been debated over decades. Hitherto, mounting evidence has demonstrated that TAs are far more than selfish loci and that they play key roles in promoting cell survival. In particular, in response to antibiotic stress, toxins can be activated by stress-induced proteases like ClpXP and Lon. This phenomenon results in slow cellular growth, in which state the bacterium can effectively tolerate antibiotic challenge. THE MAINTENANCE OF MULTIDRUG RESISTANCE PLASMIDS Conjugative plasmids, identified as reservoirs for resistance genes, are one of the most effective physical vehicles for developing and disseminating antibiotic resistance genes among bacteria (Carattoli 2013; Mathers, Peirano and Pitout 2015). In many cases, plasmids carry genes that are highly beneficial to the host bacteria by enabling them to persist in unfavourable environments, e.g. through protection against potentially lethal antibiotics. Therefore, plasmids serve as effective DNA shuttles for antibiotic resistance genes and are, in part, linked to the clinical failure of antibiotic treatments. However, because plasmids are extrachromosomal, mobile genetic elements, they impose a metabolic burden on the host cells and are prone to elimination in the absence of selective pressure (Zielenkiewicz and Ceglowski 2001). The stable inheritance of plasmids is achieved by plasmid-specified partitioning proteins that distribute plasmids into dividing cells, and by selective killing of the cells that fail to acquire a plasmid (Hayes 2003). As discussed above, TAs like hok-sok and ccdAB are responsible for plasmid stabilisation; thus, TAs have also been viewed as 'addiction modules' (Engelberg-Kulka et al. 2006) (a toy simulation of this post-segregational killing effect is sketched at the end of this section). Besides plasmids, TAs appear to play a stabilising role in genomic islands, for instance SXT, an integrative and conjugative element that mediates tolerance to multiple antibiotics in Vibrio cholerae (Wozniak and Waldor 2009). One novel TA pair (designated mosAT) within SXT has been identified that promotes SXT stability. Ectopic expression of mosT causes growth inhibition, and MosA can neutralise the toxic effect of overexpressed MosT. Similar to plasmid-borne toxins, when SXT is vulnerable to loss, MosT expression is activated to minimise the number of SXT-free cells. Therefore, the activity of mosAT may contribute to the maintenance of SXT in bacterial populations (Wozniak and Waldor 2009).
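To illustrate why post-segregational killing stabilises a costly plasmid, the following toy stochastic simulation in Python compares a TA-bearing plasmid with a TA-free one. The parameters (segregation-loss probability, fitness cost, population size) are invented for illustration and do not correspond to measured rates for any real plasmid.

```python
import random

def simulate(generations=60, n0=1000, loss=0.01, cost=0.05, ta_killing=True, seed=1):
    """Toy simulation of plasmid maintenance in a constant-size population.

    loss       -- probability that a daughter fails to inherit the plasmid
    cost       -- relative growth disadvantage of plasmid-bearing cells
    ta_killing -- if True, newly plasmid-free daughters are killed by the
                  liberated toxin (post-segregational killing)
    """
    rng = random.Random(seed)
    plasmid, free = n0, 0
    for _ in range(generations):
        new_plasmid, new_free = 0, 0
        for _ in range(plasmid):
            daughters = 2 if rng.random() > cost else 1   # cost slows division
            for _ in range(daughters):
                if rng.random() < loss:                   # segregation failure
                    if not ta_killing:
                        new_free += 1                     # survives plasmid-free
                    # with TA: free toxin kills this daughter
                else:
                    new_plasmid += 1
        new_free += 2 * free                              # free cells divide freely
        total = new_plasmid + new_free
        plasmid = round(n0 * new_plasmid / total)         # dilute to constant size
        free = n0 - plasmid
    return plasmid / n0

for ta in (True, False):
    print(f"TA present: {ta!s:5}  plasmid-bearing fraction after 60 generations: "
          f"{simulate(ta_killing=ta):.2f}")
```

With TA-mediated killing, plasmid-free lineages never establish, so the plasmid-bearing fraction stays at 1.0; without it, segregants arise and outgrow the burdened plasmid carriers, and the plasmid is progressively lost.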
BACTERIAL STRESS RESPONSE The SOS response is important for bacterial survival under stress conditions that can trigger disruption of the DNA replication fork and result in the accumulation of single-stranded DNA. Both the RecA and LexA proteins play important roles in the SOS response as regulators (Yamaguchi and Inouye 2011). RecA, activated by single-stranded DNA, is involved in the inactivation of the repressor LexA. Normally, LexA binds to a specific sequence in the DNA (the SOS box) and represses the expression of genes involved in DNA repair, mutagenesis and cell growth arrest. The SOS response is an important factor for persister formation in response to the fluoroquinolone antibiotic ciprofloxacin, which can cause DNA damage (Dorr, Lewis and Vulic 2009; Lewis 2010). The tisAB-istR-1 locus was the first TA locus shown to be involved in the SOS response to DNA damage (Vogel et al. 2004). This locus encodes a toxin gene, tisAB, and two small RNAs, IstR-1 and IstR-2, as shown in Fig. 2. tisAB is under LexA control and thus activated by the SOS response, but only TisB is responsible for the toxicity (Vogel et al. 2004). The transcription of istR-2 is also SOS regulated and is not involved in TisB control, whereas the antitoxin IstR-1 is expressed from a LexA-independent promoter and inhibits TisB expression by inducing RNase III-dependent cleavage of tisB mRNA (Vogel et al. 2004; Darfeuille et al. 2007). In the absence of an SOS response, istR-1 is constitutively transcribed to inactivate the toxicity of TisB by inducing RNase III-dependent cleavage of tisB mRNA (Vogel et al. 2004). When DNA damage is caused by ciprofloxacin, the RecA protein is activated, leading to LexA repressor cleavage, and the SOS response is induced. The antitoxin IstR-1 is then almost completely consumed by cleavage, while the toxin TisB gradually accumulates and rapidly binds to the cytoplasmic membrane, leading to membrane damage and decreased proton motive force (pmf) and ATP levels. This causes the rates of DNA, RNA and protein synthesis to decrease, and the uptake of drugs into the cells is blocked. As a result, growth slows down and a multidrug-tolerant persister is formed (Vogel et al. 2004; Darfeuille et al. 2007; Unoson and Wagner 2008; Dorr, Vulic and Lewis 2010) (Fig. 2). PERSISTER CELLS TAs can also contribute to bacterial persistence formation (Lewis 2010; Maisonneuve and Gerdes 2014; Page and Peti 2016). Persistence is observed when a small subpopulation of cells survives an antibiotic treatment that has efficiently killed off the rest of the population. In contrast to resistance, persistence is a form of antimicrobial tolerance that is not linked with genetic mutation or DNA acquisition, but rather with a spontaneous switch to a dormant, non-dividing state. Therefore, persisters are able to survive in the presence of antibiotics even though they are not genetically programmed to become resistant. More importantly, however, rather than causing cell death, some toxins convert cells into a dormant or semi-dormant state that is tolerant to antibiotics, and the cells then revive when environmental conditions become more conducive to growth (Hayes 2003). TAs have been shown to play a major role in persister formation in many model systems. An example of TAs mediating persister states involves the intracellular metabolites guanosine tetraphosphate and pentaphosphate [(p)ppGpp], the main regulators of the stringent response (Amato, Orman and Brynildsen 2013; Maisonneuve, Castro-Camargo and Gerdes 2013). In Escherichia coli, (p)ppGpp was discovered as an alarmone that alters cellular transcription globally by interacting directly with RNA polymerase, in response to nutrient starvation or other stresses (Dalebroux and Swanson 2012).
As a consequence, bacteria can survive even when faced with limiting nutrients, suggesting that the coupled accumulation of (p)ppGpp may induce growth arrest, drug tolerance and the formation of persisters. It has been proposed that high levels of (p)ppGpp trigger persistence by activation of the TA loci, resulting in translation inhibition and growth arrest (Korch, Henderson and Hill 2003; Maisonneuve, Castro-Camargo and Gerdes 2013; Schumacher et al. 2015; Harms, Maisonneuve and Gerdes 2016). Contrary to previous reports, there is growing evidence to suggest that EF-Tu is not the target of HipA during the inactivation of translation, but that HipA-mediated persistence depends stochastically on the (p)ppGpp-TA pathway (Germain et al. 2013; Kaspy et al. 2013; Maisonneuve, Castro-Camargo and Gerdes 2013). The most likely current molecular model explaining HipA-mediated persistence is shown in Fig. 3 (Korch, Henderson and Hill 2003; Germain et al. 2013; Kaspy et al. 2013; Maisonneuve, Castro-Camargo and Gerdes 2013; Germain et al. 2015). When faced with particular stresses, bacteria rapidly shift their transcription profile to trigger synthesis of the nucleotide alarmone (p)ppGpp, which involves the catalytic activity of SpoT and RelA, the two (p)ppGpp synthetases of E. coli (Dalebroux and Swanson 2012). The resulting increase in (p)ppGpp levels leads to the accumulation of inorganic polyphosphate (PolyP) through inhibition of exopolyphosphatase (PPX), a phosphatase enzyme that degrades PolyP. The accumulated PolyP combines with Lon protease, which preferentially cleaves the antitoxin HipB, resulting in an excess of the toxin HipA. In turn, free active toxin HipA inactivates GltX by phosphorylation of Ser239 in its ATP-binding site, with the consequence of uncharged tRNA accumulation in the cell. Consequently, the amino acid starvation triggers the activation of RelA for further (p)ppGpp synthesis. Thereby, the high level of (p)ppGpp accumulation induces a stringent response that inhibits the synthesis of DNA, RNAs, ribosomal proteins and membrane components, promoting the entry of cells into a dormant state. Conversely, a recent study showed that the activation of yefM-yoeB (Christensen et al. 2004), a well-characterised type II TA, is not dependent on the levels of inorganic PolyP and (p)ppGpp (Ramisetty et al. 2016), further suggesting that the pathways of TA-mediated persistence formation may be far more complicated than previously known. CONCLUSION In the last decade, antimicrobial resistance in Gram-negative pathogens has outpaced the introduction of novel drugs to the marketplace, leaving a widening gap that is unlikely to be bridged. The drivers and maintenance of antimicrobial resistance were hitherto thought to be the antimicrobials themselves; however, we are increasingly becoming aware that antimicrobial resistance has as much to do with genetic maintenance systems, e.g. TAs, as with the presence of the drug. Figure 3. The (p)ppGpp-hipA-mediated persister pathway. In response to particular stresses, SpoT and RelA are activated to synthesise the nucleotide alarmone (p)ppGpp. The increased (p)ppGpp levels lead to the accumulation of inorganic polyphosphate (PolyP) through inhibition of exopolyphosphatase (PPX), the cellular enzyme that degrades PolyP. The accumulated PolyP combines with Lon protease, which preferentially cleaves the antitoxin HipB, resulting in an excess of the toxin HipA.
In turn, free active toxin HipA inactivates GltX by phosphorylation of Ser239 in its ATP-binding site, with the consequence that uncharged glutamate tRNA (tRNA-Glu) accumulates in the cell. Uncharged tRNA-Glu loads at empty ribosomal sites and triggers the activation of RelA for further (p)ppGpp synthesis, promoting the entry of cells into a dormant state. Note that SpoT and RelA are bifunctional synthetase-hydrolase enzymes; if the stresses are removed, they can hydrolyse (p)ppGpp and return cells to normal growth (Dalebroux and Swanson 2012). The red box labelled with '?' indicates that the link between stringent response-associated genes (including ppGpp, Lon, PolyP) and TAs has been explored for some TAs, such as relBE, mazEF and yefM-yoeB. It has been shown that the activation of the toxins MazF and YoeB is dependent on the Lon-mediated degradation of their cognate antitoxins, but not on the accumulation of PolyP and ppGpp (Christensen et al. 2001; Ramisetty et al. 2016). TAs are remarkable systems that parasitise bacteria and hold them hostage. TAs are also extremely varied and are a testimony to the dexterity and plasticity of genetic systems to adapt and evolve. Although yet to be fully established, TAs are becoming increasingly numerous and more associated with antimicrobial resistance genes present on the same plasmid, thereby providing maintenance of antimicrobial resistance in the absence of the drug. Worryingly, the SOS induction triggered by drugs such as fluoroquinolones activates TA systems such as tisAB via LexA. The fact that fluoroquinolones are widespread and poorly degraded implies an ever-present pressure on certain TA systems to be further mobilised throughout bacterial populations. FUNDING QY is funded by a CSC scholarship and TRW is funded by HEFC. TRW and QY were also supported by MRC grant DETER-XDR-CHINA (MR/P007295/1). Conflict of interest. None declared.
2018-04-03T00:05:51.600Z
2017-03-16T00:00:00.000
{ "year": 2017, "sha1": "a0eb35f5804bc60c4a7a0d6bdd7a54fa8f318600", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/femsre/article-pdf/41/3/343/23906099/fux006.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "b3c36714d9849add84c675c8f64d148c8933b4d6", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
242483027
pes2o/s2orc
v3-fos-license
Implementation of Symmetrical and Unsymmetrical Fault in Power System Network Using Matlab : The main goal of this study is to provide a MATLAB-based simulation model for three-phase symmetrical and unsymmetrical faults. This paper shows how a transmission line model is created in MATLAB and how various fault conditions are simulated using a toolbox. Fault analysis for many types of faults has been performed, and the resulting effects can be seen in simulation outputs such as voltage, current, and power, as well as in the positive-, negative-, and zero-sequence components of the voltage and current output waveforms. B. Matlab Toolbox Elements The components of the MATLAB toolbox that are used in the analysis of power systems support future programming modifications and extensions. This is critical for investigations aimed at developing and testing novel techniques for various power system applications. It provides a way to efficiently prepare data files in commonly recognized formats for established systems, and the results produced by one application can easily be employed, either entirely or partially, by another application supported by the package. The MATLAB/SIMULINK toolbox contains the following items: 1) the MatPower toolbox, 2) the Voltage Stability toolbox, and 3) the Power System Analysis toolbox. A distributed resources library, an electric drives library, and a flexible AC transmission library are examples of application libraries. An AC current source, AC and DC voltage sources, controlled voltage and current sources, a three-phase programmable voltage source, a three-phase source, and a battery are included in the Electrical Sources group. Single-phase RLC branches and loads, linear and saturable transformers, mutual inductances, n-section lines, an MOV-type surge arrester, a circuit breaker, and an n-phase distributed-parameter line model are all included in the Components collection. The user can include more complex components, built from the main PSB building blocks, and associate a dialog box with them by utilizing SIMULINK's masking facility. This method was used to create a three-phase library, which is also available. Basic semiconductor devices are found in the Power Electronics category. Except for the Diode, every component in this group has a SIMULINK gating control input and a SIMULINK output that returns switch current and voltage. The MatPower toolbox is a collection of tools for solving power flow and optimal power flow problems. This package is designed to provide the best possible performance while keeping the code simple to understand and modify. The Power System Analysis toolbox is for the control and analysis of electric power systems. Power flow, continuation power flow, optimal power flow, small-signal stability analysis, and time-domain simulation are all included. The Voltage Stability toolbox solves voltage stability problems and provides information for power system planning, operation, and control. Control blocks, discrete control blocks, discrete measurements, and phasor libraries are among the other libraries available. a) Block Libraries: In the SIMULINK environment, the PSB is a graphical tool that allows schematics to be developed and power systems to be simulated. The SIMULINK environment is used to interact with the basic components and networks present in electrical power systems.
It includes an electrical model library containing RLC branches and loads, transformers, lines, surge arresters, electric machines, control devices, and so on. Click-and-drag techniques in SIMULINK windows can be used to create diagrams. To enter parameters, the Power System Blockset uses the same graphics and interactive dialog boxes as standard SIMULINK blocks. Simplified and detailed models of synchronous machines, asynchronous machines, a permanent magnet synchronous machine, a model of a hydraulic turbine governor, and an excitation system can be found in the Machines group. Every machine block has a SIMULINK output that returns internal variable measurements. A straightforward tool for setting initial conditions is included in the PSB graphical interface (Powergui). This enables simulation from given initial conditions or the start of simulation in steady state. A load-flow computational engine enables the initialization of three-phase circuits with synchronous and asynchronous machines, allowing the simulation to start in a steady state. SIMULINK scopes connected to measurement block outputs in the PSB library can be used to visualize simulation results. These measurement blocks serve as a link between the electrical components and the SIMULINK blocks. To convert electrical signals into SIMULINK signals, the voltage and current measurement blocks can be used at specific points in the circuit. II. FAULTS IN THE TRANSMISSION LINE As previously stated, two types of faults can occur in a three-phase transmission line of a power system: balanced faults (also known as symmetrical faults) and unbalanced faults (also known as unsymmetrical faults). However, this study focuses on the unsymmetrical fault, which typically occurs between two or three conductors in a three-phase system or between a conductor and ground at some point. On this basis, unsymmetrical faults can be classified into three basic categories: 1) Single Line to Ground Fault, 2) Double Line to Ground Fault, 3) Line to Line Fault. The single line to ground fault occurs most frequently in three-phase systems, followed by the L-L fault, 2L-G fault, and three-phase fault. These types of faults can arise during electrical storms, resulting in insulator flashover and ultimately affecting the power grid. In order to explore and analyze unsymmetrical faults in MATLAB, a network of positive, negative, and zero sequences must be developed. In this study, we examine the voltage and current of buses in the positive, negative, and zero sequences under various fault conditions. We also look at the active and reactive power, as well as the system's RMS bus current and voltage, under various fault conditions. A. Protective Relay The relay, a device that trips the circuit breakers when the input voltage and current signals correspond to the fault conditions for which the relay is designed, is one of the most critical elements of a power protection system. Broadly, relays can be classified into the following classes: 1) Directional Relays: They respond to a difference in phase angle between two relay inputs. 2) Differential Relays: They respond to the magnitude of the algebraic sum of the inputs. 3) Magnitude Relays: These relays are triggered by the magnitude of the input quantity. 4) Pilot Relays: These relays respond to signals received from a remote location. 5) Distance Relays: They respond to the ratio of two input phasor signals.
Over the years, relay technology has progressed, and the following generational classification has been established: 6) Electromechanical Relays: This is the first relay generation. They work on the electromechanical conversion principle. They are rugged and unaffected by electromagnetic interference. However, current technological improvements have rendered them obsolete in most domains. 8) Numerical Relays: The operation of these relays entails an analog-to-digital conversion of currents and voltages obtained from CTs and VTs and fed to the DSP or microprocessor. The protection algorithms are then applied to these signals, and the necessary decisions are made. The following are some of the benefits of using a numerical relay: a) Extreme adaptability. b) A wide range of capabilities. c) The ability to self-check and communicate. d) The ability to adjust. III. POWER SYSTEM NETWORK CIRCUIT MODEL FOR THREE-PHASE FAULTS The GUI is used to implement the model created in MATLAB using the Sim Power Systems® Tool. Any power system network may be simulated with ease using this powerful simulation model, and fault analysis can be carried out [33]-[43]. Step-by-Step Guide to Creating a Circuit Model 1) Start. 2) Open the model file (.mdl/.slx) and run it. 3) Define the CT and PT. 4) Determine the number of samples, phases, and sampling time (frequency). 5) Establish operating times for faults and circuit breakers. 6) Define the system voltage as well as the line lengths. 7) Allocate RAM for the three phases' current and voltage data. 8) Check the maximum and minimum current and voltage values in each phase, and if abs(min)>max, set abs(min)=max. 9) Change the matrix to an array and normalize the maximum current and voltage to 32767. 10) Divide the current and voltage data by phase into separate instructions. 11) Initialize the buffer to determine the trip duration. 12) Create a user-defined current and voltage waveform. 13) Perform the test and obtain the relay's trip status and trip time. 14) Save the plots you've created. 15) Create a report by building a new Excel server and sending data from MATLAB to Excel. 16) Save the report as a PDF after converting the file. Figure 2 depicts the relay circuit subsystem for the proposed three-phase model in the MATLAB environment. B. Simulation Outcomes The distribution system model is used in this scenario to simulate a three-phase-to-ground fault. The simulation is run for 1 second to make the waveforms more visible. The sampling frequency is assumed to be 10 kHz. The system voltage is 33 kV, and the line length is 10 kilometers, with a fault occurring at 5 kilometers. As illustrated in Figure 3, the fault starts at 0.2 seconds and ends at 0.7 seconds. These parameters were also kept constant for the other test scenarios. The current and voltage waveforms for the given specifications are shown in Figures 3 and 4. When these signals are injected into the relay, the relay trips after 47.51 milliseconds, and the status of its coil is displayed in the simulated results. The trip status of a self-reset relay returns to 0 when the fault is cleared, whereas the trip status of a manual-reset relay remains 1 until the reset button is manually pressed. To view the plots, the proposed MATLAB model can be run standalone or through the GUI. Simulating unsymmetrical faults such as the Double Line to Ground fault, Line to Line fault, and Single Line to Ground fault is part of the suggested study work.
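Since the fault analysis above rests on decomposing the phase quantities into positive-, negative-, and zero-sequence components, a small illustration of the Fortescue transform may be helpful. The paper itself works in MATLAB/Simulink; the sketch below is plain Python for portability, and the function name and example values are ours rather than the paper's.

import numpy as np

def symmetrical_components(ia, ib, ic):
    # Fortescue transform: decompose three phase phasors into
    # zero-, positive-, and negative-sequence components.
    a = np.exp(2j * np.pi / 3)                 # 120-degree rotation operator
    T = np.array([[1, 1,    1],
                  [1, a,    a**2],
                  [1, a**2, a]]) / 3.0
    i0, i1, i2 = T @ np.array([ia, ib, ic])
    return i0, i1, i2

# Example: a bolted single-line-to-ground fault drives current only in
# phase A; the three sequence currents come out equal, which is the
# classical signature behind connecting the SLG sequence networks in series.
i0, i1, i2 = symmetrical_components(3.0 + 0j, 0.0, 0.0)
print(abs(i0), abs(i1), abs(i2))               # 1.0 1.0 1.0 (per unit)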
IV. CONCLUSION The use of MATLAB software allows for the modeling and analysis of three-phase faults in order to provide transmission line parameter data. This paper proposes a simulation of a three-phase transmission line fault analysis system. Transmission line faults such as single line to ground faults, double line faults, and so on are also simulated. This system paves the way for the power system's bus system to be redesigned according to the outcomes. The proposed method has the potential to be implemented for larger power networks that are geographically separated.
2021-09-28T17:13:20.733Z
2021-07-10T00:00:00.000
{ "year": 2021, "sha1": "b0840acf0583e6475998d480a5de9053c02fc46d", "oa_license": null, "oa_url": "https://doi.org/10.22214/ijraset.2021.36329", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "e1cd45422131a1a6fb02176d1b961d689f49f23b", "s2fieldsofstudy": [ "Engineering", "Computer Science" ], "extfieldsofstudy": [] }
155423964
pes2o/s2orc
v3-fos-license
Run-Away Energetic Reactions in the Exhaust of Deposition Reactors A model is developed to simulate the processes that may cause run-away exothermic reactions in the downstream of typical deposition reactors used in semiconductor manufacturing. This process model takes into account various modes of mass and heat transport as well as chemical reactions, and provides insight into the key mechanisms that trigger the uncontrolled energetic reactions and cause the formation of potentially damaging hot spots. Using the developed model, a parametric study was conducted to analyze the effects of various system and operating conditions. In particular, the effects of the gaseous reactant concentrations and incoming temperature, the extent of accumulation of deposits, the gas flow rate, and the reaction activation energy and heat of reaction are analyzed, and the location and time of hot spot formation for each case are determined. The results are useful in developing strategies for mitigating the occurrence of the damaging energetic events. Introduction Chemical vapor deposition and atomic layer deposition processes continue to play a key role in the semiconductor industry [1]. One of the prevalent but sporadic issues associated with CVD systems is the formation and accumulation of deposits on various surfaces downstream of the tools (exhaust, pumps, and other components). These deposits come from three main sources: 1) the unreacted portion of precursors that are used in the deposition process, especially in an atomic layer deposition (ALD) process [2], 2) the by-products of the reactions in the deposition tool, and 3) the mixing and reactions when exhaust from various tools shares the same exhaust line. The accumulation of these deposits is sometimes followed by a rapid surge of uncontrolled reactions in the exhaust system. In addition to causing process interruption, these energetic reactions often damage the exhaust components and generate hazardous compounds. Despite the documented occurrence and impact of these incidents, the underlying conditions and causes are not clearly understood, primarily due to the difficulty of sampling and analyzing the culprit reactants and the limitation of any direct dynamic measurements at the time of occurrence. Another intrinsic limitation of the experimental approach at this time is the wide range of chemistries and system/exhaust configurations in different fabs. The materials causing these reactions are substances, such as organic and organometallic precursors, which contain organic ligands or carbon-metal bonds that are unstable and highly reactive [2] [3]. The growing use of these deposition techniques has expanded the number of precursor materials for device fabrication. The highly reactive nature of such precursors presents a challenge when the unreacted precursors are transported downstream of the reactor and into the exhaust system, where they form deposits on the exhaust piping, react with other chemicals, and generate potentially hazardous secondary by-products [4]. At least 70 run-away reaction events involving these energetic materials have been reported [5] [6] [7]. The majority of these events are associated with the reactions of deposited materials in the frontline or downstream of atomic layer deposition (ALD) and chemical vapor deposition (CVD) process tools.
While substantial improvement in mitigating downstream accumulation on exhaust lines and internal pump components has been made using in-line devices such as hot and cold traps and gas reaction columns, the downtime necessary to change expended components causes detrimental process interruption. Additionally, point-of-use abatement can become costly when used with every processing tool. In order to mitigate the detrimental effects of downstream deposition and prevent the disastrous results of thermal run-away, it is necessary to gain a fundamental understanding of the chemical processes taking place and further determine how these processes respond to changes in processing conditions [8]. Methods of thermodynamic assessment have been demonstrated as useful tools in estimating the ranges for the desirable and undesirable reaction products [9]. Further kinetic and mechanistic studies of the deposition process are needed to gain insight into the dynamics of the run-away reactions. The long-term goal of this study is to explore the fundamentals of these energetic events through comprehensive process simulation, mapping a wide range of chemistries and process conditions, understanding the mechanism of the harmful energetic reactions, and identifying the key factors in controlling the process. In this respect, the process model is valuable for the analysis of data, the determination of key operational parameters, and, most importantly, the development of strategies and methods for preventing the run-away reactions. Method Approach The generalized pathway and mechanism of the energetic reactions are illustrated in Figure 1. The unreacted precursors and the reactive by-products exiting the process tool (depicted as generic A and B) go through a number of transformations in the exhaust system before being effectively removed or treated by various abatement methods. A portion of these compounds is deposited on the surfaces of the exhaust lines by either physisorption or more energetic chemisorption. The combined overall deposition process is shown as B → B_s. In general, the deposit continues to react with other components in the gas phase in a heterogeneous reaction that produces other by-products and releases the heat of reaction (ΔH). This is shown as A + b·B_s → Prod + Heat (ΔH). The heat of reaction is partly transferred to the gas phase and transported downstream primarily by convection. It is speculated that in most cases, due to the relatively slow gas-solid interphase heat transfer, most of this heat remains in the solid phase, causing a gradual increase in the temperature of the deposit. A schematic of the exhaust piping is shown in Figure 2. The distribution of the deposited material is generally not uniform and goes through a peak at some location downstream of the tool in the exhaust line. The shape of this distribution depends on the reactor and its operating conditions. In this study, an adjustable profile equation, Equation (1), was included in the model to represent a variety of conditions, where C_b0 is the initial concentration of the deposited material along the model geometry, C_b0max is the peak initial concentration, z is the distance along the pipe, L is the length of the pipe, and m is an adjustment parameter for changing the shape of the C_b0 concentration profile. To analyze the dynamics of the transport and reactions that take place in the pipe, conservation equations for mass and energy are formulated.
These balances are shown in Equations (2)-(5). In these equations, subscript "A" stands for the gaseous reactant and "B" for the deposited reactant. The subscripts "g" and "s" generally refer to the gas and solid phases, respectively. The mass balance equation for the gaseous reactant, Equation (2), includes convection, dispersion, reaction, and accumulation terms. The inlet and outlet boundary conditions are applied at z = 0 and z = L, respectively. D_e is the dispersion coefficient, u_0 is the gas velocity, and d is the diameter of the pipeline. The heat balance in the gas phase (Equation (3)) includes convection, dispersion, heat transfer between the two phases, and accumulation. ρ_g is the density of the gas phase and λ_e is the thermal conductivity. The heat balance in the solid phase (Equation (5)) includes accumulation, heat generation by the reaction, heat transfer from the solid phase to the gas phase, and heat transfer to the surroundings through the pipe wall. h_0 and h_a are the heat transfer coefficients, and T_a is the surrounding temperature. The rate coefficient, k, follows an Arrhenius dependence on temperature, characterized by a pre-exponential factor and an activation energy. Concentration boundary and initial conditions are specified for the gas and solid phases, and temperature boundary and initial conditions are specified similarly for both phases.
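Because the display forms of Equations (1)-(5) are referenced rather than reproduced here, the following minimal Python sketch should be read as a generic stand-in, not as the paper's exact model: it integrates a one-dimensional convection-dispersion-reaction balance for the gas reactant, a depletion equation for the deposit, and a lumped solid-phase heat balance with an Arrhenius rate. The parameter values, the peaked initial deposit profile standing in for Equation (1), and the 4/d surface-to-volume factor are all illustrative assumptions.

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative placeholder parameters (not the paper's values)
L_pipe, N = 2.0, 100                      # pipe length [m], grid points
z = np.linspace(0.0, L_pipe, N); dz = z[1] - z[0]
u0, De, d = 0.5, 1e-3, 0.05               # gas velocity [m/s], dispersion [m^2/s], diameter [m]
k0, Ea, R = 1.0e4, 8.0e4, 8.314           # Arrhenius pre-factor, activation energy [J/mol]
dH = -3.0e5                               # heat of reaction [J/mol], exothermic
Cin, Tin, Ta = 5.0, 450.0, 300.0          # inlet conc. [mol/m^3], inlet / ambient temp. [K]
h_eff, cap = 20.0, 5.0e4                  # lumped loss coeff. [W/m^2/K], solid heat capacity per area [J/m^2/K]

# Peaked initial deposit profile, one plausible shape of the kind Eq. (1) describes
m, Cb0max = 2.0, 20.0
B0 = Cb0max * (4.0 * (z / L_pipe) * (1.0 - z / L_pipe)) ** m   # [mol/m^2]

def rhs(t, y):
    Ca, B, Ts = y[:N], y[N:2*N], y[2*N:]
    k = k0 * np.exp(-Ea / (R * Ts))                 # Arrhenius rate coefficient
    r = k * Ca * B                                  # surface reaction rate [mol/m^2/s]
    Ca_up = np.concatenate(([Cin], Ca[:-1]))        # inlet ghost node at z = 0
    Ca_dn = np.concatenate((Ca[1:], [Ca[-1]]))      # zero-gradient outlet at z = L
    conv = -u0 * (Ca - Ca_up) / dz                  # upwind convection
    disp = De * (Ca_dn - 2.0 * Ca + Ca_up) / dz**2  # central dispersion
    dCa = conv + disp - (4.0 / d) * r               # 4/d: pipe surface-to-volume ratio
    dB = -r                                         # deposit depletion
    dTs = (-dH * r - h_eff * (Ts - Ta)) / cap       # reaction heating vs. heat losses
    return np.concatenate((dCa, dB, dTs))

y0 = np.concatenate((np.zeros(N), B0, np.full(N, Tin)))
sol = solve_ivp(rhs, (0.0, 600.0), y0, method="BDF", max_step=1.0)
Ts_end = sol.y[2*N:, -1]
print(f"hot spot: {Ts_end.max():.0f} K at z = {z[Ts_end.argmax()]:.2f} m")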
Parametric Study The process model was used in a parametric study to understand the effects of various system and operating conditions on the mechanism and dynamics of the energetic events. In particular, six parameters were studied: the concentration of the gaseous reactant, the concentration of the deposit, the velocity of the carrier gas, the inlet temperature, the reaction enthalpy, and the reaction activation energy. Only one parameter is varied over a selected range at a time to isolate and determine its effect on the overall process. The base values and ranges of variation of these parameters in this parametric study are shown in Table 1. The effect of C_a0 on the process is shown in Figure 4. The resulting effects on the size and the location of the peak temperatures are similar to those of C_b0. The effect of flow rate on the peak temperatures is shown in Figure 5. The results show that the temperatures in both the solid and the gas phases decrease with the increase in the flow rate. The deposit removal rate does not increase significantly by increasing the flow rate. This is because the increase in flow rate causes an increase in the heat transfer coefficients and a more efficient heat dissipation [2] [10]; this lowers the heat accumulation and reaction rate. The hot spot for the gas phase moves towards the pipe outlet as flow rate increases, indicating the dominance of convection in the accumulation of heat in the gas phase. In general, flow rate has a large effect on the location of the hot spots; it can be effectively utilized to lower the size of the hot spot and move its location away from the sensitive parts of the exhaust system. The effect of the incoming gas temperature is shown in Figure 6 and Figure 7. The results show that the hot spot location shifts towards the inlet section of the exhaust pipe as the incoming gas temperature increases. This is expected due to a higher reaction rate close to the inlet of the exhaust and a lower heat accumulation towards the outlet. Both of these trends are potentially damaging to the process tool and the sensitive equipment and should be avoided. The effects of the heat of reaction (enthalpy change) and the activation energy are shown in Figure 8 and Figure 9. While the hot spot size increases mildly with the heat of reaction, the peak size is more sensitive to the reaction rate than to the enthalpy change. This is due to the dominant effect of the process dynamics and kinetic properties as opposed to that of equilibrium and thermodynamic properties. This is further confirmed by the results of the parametric study, shown in Figure 9. The effect of the activation energy is far more significant than that of the heat of reaction due to its primary and large impact on the reaction rate and process dynamics. Onset of the Run-Away Reactions As seen in the previous section, the reactions in the exhaust system generally cause the formation of a peak in the temperature of the gas and/or solid phases. In most cases, this peak temperature rises first and then falls as the reaction depletes the deposited reactants on the pipe; consequently, in most cases, the process proceeds without causing any damage to the system. However, under certain conditions, the rise in temperature is too rapid and too large for the system to tolerate and handle safely. Under these conditions, the process exhibits a rapid change in its dynamics that is similar to ignition in combustion systems. It is very important to determine the conditions that produce this critical situation and to make sure that the operating conditions are selected to stay safely far from this run-away ignition. In this section, some examples are presented to show how the process simulator can be used to determine the safe range of operation and the critical ignition value for a system parameter. Results in Figure 10 show the critical value of the deposit peak, C_b0max, while the other parameters are kept at their base values. As shown in Figure 10(a) and Figure 10(b), the critical value for this peak deposit concentration is about 28 mole/m^2. When this concentration is at 27.9 mole/m^2, the temperature peak rises first; but then, after 21 minutes, it begins to fall due to depletion of the solid reactant. However, at a slightly higher value of C_b0max (28 mole/m^2), the temperature rise accelerates rapidly after 20 minutes; at 21 minutes, it reaches about 60 degrees higher than the values corresponding to C_b0max = 27.9 mole/m^2. The temperature profiles predicted by the process model after the onset of the run-away reactions are not practically relevant since, under those conditions, the physical integrity of the system is compromised and the system is irreversibly damaged. Applying the same methodology, the process simulator can be used to determine the onset of the run-away reaction and the critical values of the operational parameters. The results for the heat of reaction are shown in Figure 11(a) and Figure 11(b). Similar results are observed for the inlet gaseous concentration, flow rate, and activation energy.
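Finding such a critical value amounts to locating, one parameter at a time, the threshold at which the peak-temperature behavior switches from self-extinguishing to run-away. A minimal sketch of automating that search is shown below; simulate_peak_temperature is a hypothetical wrapper around a run of the process model (for instance, the sketch above), and the fixed damage threshold is an illustrative criterion rather than the paper's.

def find_critical_value(run_away, lo, hi, tol=0.05):
    # Bisection for the threshold of a monotone run-away criterion;
    # assumes run_away(lo) is False and run_away(hi) is True.
    assert not run_away(lo) and run_away(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if run_away(mid):
            hi = mid      # run-away at mid: the critical value lies below
        else:
            lo = mid      # still safe at mid: the critical value lies above
    return 0.5 * (lo + hi)

def runs_away(cb0max, T_limit=900.0):
    # Illustrative criterion: flag run-away when the simulated peak solid
    # temperature exceeds a damage threshold (both placeholder choices).
    return simulate_peak_temperature(cb0max) > T_limit  # hypothetical model wrapper

# Example (search the deposit peak between 10 and 50 mole/m^2):
# critical_cb0max = find_critical_value(runs_away, 10.0, 50.0)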
The critical values and the time for the onset of the run-away reaction for each case are summarized in Table 2. Summary and Conclusions The dynamics of run-away energetic reactions and the formation of hot spots, involving reactive deposits downstream of ALD and CVD deposition tools, have been analyzed. A process model was developed to show the interactions of transport processes and reactions that lead to these energetic events. The process simulation includes convective and diffusive modes of mass transfer in the gas phase, heterogeneous reactions between the gas-phase reactants and reactive deposits on the walls, and various modes of heat transfer between the two phases and to the surrounding air. The results show that, in general, hot spots (peaks in the gas and solid temperatures) are formed at some locations in the exhaust pipe because of the accumulation of heat and its transport downstream. These hot spots typically move downstream, grow in size initially, and then dissipate as the deposited solid reactants are depleted. However, in some cases, the heat accumulation is rapid and localized, leading to an accelerated rise in the reaction rate and an accelerating process kinetics that resembles ignition in combustion systems. These conditions, called run-away reactions, are unsafe and damaging to the system and need to be avoided by proper system design and operating conditions. The process model developed in this study is a useful tool for predicting the critical run-away conditions for a predefined system configuration and operating conditions. The results show that the most critical system parameters affecting the occurrence of these energetic events are those dealing with the reaction and process dynamics, among them the concentrations of the gaseous and deposited reactants.
2019-05-17T14:23:48.402Z
2019-04-30T00:00:00.000
{ "year": 2019, "sha1": "175801a47e437d81c73f91ff4d498490ad87a7dc", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=92221", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "2af8b0b9e1b337a5a408f9eb1ffe7a42fd12c051", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science" ] }
261681937
pes2o/s2orc
v3-fos-license
Faster, Lighter, More Accurate: A Deep Learning Ensemble for Content Moderation To address the increasing need for efficient and accurate content moderation, we propose an efficient and lightweight deep classification ensemble structure. Our approach is based on a combination of simple visual features, designed for high-accuracy classification of violent content with low false positives. Our ensemble architecture utilizes a set of lightweight models with narrowed-down color features, and we apply it to both images and videos. We evaluated our approach using a large dataset of explosion and blast content and compared its performance to popular deep learning models such as ResNet-50. Our evaluation results demonstrate significant improvements in prediction accuracy, while benefiting from 7.64x faster inference and lower computation cost. While our approach is tailored to explosion detection, it can be applied to other similar content moderation and violence detection use cases as well. Based on our experiments, we propose a "think small, think many" philosophy in classification scenarios. We argue that transforming a single, large, monolithic deep model into a verification-based step-model ensemble of multiple small, simple, and lightweight models with narrowed-down visual features can possibly lead to predictions with higher accuracy. I. INTRODUCTION Automated content moderation has become an essential aspect of online platforms in recent years, with explosive growth in user-generated content. As video-sharing websites and online marketplaces have become popular, they have also become a hub for the dissemination of videos containing explosions, which could be disturbing and harmful to younger audiences. Content moderation, therefore, is crucial to ensure user safety and compliance with laws and regulations, particularly for video broadcasting where scenes of explosions are often depicted. One important aspect of content moderation in this context is the detection of scenes that are unsafe for kids, which includes identifying explosive content and other forms of violent or disturbing imagery. Automated explosion detection techniques are crucial for enabling quick and effective responses to ensure public safety and security, and to prevent the spread of harmful content on online platforms. In this context, there has been significant interest in applying deep neural networks to address many important real-world computer vision problems, including automated content moderation and explosion detection. However, to achieve higher accuracy, the complexity of neural networks has increased dramatically over time, leading to a challenge in terms of memory and computational resources. Moreover, larger datasets are required to provide higher prediction accuracy and address false positives. While ensemble learning techniques are designed to improve classification accuracy, they often create a subset dataset from the original dataset, leading to the same results since they are getting the same input. Therefore, ensemble models may not perform well without feature engineering. In this paper, we propose a novel ensemble structure consisting of two lightweight deep models that are designed to perform high-accuracy and fast classification for both image and video classification use cases, specifically targeting explosion detection.
Our design uses a verification-based combination of two lightweight deep models, each individual model making predictions on a different color feature: a main color-based model, which operates on the 3 RGB color channels, and a secondary structure-oriented model, which operates on a single grayscale channel, focusing more on the shape of the object through intensity than on learning about its dominant colors. We implemented and evaluated our approach for an explosion detection use case, where video scenes containing explosions are identified. Our evaluation results, based on experiments on a large test set, show considerable improvements in classification accuracy over using a ResNet-50 model, while benefiting from structural simplicity and a significant reduction in inference time as well as computation cost, by a factor of 7.64. While our approach is applied to explosion detection scenarios, it can be generalized to other similar image and video classification use cases. Based on the insights gained from our evaluations, we further make an argument to "think small, think many", which aims to beat the complexity of large models with the simplicity of smaller ones. Our approach replaces a single, large, monolithic deep model with a verification-based hierarchy of multiple simple, small, and lightweight models with step-wise contracting color spaces, possibly resulting in more accurate predictions. In this paper, we provide a detailed explanation of our approach and its evaluation results, which we believe will be beneficial for researchers and practitioners working in the field of computer vision. As automated content moderation and explosion detection are crucial for user safety and platform compliance with regulations, our proposed approach can play a vital role in making online spaces more secure for everyone. Furthermore, the increased efficiency in automated compliance and moderation can reduce the burden on manual efforts, allowing human moderators to focus on more nuanced content issues. The remainder of the paper is structured as follows. Section II provides an overview of relevant background information and related work. In Section III, we describe the design and architecture of our proposed ensemble approach. Section IV outlines our experimental setup and presents our evaluation results. In Section V, we discuss our findings and explain how our approach can be extended to other content moderation use cases. Finally, in Section VI, we summarize our contributions and offer concluding remarks. II. RELATED WORK Ensemble models generally aim to improve classification accuracy. Multiple models are used to make predictions for each data point, with each sub-model trained individually through variations in the input data. The predictions by each model are considered a vote, and all votes can later be fused into a single, unified prediction and classification decision. To achieve this, various techniques exist to combine predictions from multiple models. Voting, averaging, bagging and boosting are among the widely used ensemble techniques [1]-[3]. In max-voting, for instance, each individual base model makes a prediction and votes for each sample. Only the sample class with the most votes is included in the final prediction. In the averaging approaches, the average of the predictions from individual models is calculated for each sample.
In bagging techniques, the variance of the prediction model is decreased by random sampling and generating additional data in the training phase. In boosting, subsets of the original dataset are used to train multiple models, which are then combined in a specific way to boost the prediction. Unlike bagging, here the subset is not generated randomly. While effective, a requirement in many of these broad approaches is to create a subset dataset from the original dataset, which is used to make predictions on the whole dataset. So there is a high chance that these models will give the same result, since they are getting the same input. A body of work has shown the potential of ensemble methods in improving the performance and accuracy of deep learning models for a variety of classification tasks. In [4], the authors propose an ensemble algorithm to address overfitting by looking at the paths between clusters existing in the hidden spaces of neural networks. The authors in [5] present MotherNets, which enables the training of large and diverse neural network ensembles and aims to reduce the number of epochs needed to train an ensemble. In [6], the authors propose a geometric framework for a priori determining the ensemble size by studying the effect of ensemble size with majority voting and optimal weighted voting aggregation rules. Another approach that has been explored in the literature is the use of ensembles of weak classifiers, such as decision trees, to improve classification performance. For example, in [7], [8], the authors investigated decision tree-based ensemble learning in the classification of various cancers. In this paper, we argue that transforming a single, large, monolithic deep model into an ensemble of multiple smaller models can potentially enable higher accuracy, while benefiting from reduced training costs and faster inference time. Our proposed ensemble approach differs from these existing methods in several ways. First, we focus on a specific use case of content moderation, specifically the detection of violent or explosive content. Second, we use a set of lightweight models with narrowed-down color features, which reduces the computation cost and enables faster inference compared to larger, more complex models. Third, our ensemble architecture is designed to handle both images and videos, which is an important consideration for real-world content moderation applications. Our approach is independent, and can further be combined with the other ensemble techniques discussed above. III. METHODOLOGY Our proposed methodology is designed to address the limitations of using a single-model approach for image classification tasks, specifically for explosion detection. Our experimental results showed that using only a color-oriented model, which focuses on RGB color features, can result in false positives due to the misclassification of scenes with light-emitting sources. On the other hand, while a grayscale model can eliminate color-induced false positives, it introduces other false positives that are similar in structural shape to explosions or fires. To overcome these limitations, we propose a verification-based ensemble structure that combines the strengths of both models, Model C and Model L. Model C is used as the primary classifier, and its predictions are verified or validated by Model L, which is more structure-oriented and uses grayscale (L) features. This verification step helps to filter out false positives and improve overall prediction accuracy.
Figure 1 illustrates the abstract design of our proposed verification-based ensemble structure, which involves step-wise contracting color spaces. In this structure, if Model C predicts an input sample as negative, the overall prediction is negative; only if Model C predicts an input sample as positive is the prediction verified or validated with Model L. Given these facts, our proposed architecture is designed based on two-fold insights. Firstly, for many use cases, such as our explosion image classification use case, we observed that while the use of a single model operating on RGB-based color features provides higher accuracy, it can wrongly classify scenes such as sunlight, lamps, or other light-emitting sources as explosions. Similarly, while employing an individual model based on grayscale features can eliminate such color-induced false positives, it further introduces other false positives which are similar to explosions or fires in structural shape. Through our experimentation, we realized that a grayscale model can potentially identify bushes or trees, clay grounds, clouds, or steam as fire or explosion, simply due to the lack of color information. This problem is exacerbated in video frames due to motion blur. Therefore, we experimented with combining both color and grayscale intensity (which focuses more on structure) to provide higher classification accuracy. Figure 2 depicts an example false positive identified as an explosion when using only the grayscale model. Secondly, limited features (in this case, a reduced number of color channels obtained by removing chrominance and keeping only luminance as the color feature) would generally lead to lower learnability due to a lower number of features to train on. This means the model would incur higher recall and lower precision. Our evaluation showed that passing predictions from a supposedly higher-precision model to a model with higher recall can potentially lead to filtering out false positives and therefore increase overall prediction accuracy. Figure 3 illustrates the structural details of an extended version of our ensemble design, which is applied to video frames. To process video frames, each frame is captured and resized to 300x300, and the pixels, along with their 3 color channels, are forwarded to a frame pre-processing phase. We chose the input dimension of 300x300 based on a trade-off between computation cost and prediction accuracy. To reduce noise, an anti-aliasing technique is applied to every frame as part of the pre-processing phase. To extract features, color channels are separated from each frame, producing RGB color features (signified as C). A Color Channel Transformation module transforms the 3 RGB channels to grayscale-only features (signified as L). The original RGB features are passed to Model C directly, while the grayscale features are passed to Model L. Each model produces a binary prediction output, signifying positive (i.e., explosion in our use case) or negative (i.e., non-explosion). After conducting a thorough evaluation of precision and recall trade-offs, we selected a prediction threshold of 90% for both models. This threshold indicates that any detection with a score above 90% will be considered a positive detection. After the predictions from the individual models are made, our proposed post-processing technique applies a validation-based mechanism to the results from Model C (predictions C) and Model L (predictions L).
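As a concrete illustration of this per-frame flow, the following minimal Python sketch (our own; it uses OpenCV for the resize and color conversion, and assumes each model exposes a Keras-style predict method returning a positive-class probability) applies the 300x300 resize, derives the C and L feature views, and implements the verification rule with the 90% threshold.

import cv2
import numpy as np

THRESH = 0.90   # positive-detection threshold used for both models

def preprocess(frame_bgr):
    # Resize to 300x300 (area interpolation also smooths aliasing on
    # downscale) and derive the two views: RGB (C) and grayscale (L).
    small = cv2.resize(frame_bgr, (300, 300), interpolation=cv2.INTER_AREA)
    rgb = cv2.cvtColor(small, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    return rgb[None, ...], gray[None, ..., None]   # add batch (and channel) dims

def ensemble_predict(model_c, model_l, frame_bgr):
    # Verification-based fusion: Model C screens every frame; Model L is
    # consulted only to confirm Model C's positives.
    x_c, x_l = preprocess(frame_bgr)
    if model_c.predict(x_c)[0, 0] < THRESH:
        return False                               # C negative -> overall negative
    return model_l.predict(x_l)[0, 0] >= THRESH    # C positive -> verify with L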
This mechanism involves verifying the positive predictions made by Model C by comparing them with the predictions made by Model L. Only if Model L also makes a positive prediction is the overall prediction considered positive. This step helps to filter out false positives and improve the overall accuracy of the ensemble. After the predictions from the individual models are made, we employ a temporal coherence check to further improve the prediction accuracy for video frames. The temporal coherence check ensures that the predicted labels for subsequent frames in a video sequence are consistent. Specifically, for Model C predictions, we compute the majority vote of the labels for a set of three consecutive frames. If two or more frames in the set are predicted as positive, the majority label is considered positive. This majority value is then fused with the prediction from Model L through the same validation-based approach to derive the final outcome, using a 1-N validation approach. For every positive frame i (labeled as "explosion") in Model C predictions, we check the three neighbor frames i - 1, i, and i + 1 in Model L predictions. If at least one of these frames is predicted as positive (labeled as "explosion"), the final prediction is positive. While our video-specific post-processing approach can be generalized to other numbers of neighbors, such as 1 or 5, we found that our choice of three consecutive frames provides an efficient trade-off between the final precision and recall values. Figure 4 depicts the internal architecture of each of the two models. Both Model C and Model L are feed-forward convolutional neural networks that consist of multiple groups of a 2D convolution layer (Conv2D), followed by a Max-Pooling layer and a Batch Normalization layer. The models have five such layers, with the numbers of convolution filters and kernel sizes as illustrated in Figure 4: the first Conv2D layer has 32 filters with a 5x5 kernel, the second has 64 filters with a 3x3 kernel, and the remaining Conv2D layers have 128, 256, and finally 64 filters, each with a 3x3 kernel as well. To standardize the inputs passed to the next layer and accelerate the training process while reducing the generalization error, we apply batch normalization. We also use a dropout mechanism with a rate of 0.2 to prevent possible overfitting. The output features then pass through a Flatten layer, which flattens them into a 1-dimensional array for the next layer. The produced data is then passed through three dense layers and another dropout layer to achieve the final binary prediction. The Rectified Linear Unit (ReLU) is used as the activation function for all the Conv2D and Dense layers. While we designed this specific sequential neural network as our base model, we acknowledge that other lightweight models such as MobileNetV2 [9], SqueezeNet [10], or ShuffleNet [11] can also serve as a base model for our ensemble design.
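For concreteness, a minimal Keras sketch of one such base network is given below; the convolution filter counts and kernel sizes follow the description above, while the dense-layer widths, padding, pooling size, and exact dropout placement are not specified in the text and are illustrative guesses.

from keras.models import Sequential
from keras.layers import (Conv2D, MaxPooling2D, BatchNormalization,
                          Dropout, Flatten, Dense)

def build_base_model(channels):
    # Five Conv2D -> MaxPooling -> BatchNormalization groups, then a
    # flatten stage and dense layers ending in a single sigmoid unit.
    # channels=3 builds Model C (RGB); channels=1 builds Model L (grayscale).
    filters_kernels = [(32, 5), (64, 3), (128, 3), (256, 3), (64, 3)]
    model = Sequential()
    first = True
    for f, k in filters_kernels:
        kwargs = {"input_shape": (300, 300, channels)} if first else {}
        model.add(Conv2D(f, (k, k), activation="relu", padding="same", **kwargs))
        model.add(MaxPooling2D((2, 2)))
        model.add(BatchNormalization())
        first = False
    model.add(Dropout(0.2))                      # rate 0.2, as described
    model.add(Flatten())
    model.add(Dense(128, activation="relu"))     # dense widths are our guess
    model.add(Dense(64, activation="relu"))
    model.add(Dropout(0.2))
    model.add(Dense(1, activation="sigmoid"))    # binary prediction
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model_c = build_base_model(channels=3)   # color model
model_l = build_base_model(channels=1)   # grayscale model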
IV. EVALUATION To evaluate our approach, we created a dataset of approximately 14,000 images, consisting of around 8,000 negative and 6,000 positive images obtained from real explosion footage frames of videos. We split the dataset into training and validation sets, with the validation set comprising 20% of the whole dataset. Our models were implemented on an Intel x86 64-bit machine running Ubuntu 14.04.5 LTS, using Keras 2.3.1 with the TensorFlow 1.13.1 back-end. We trained each of our models for 400 epochs and saved the best model with the lowest validation loss error. Our evaluations of the proposed ensemble method were conducted against the popular ResNet-50 architecture using a set of 15 test videos of varying contexts, including popular TV series such as MacGyver, Britannia, and NCIS: Los Angeles, with an average duration of around 52 minutes and an average of 78,750 frames per video, encoded in 720p and 1080p resolutions. Human operators inspected the videos in multiple rounds to provide ground truth data with the time intervals where explosions happened, with an average of 10.75 distinct explosion scenes recorded as ground truth per test video. We compared the accuracy results of our proposed ensemble model and the ResNet-50 model used as the back-end of a state-of-the-art Faster R-CNN detection network [12] as part of our evaluation. We measured the median precision, recall, and F1 score of the two models on the classification task. Since the ground truth data provided by the human operators were recorded as time intervals, we converted the detection frames to timestamps and considered a match if a detection was within a second of the recorded ground truth time. Figures 5 and 6 illustrate how the number of parameters and the inference time of our proposed ensemble method compare with the popular ResNet-50 architecture. On an average video, our proposed approach was able to achieve 100% precision, which is significantly higher than the 67% precision achieved by the popular ResNet-50 model used as the back-end of a Faster R-CNN detection network. Our approach eliminated many false positives, potentially saving hundreds of hours of manual content moderation and explosion detection checks by removing the need to verify false detections in the reference videos through manual inspections. In addition, our proposed structure is significantly lighter, with almost 19x fewer parameters, and decreases inference run-time by a large factor, running almost 7.64x faster than the complex ResNet-50 model. These efficiency gains are critical in automated content moderation and explosion detection scenarios, where platforms need to moderate vast amounts of user-generated content quickly and accurately to ensure compliance with regulations and user safety. Figure 7 shows examples of correctly detected explosion scenes on test videos. V. DISCUSSION We believe that the design and experiments we conducted on the explosion detection use case are generalizable and can be applied to other similar image classification tasks and content moderation use cases as well. We therefore invite researchers to apply our design to other image or video classification tasks, especially those involving detection of non-rigid objects where color might be a dominant specification of the object, such as blood and gore detection, smoke detection, fire detection, steam detection, and so on. The proposed ensemble approach can be useful for identifying and removing inappropriate or harmful content from online platforms or during broadcasting content moderation. Based on the insights gained from our experiments, we propose a "think small, think many" strategy for classification scenarios.
We argue that transforming a single, large, monolithic deep model into a verification-based step-model ensemble of multiple small, simple, and lightweight models with narrowed-down features, like our shrinking color spaces, can lead to predictions with higher accuracy. In this paper, we demonstrate that our ensemble design was founded upon two base models, a primary and a secondary model, and used color spaces as the notion of features. The secondary model is fed with a limited set of the features delivered to the primary model, validating predictions made by the primary model. We believe that our design can be extended and generalized to a validation-based ensemble of three or more base models. Figure 8 depicts a sample illustration of our validation-based ensemble structure extended towards higher numbers of models. In this example extension, Model 2 validates predictions made by Model 1, and Model 3 further validates predictions made by Model 2, while the features passed get more limited as we iterate from Model 1 to Model 3. In this example, Model 1 operates on all three RGB channels, Model 2 operates on two color channels (RG, GB, or BR), and Model 3 performs validations only on a single color space, whether grayscale, or any of the R, G, or B channels. We believe this abstract concept can be generalized and extended to any feature set beyond only color spaces, which would be an avenue of exploration for the research community to consider. In conclusion, our work demonstrates the effectiveness of ensembling multiple small, simple, and lightweight models with narrowed-down features for image classification tasks. We hope that our proposed design and the open call to apply it to other image or video classification tasks will inspire and benefit the research community. VI. CONCLUSION In this paper, we proposed an efficient and lightweight deep classification ensemble structure, designed for "high-accuracy" content moderation and violence detection in videos with low false positives. Our approach is based on a set of simple visual features and utilizes a combination of lightweight models with narrowed-down color channels. We evaluated our approach on a large dataset of explosion and blast contents in TV movies and videos and demonstrated significant improvements in prediction accuracy compared to popular deep learning models such as ResNet-50, while benefiting from faster inference and lower computation cost. Our proposed approach is not only limited to explosion detection in videos, but can be applied to other similar content moderation and violence detection use cases as well. Based on our experiments, we suggest a "think small, think many" philosophy in deep object classification scenarios. We argue that transforming a single, large, monolithic deep model into a verification-based hierarchy of multiple small, simple, and lightweight models with narrowed-down visual features can potentially lead to predictions with higher accuracy, while maintaining efficiency and reducing computational requirements.
Influences of Vitamin A on Vaccine Immunogenicity and Efficacy

Vitamin A deficiencies and insufficiencies are widespread in developing countries, and may be gaining prevalence in industrialized nations. To combat vitamin A deficiency (VAD), the World Health Organization (WHO) recommends high-dose vitamin A supplementation (VAS) in children 6–59 months of age in locations where VAD is endemic. This practice has significantly reduced all-cause death and diarrhea-related mortalities in children, and may have in some cases improved immune responses toward pediatric vaccines. However, VAS studies have yielded conflicting results, perhaps due to influences of baseline vitamin A levels on VAS efficacy, and due to cross-regulation between vitamin A and related nuclear hormones. Here we provide a brief review of previous pre-clinical and clinical data, showing how VAD and VAS affect immune responses, vaccines, and infectious diseases. We additionally present new results from a VAD mouse model. We found that when VAS was administered to VAD mice at the time of vaccination with a pneumococcal vaccine (Prevnar-13), pneumococcus (T4)-specific antibodies were significantly improved. Preliminary data further showed that after challenge with Streptococcus pneumoniae, all mice that had received VAS at the time of vaccination survived. This was a significant improvement compared to vaccination without VAS. Data encourage renewed attention to vitamin A levels, both in developed and developing countries, to assist interpretation of data from vaccine research and to improve the success of vaccine programs.

INTRODUCTION

Vitamin A deficiency (VAD) adversely affects children and adults worldwide. Today, the World Health Organization (WHO) estimates that 250 million preschool children suffer from VAD, with the highest frequencies among low-income areas of Africa and South-East Asia [http://www.who.int (accessed March 01, 2019)]. Infectious diseases, particularly respiratory and diarrheal diseases, occur at increased frequencies (at least 2:1) among populations with VAD compared to vitamin-replete populations (1). The global burden of VAD and its effects on populations in developing countries are well known, but much less well appreciated are incidences of VAD and vitamin insufficiencies in the developed world (2)(3)(4)(5). In Memphis, TN, we tested influenza virus-infected children and their household contacts for retinol binding protein (RBP) as a surrogate for vitamin A (6), and found that 13 of 21 individuals were either vitamin A insufficient or deficient (5). We found that both infected and uninfected study participants exhibited low RBP levels. Low RBP can be a consequence of illness (7), but also reflects conditions of malnutrition when individuals in low-income families have limited access to nutrient-rich foods (8)(9)(10). Whereas infants in the United States may receive government-funded, vitamin-fortified formulas, comparable support is not given to older children (11). Instead, diets for older children and adults may be calorie-dense and nutrient-poor. As is the case in developing countries, vitamin insufficiencies and deficiencies in the developed world correlate with weakened immune responses and poor outcomes upon hospitalization for infectious disease (2,4). Unlike the situation in the developing world, individuals in the United States are usually assumed to be vitamin A-replete. Malnutrition may therefore go unnoticed.
Vitamin A Requirements, Metabolism, and Trafficking

Vitamin A is acquired from the diet in the form of retinoids (preformed vitamin A) or carotenoids (provitamin A). Retinoids include retinol or retinyl esters from animal sources, whereas carotenoids include beta-carotenes from plants. A blood level of <0.7 µM retinol is considered vitamin A "deficient" or "inadequate," and levels between 0.7 and 1.05 µM retinol are considered "insufficient" or "marginal" for some biological functions (12). Vitamin A is generally stored in the liver as esters, but can also be found in extra-hepatic sites such as lung, intestine, kidney, and adipose tissue (13,14). Retinol is the most common vitamin A metabolite in the blood and typically circulates in a complex bound to RBP at a 1:1 molar ratio. Retinol-bound RBP (holo-RBP) is, in turn, often bound to transthyretin, a common serum transport protein (15,16). Retinoids can also be transported by chylomicrons or chylomicron remnants in lymph and blood (14,17). Intracellularly, retinol is converted by retinol dehydrogenases (RDH, ubiquitous enzymes) to retinal, and then by retinaldehyde dehydrogenases (RALDH, e.g., ALDH1A) in select tissues to retinoic acid (RA) (18)(19)(20)(21). RA is the vitamin A metabolite best known for its ability to regulate innate and adaptive immune cell function, proliferation, and survival. Importantly, metabolism and trafficking of vitamin A, and consequent effects on the immune system, can be influenced by genetic backgrounds, diets, conditions of malabsorption, and obesity (22). In the case of obesity, animal experiments suggest that vitamin A may be deficient in tissues such as the lung, even when levels in blood appear to be replete (23).

Abbreviations: RA, retinoic acid; RAR, retinoic acid receptor; RXR, retinoid-X receptor; PPAR, peroxisome proliferator-activated receptor; RARE, retinoic acid response element; VAD, vitamin A deficiency; VAS, vitamin A supplementation; CFU, colony forming units; AID, activation induced deaminase; RDA, recommended daily allowance; RAE, retinoic acid equivalents; RBP, retinol binding protein; WHO, world health organization; ODS, Office of Dietary Supplements; NIH, National Institutes of Health; CSR, class switch recombination; TCR, T cell receptor; IP, intraperitoneally.

The nuclear retinoid receptors (RARs and RXRs) are promiscuous in binding to their ligands and to DNA. The RAR-RXR heterodimer will often bind two half-site sequences, known as retinoic acid response elements (RAREs), separated by a short spacer in the DNA (27,(39)(40)(41)(42)(43)(44). RAREs have a consensus sequence of 5′-(A/G)G(G/T)TCA-3′, though receptors can be bound to non-consensus DNA sites as well. The exact sequence and spacer length (typically zero to eight bases) can alter binding affinity. Additionally, receptors can bind indirectly to DNA by tethering to other DNA-bound factors. Cross-regulation between vitamin A and related nuclear hormones (e.g., vitamin D, thyroid hormone, or sex hormones) may occur, because nuclear hormone receptors can compete for binding to ligands, co-receptors, and DNA (27,40,(45)(46)(47)(48). RAREs are found throughout the genome, often within gene promoters or enhancers. Notably, hotspots for RAREs have been identified in switch sites of the immunoglobulin heavy chain locus, positions instrumental in class switch recombination (CSR) (49).
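The RARE consensus and spacer rule described above translate directly into a pattern search; the following is a minimal sketch (our illustration, with a toy sequence, not an analysis from the paper) of scanning DNA for two half-sites separated by a 0-8 base spacer.

```python
# Minimal sketch (our illustration) of scanning DNA for RARE-like direct
# repeats: two half-sites matching 5'-(A/G)G(G/T)TCA-3' separated by a
# spacer of zero to eight bases, per the consensus described above.
import re

HALF_SITE = "[AG]G[GT]TCA"
RARE_DR = re.compile(f"({HALF_SITE})([ACGT]{{0,8}}?)({HALF_SITE})")

seq = "TTAGGTCAAAGGGTTCACC"  # toy sequence containing a DR3-like element
for m in RARE_DR.finditer(seq):
    print(f"match at {m.start()}: {m.group(1)}-{len(m.group(2))}bp-{m.group(3)}")
```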
The potential binding of nuclear hormone receptors to switch sites and regulatory elements in immunoglobulin and T cell receptor (TCR) loci predicts a direct mechanism by which vitamin A may modulate lymphocyte function (49)(50)(51)(52). Adding to the complexity of vitamin A functions are the extra-nuclear activities. Vitamins bind a complex array of escort proteins at the cell membrane and in extra-nuclear compartments. Each of these interactions can initiate or modulate cell signaling (53,54).

Vitamin A and Immune Activities in vitro and in Small Animals

Essentially all cells of the immune system, including innate cells, B cells, and T cells, are affected by vitamin A (31,(55)(56)(57)(58)(59). Research animals with VAD generate poor antibody responses to many pathogens including parainfluenza virus and influenza virus (34,(59)(60)(61). VAS, when administered either orally or intranasally, can correct responses when given at the time of vaccination (33,59,60,62). In vitro, vitamin A has been shown to upregulate IgA production by B cells (18,63,64), and skew T cell phenotypes toward Treg rather than Th17 populations (65)(66)(67)(68)(69)(70), but in vivo, outcomes are less predictable (71). For example, whereas VAD cells may yield poor Treg activities in a tissue culture setting indicating a predisposition for heightened immune responses (65)(66)(67)(68), Tregs are found at equal or greater frequencies in tissues of VAD mice compared to controls following a respiratory virus infection (32). Furthermore, VAD mice exhibit relatively poor pathogen-specific T cell responses in vivo. In studies of influenza virus and parainfluenza virus infections, there are only weak virus-specific CD8+ T cell responses in the lower respiratory tract (LRT) of VAD mice (26). Responding CD8+ T cells in VAD and vitamin A+D deficient (VAD+VDD) mice express high levels of membrane CD103 (the αE subunit of αEβ7, an e-cadherin receptor). Possibly, the poor recruitment of CD8+ T cells to the LRT is because LRT tissues express relatively low levels of e-cadherin, and CD103+ cells home preferentially to other sites (26). When VAD+VDD mice receive VAS (with or without supplemental vitamin D), CD103 levels on virus-specific CD8+ T cells are reduced, and the percentages of CD4+ and CD8+ T cells in the LRT are improved (72). As another example of the complex influences of vitamin A on immune responses, we find that serum antibody isotype distributions differ between VAD and control animals, but patterns are dependent on the animal's background and sex (50). As a last example, some studies show that VAD biases the immune response toward a Th1 profile and that high levels of vitamin A bias the response toward a Th2 profile (68,73,74). Nonetheless, outcomes are again dependent on cell targets, environment, and activation state (25). Both Th1 and Th2 cytokine responses are evident in VAD mice, and VAD animals express higher levels of Th1 and Th2 cytokines compared to controls at late stages following a respiratory virus infection, presumably as a consequence of poor virus clearance (32). Vitamin A additionally influences epithelial cells and innate immune cells associated with mucosal surfaces. Dendritic cells of the intestine and epithelial cells of the respiratory tract each express the ALDH1A enzymes required for conversion of retinaldehyde to the end-metabolite RA (18,26,29). These unique attributes of mucosal tissues help explain why VAS assists immune responses when applied either orally or intranasally (33,62).
Due to the plethora of immune cell and barrier cell requirements for vitamin A, it is not surprising that VAD associates with poor immune responses to vaccines, and that VAS can reverse these weaknesses when given at the time of vaccination (33,34,(59)(60)(61)(62). One vaccine that deserves continued study in the context of VAD and VAS is Prevnar-13. It is estimated that, worldwide, pneumococcus kills close to 1 million children under the age of 5 each year (75,76). Prevnar-13 can protect against these mortalities (77,78), but the vaccine-induced immune response is not always protective. We suggest that attention to, and correction of, low vitamin levels in Prevnar-13 vaccine recipients may improve vaccine success. Previous studies have shown that VAD inhibits responses both to individual pneumococcus antigens and to Prevnar-13 in mice (59,61,(79)(80)(81). Here, we extend findings to show that VAS improves the immunogenicity and protective capacity of Prevnar-13 in VAD and control animals.

MATERIALS AND METHODS

To produce VAD mice, pregnant C57BL/6 (H2-b) mice were purchased from Jackson Laboratories (Bar Harbor, ME). Mice were placed on either a control or VAD diet upon their arrival in the animal facility at St. Jude (days 4-5 gestation). VAD (cat. no. 5WA2, Test Diets) and control (cat. no. 5W9M) diets differed only in vitamin A content, containing either 0 or 15 IU/g vitamin A palmitate, respectively. Mothers and progeny remained on their assigned diets. Experiments were begun when progeny reached adulthood. These adult mice were vaccinated with two doses of Prevnar-13 (PCV, Wyeth Pharmaceuticals Inc.) at 3-week intervals. Vaccine was diluted 1:40 in PBS, and 100 µL of PCV was administered intraperitoneally (IP). Immediately prior to each vaccination, mice received either 600 IU vitamin A (from Interplexus Inc., Kent, WA) or PBS by oral gavage (100 µL).

Challenges Post-vaccination

To prepare bacteria for challenge experiments, S. pneumoniae strain TIGR4 (serotype 4) was inoculated from a glycerol stock onto a Tryptic Soy Agar plate (GranCult, Millipore, Burlington, MA) supplemented with 3% sheep blood (Lampire Biological Laboratory, Pipersville, PA) and 20 µg/mL neomycin, and grown at 37 °C, 5% CO2. After overnight growth, bacteria were directly inoculated into Todd Hewitt broth (Becton Dickinson, BD, Sparks, MD) supplemented with 0.2% yeast extract (BD) and grown until mid-log phase, OD620 = 0.4. Cells were washed in PBS prior to animal infections.

FIGURE 1 | VAS and T4 polysaccharide-specific immune responses. Results from T4 ELISAs are shown for VAD (top row) and vitamin A-replete control (bottom row) mice. Separate ELISAs were conducted to measure T4-specific IgM, IgG1, and IgG3 antibodies. Statistical comparisons were made using Mann-Whitney tests and GraphPad Prism software (*p < 0.05, **p < 0.01, ***p < 0.001). IgM levels (for VAD mice) and IgG1 levels (for VAD and control mice), but not IgG3 levels, were significantly improved with VAS.

To challenge mice, 2 weeks after the vaccine boost, animals were sedated with 3% isoflurane. They were then inoculated intranasally with 5 × 10^5 CFU S. pneumoniae in 100 µL PBS. To collect and titer lungs, 24 h after infection, groups of animals were euthanized by CO2 asphyxiation and cervical dislocation. Lungs were removed, washed twice in ~1 mL of PBS and then placed in 0.5 mL PBS. Lungs were then pulverized with a mechanical tissue grinder. Following emulsification, lungs were spun for 5 min at 300 g to pellet debris.
Supernatants from the lung homogenates were collected and serially diluted 1:10 in PBS five times. From each dilution, 10 µL were plated on a Tryptic Soy Agar plate (GranCult, Millipore) supplemented with 3% sheep blood (Lampire Biological Laboratory) and 20 µg/mL neomycin. Plates were incubated overnight at 37 °C. Colonies were counted and titers were calculated in Excel (a minimal sketch of this titer arithmetic is given at the end of this passage). Separate groups of animals were infected as described above, monitored for signs of symptomatic infection, and euthanized when moribund.

RESULTS

Vaccine studies were conducted with male and female mice (either VAD mice or vitamin-replete controls) that were given two successive IP immunizations, separated by 3-week intervals, with the Prevnar-13 vaccine. Mice received either 600 IU of vitamin A as retinyl palmitate by oral gavage or phosphate buffered saline (PBS) at the time of vaccination. Antibody responses were measured 10-14 days after the second vaccine dose. ELISAs were conducted to examine antibodies specific for the type 4 (T4) component of the vaccine.

FIGURE 2 | VAS with vaccination improves survival after challenge. Challenge results are shown for VAD (top row) and vitamin-replete control (bottom row) mice. CFU per lung were measured 24 h after challenge (left). In separate groups of mice, survival was monitored (right). Animals were sacrificed when moribund. Survival curves were compared using GraphPad Prism software (*p < 0.05, **p < 0.01).

As shown in Figure 1, there was significant improvement of T4-specific antibodies, including IgM and IgG1 isotypes in VAD mice and IgG1 in control mice, when VAS was used. IgG3 levels were not significantly changed. Results were reminiscent of previous studies in rats using bacterial antigens and retinol treatments (79)(80)(81)(82). In a preliminary set of experiments, vaccinated animals were also challenged with a high dose (5 × 10^5 colony-forming units, CFU) of pneumococcus (Streptococcus pneumoniae strain TIGR4 [serotype 4]). At this dose, >90% of unvaccinated VAD and vitamin-replete control mice developed an infection, and the dose was 100% lethal in unvaccinated VAD animals (Figure 2). After 24 h, groups of 8-10 mice were sacrificed to measure lung titers. As shown, there were trends toward lower CFU in both VAD and control animals that received VAS at the time of vaccination. A separate set of mice was tested for survival post-challenge. There were significant improvements in survival for VAD and control animals that received VAS at the time of vaccination compared to unsupplemented, vaccinated animals. In fact, all animals that received VAS at the time of vaccination, regardless of original vitamin A status, survived.

DISCUSSION

We have shown that VAS supports improvements in the immunogenicity of Prevnar-13 in VAD and control mice. With a preliminary study, we also showed that when VAS was given coincident with vaccination, protection against a subsequent challenge with S. pneumoniae was improved. Based on these promising results, we are now initiating a randomized clinical study to test the effects of VAS among Memphian children vaccinated with Prevnar-13 (clinicaltrials.gov, PCVIT NCT03859687).

Previous Research on VAD and VAS in Humans

Public health organizations strive to improve dietary nutrition worldwide, but this goal depends on delivering vitamin-rich foods to all populations, a formidable task. For infants in countries where VAD is known to be endemic, the WHO has supported high-dose VAS programs (83).
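The plate-count titer arithmetic referenced in the Methods above can be sketched as follows; this is a minimal illustration in which the countable-range rule, volumes, and function names are our assumptions (the study itself used Excel for this calculation).

```python
# Minimal sketch (our illustration) of CFU titer calculation from a
# five-step 1:10 dilution series with 10 µL plated per dilution.

def cfu_per_ml(colony_counts, plated_volume_ml=0.010, dilution_step=10):
    """Estimate CFU/mL of the undiluted lung homogenate supernatant.
    `colony_counts[i]` is the colony count for the (i+1)-th 1:10 dilution.
    Uses the first plate in the conventional countable range (assumption)."""
    for i, count in enumerate(colony_counts):
        if 30 <= count <= 300:  # conventional 30-300 colony rule (assumption)
            dilution_factor = dilution_step ** (i + 1)
            return count / plated_volume_ml * dilution_factor
    return None  # no plate in the countable range

# Example: counts across dilutions 10^-1 .. 10^-5
print(cfu_per_ml([420, 180, 21, 3, 0]))  # 180 / 0.010 * 100 = 1.8e6 CFU/mL
```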
Children often receive 100,000 IU vitamin A between the ages of 6-11 months and 200,000 IU every 4-6 months between the ages of 12-59 months (84). Research analyses of VAS have yielded positive results in countries where VAD is frequent. In meta-analyses of clinical trials, VAS was shown to reduce deaths by 12-24%, and in isolated studies, reductions of 35-50% were observed (1,(85)(86)(87)(88)(89)(90)(91)(92). VAS reduced morbidities due to infectious diseases, including measles, Plasmodium falciparum, and HIV (93)(94)(95)(96). VAS benefits were also observed when antibody responses were measured, including those to vaccination (97,98). Some studies have shown improved responses to the measles and tetanus toxoid (TT) vaccines following VAS (99)(100)(101). Unfortunately, despite the positive influences described above, results from clinical VAS research have been inconsistent. VAS studies have often failed to show benefit, and have in some cases demonstrated risk. As an example, Malaba et al. did not observe an effect of VAS on infant mortality among children born to HIV-negative mothers with apparently adequate baseline vitamin A levels (102). There have also been reports of increased mother-to-child transmission (MTCT) of HIV in the context of VAS (103,104). Additional noted risks of high-dose VAS were fontanelle bulging in infants (105,106) and bone density loss [possibly due to cross-inhibition between related nuclear hormones, in this case vitamins A and D (107,108)]. VAS studies in the context of vaccine programs have also yielded conflicting data (97,109,110,112), unlike the situation for older infants (100,113,114). Explanations for differences in VAS efficacy among clinical studies have addressed effects of age, maternal antibodies, serum antibody levels, and serum vitamin levels, but a consensus has not been reached (90). The contradictory results described above have encouraged the scientific community to question indiscriminate use of VAS, particularly in communities where nutrition has improved and where many children are vitamin replete (97,106,115). Suggestions have been made to redirect efforts toward the use of low-dose VAS and/or toward support of improved diets (116). One clear weakness in past clinical research is that comprehensive baseline vitamin levels of study participants for vitamin A and the related, cross-regulatory nuclear hormone vitamin D (47,(117)(118)(119) were rarely reported. Instead, vitamin status has often been predicted based on previous population studies (e.g., frequencies of xerophthalmia). This strategy does not address changing diets within communities or individual differences among study participants. Currently, perceptions of VAD frequencies may thus be falsely high for certain developing countries and falsely low for the developed world. The situation differs dramatically from research studies in small animals, wherein host backgrounds and diets are homogeneous and test animals differ from controls by a single defined variable. A full comprehension of how VAS differentially affects humans with replete, insufficient, or deficient vitamin A and D levels remains elusive. A long-term solution to VAD in humans will require close attention to host characteristics, particularly baseline vitamin A and D levels (52). Improvements in diets should be a primary focus, with VAS programs developed as a back-up solution to malnutrition.
For best outcomes with VAS, programs may require customization, with modification of supplements by frequency or dose, dependent on baseline characteristics of vaccine recipients. With attention to pre-existing vitamin levels and cautious administration, VAS programs may ultimately ensure that (i) vaccinated children and adults are vitamin A replete worldwide, (ii) toxicities are avoided, and (iii) world populations maintain robust immune responses to pathogens and vaccines.

DATA AVAILABILITY

All datasets generated for this study are included in the manuscript.

AUTHOR CONTRIBUTIONS

JH and JR contributed to experimental design, data analyses, and the writing and review of the manuscript. RP, HR, SS, and RS contributed to experimental design, performance of the experiments, data analyses, and the writing and review of the manuscript.

FUNDING

Funding was provided in part by NIH NCI P30 CA21765 and ALSAC.
Sentinel Node Biopsy in Ductal Carcinoma in Situ: Is it Justifiable?

Background: For invasive breast cancer, sentinel node biopsy (SNB) is an acceptable alternative to axillary node clearance (ANC), although in the recent era, its role is under review. In ductal carcinoma in situ (DCIS), the benefit of SNB is even less well defined. Despite this, guidelines still recommend that it is performed in selected cases of DCIS. The aim of our study was to evaluate the diagnostic value of performing SNB in DCIS. Methods: Patients with a diagnosis of DCIS who underwent axillary staging with SNB between 2008-2019 in our large-volume tertiary centre were identified and included in the study. Results: Out of the 48 patients who were identified, four patients had a positive SNB (8%). Two of those patients were found to have micrometastatic disease. None of the patients with a positive SNB had local or systemic recurrence (median follow-up: 40 months). One non-breast cancer-related mortality was reported. Two patients were identified who had recurrent disease, one with an invasive recurrence in the breast, and the other with systemic recurrence in the form of bone disease. Both of these patients had a negative SNB. Conclusion: Our results confirm that performing axillary staging with SNB in DCIS is not justifiable, as it does not affect patient outcomes. This supports the emerging evidence that being more surgically conservative may decrease morbidity without affecting patient survival.

Introduction

Ductal carcinoma in situ (DCIS) is defined as a type of breast cancer in which there is an abnormal proliferation of cells within the milk ducts which have not invaded beyond the basement membrane into the surrounding tissues. It is, therefore, an "in situ" cancer, which does not spread to the lymph nodes or distant organs. DCIS accounts for approximately 20% of screen-detected breast tumors in the UK [1]. The vast majority of cases are diagnosed on screening, with up to 90% of cases being impalpable and asymptomatic. Only 10% are associated with symptoms, which include a mass, nipple discharge, and ulceration (Paget's disease). The incidence of DCIS has increased in recent years, widely attributed to the use of screening programs as mentioned, but also to technological advancements improving diagnosis. Over 60,000 women are diagnosed with DCIS each year in the USA, over 7000 in the UK, and over 2500 in the Netherlands [2]. Due to the characteristic inability to spread, a biopsy to assess metastasis to the lymph nodes in the axilla in DCIS would logically not be considered necessary. However, in approximately 20%-30% of resections for DCIS, invasive cancer is discovered in the post-operative histological specimen. It is well established that invasive cancer correlates with sentinel node metastasis, with an incidence of involved nodes of 15.6%, compared with 2% in pure DCIS [3]. It is not possible to pre-operatively predict which patients with DCIS will also have occult invasive disease. Parameters which are considered to convey an increased risk are: size >50 mm, presence of extensive calcifications on mammography, and clinical presentation with a palpable lump. Sentinel node biopsy (SNB) is therefore currently offered to these patients, with this justification. In planned mastectomy for widespread DCIS in the breast, SNB is also commonly performed, as it would not be feasible at a later stage should an invasive component subsequently be found [4].
Despite these criteria, SNB is still reported to be performed in surgery for DCIS in up to 51% of cases [3]. SNB is carried out by a subcutaneous injection of a radioisotope, with or without blue dye, into the breast, in order to trace the lymphatic channels to the first draining lymph node of the breast. The procedure involves minimal dissection and division of lymphatic channels compared to formal axillary node clearance (ANC); nevertheless, complications are extensive and confer significant morbidity. They include infection, seroma, hematoma, anaphylaxis, axillary vein injury, shoulder stiffness, limitation in shoulder range of motion, and lymphedema [5]. Recent focus, with the advent of adjuvant therapies such as radiotherapy, has been on tailoring breast cancer management strategies to the individual, with the aim of maximizing the benefits and avoiding unnecessary risks. In known invasive breast cancer, there is a shift towards minimizing invasive dissection in the axilla in clinically node-negative patients with one to two sentinel node metastases, with seemingly no adverse effect on mortality [2]. This in turn has led to trials investigating whether SNB can be omitted altogether in breast-conserving surgery (BCS) under select conditions [6]. If invasive axillary treatment can be safely minimized or avoided in invasive breast cancer, this prompts discussion for re-evaluating axillary management in DCIS. However, evidence for this is currently inadequate due to the paucity of data in the literature and a lack of long-term follow-up studies [7]. The aim of this study is to evaluate whether SNB is a justifiable management strategy for DCIS, or whether it can be safely omitted.

Materials And Methods

All consecutive patients diagnosed with DCIS in our large-volume tertiary center from 2008 to 2019 were retrieved from two databases: the local breast reconstruction database and the coding department database. Fifty-one patients were identified who fulfilled our detailed search criteria, namely patients who had a diagnosis of DCIS on core biopsy and subsequently had axillary surgery performed. Three patients were excluded, as their DCIS was recurrent, with a previous diagnosis of ipsilateral carcinoma. The axillary surgery was performed either upfront, intraoperatively, or postoperatively as completion axillary staging when invasive disease was found in the resection specimen. Clinically node-negative patients were included. Patients with ipsilateral invasive breast cancer were excluded. Data were extracted electronically for all patients and included basic patient demographics such as age, core needle biopsy methods, surgical procedures, and pathology reporting on tumor type, grade, presence of microinvasion, size, and (sentinel) lymph nodes. In our pathology department, specimens are processed according to national guidelines (National Institute for Health and Clinical Excellence (NICE)). In accordance with this, whenever there is uncertainty in histopathological core biopsy specimens, an image-guided vacuum-assisted biopsy is performed. In the event of high clinical suspicion in patients presenting with a palpable lump or thickening, a clinical core (non-image-guided) biopsy is performed, or patients undergo diagnostic excisional biopsy. Statistical analysis of data was carried out using the Statistical Package for the Social Sciences (SPSS Inc., Chicago, IL).
Correlations between variables which are considered to be predictors of the risk of synchronous invasive disease in patients with DCIS were analyzed using Pearson's coefficient of correlation.

Results

We identified a total of 51 patients who had DCIS on their core biopsies and subsequently underwent an SNB. Three patients were excluded, due to their DCIS being a local recurrence. Upfront (prior to cancer surgery) SNB was performed in 2/48 (4%) cases; in both cases the decision to stage the axilla upfront was taken due to the presence of large areas of calcifications in young patients. Surgery for DCIS and simultaneous SNB was carried out in 41/48 (85%) of patients. In 3/48 (6%) of cases, axillary staging was performed after the initial cancer resection surgery, as invasive cancer was discovered in the histological specimen. 2/48 (4%) patients had mastectomies for DCIS without any staging of the axilla. Out of these 48 patients, 4/48 (8%) had a positive SNB; 2/48 (4%) patients had micrometastases, which were detected intra-operatively using one-step nucleic acid amplification (OSNA), and 2/48 (4%) had macrometastases. None of the patients with positive sentinel nodes had local or systemic recurrence; the median follow-up was 40 months, with a minimum of two months and a maximum of 10 years and 11 months (confidence level at 95% = 1). Those patients who had positive sentinel nodes were analyzed case by case to identify factors which might potentially be associated with sentinel node positivity (Table 1). An ANC was performed in one patient (Patient A) who had three macrometastases in her SNB. No invasive disease was found in her breast despite re-examination of her breast specimen by a further two consultant histopathologists. Her sentinel nodes demonstrated metastatic high-grade carcinoma and expression of human epidermal growth factor receptor 2 (HER-2); therefore, she was commenced on chemotherapy and Herceptin. She underwent an ANC following her adjuvant treatment, which did not demonstrate any further involved nodes. Both Patients B and D, who had micrometastasis to one node only, never had any further axillary surgery, in line with the NICE guidance. Patient C, who had one micrometastatic node along with one macrometastatic node, never had a completion ANC, as they were too frail for the procedure. We considered risk factors in the literature which are known to increase the likelihood of having invasive disease in patients who have DCIS. Microinvasion was associated with a higher risk of sentinel node positivity (Pearson's correlation −0.425, p = 0.003, two-tailed; significant at the 0.01 level). No other factors in our cohort, including the size and grade of DCIS and the age of the patient at diagnosis, demonstrated a significant correlation with having a positive sentinel node. In our cohort of patients diagnosed with DCIS, two patients were identified who had recurrent disease. One patient had an invasive recurrence in the breast, and the other patient had systemic recurrence in the form of bone disease. Both of these patients had a negative SNB. We found that the survival of patients was not affected by a diagnosis of DCIS, nor by having involved sentinel nodes, regardless of whether these were micro- or macrometastases. The only mortality recorded was non-breast cancer-related, and was linked to other comorbidities of the patient.
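For reference, the correlation reported above can be reproduced with standard tools; the study used SPSS, but an equivalent Pearson computation in Python with SciPy is sketched below. The binary codings shown are invented placeholders, not the study's records.

```python
# Minimal sketch (our illustration; the study itself used SPSS) of the
# Pearson correlation between microinvasion and sentinel node positivity.
from scipy.stats import pearsonr

# Placeholder binary coding: 1 = present, 0 = absent (invented data).
microinvasion = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
node_positive = [1, 0, 0, 1, 0, 0, 0, 0, 0, 0]

r, p_two_tailed = pearsonr(microinvasion, node_positive)
print(f"Pearson r = {r:.3f}, two-tailed p = {p_two_tailed:.3f}")
```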
Discussion

With the widespread dissemination of breast cancer screening in the Western world, breast cancer is being diagnosed increasingly frequently. Currently, DCIS accounts for 10%-20% of newly diagnosed breast cancers [8]. The influence of the presence of microinvasive disease on outcomes has been hotly debated. Cavaliere et al. [9] suggested that the natural history of microinvasive DCIS resembles DCIS and is different from T1a invasive ductal cancers. They demonstrated no axillary recurrence or distant metastases in their cohort of 31 patients over long-term follow-up (mean 75 months). This appears to be in agreement with our findings, where microinvasive disease was correlated with a higher risk of a positive sentinel node but did not affect the survival of patients or confer an increased risk of local recurrence. We found that 8% (4/48) of patients with DCIS also had invasive disease in their operative specimen and went on to have axillary staging. None of these patients subsequently had macro- or micrometastases in their sentinel nodes. These findings appear to be in agreement with the conclusions of Fallowfield et al. [10], which suggested that watchful waiting, in comparison to performing axillary lymph node staging, showed no significant differences in regional recurrence rates after a mean follow-up of 6.3 years. Leading international societies, such as the National Comprehensive Cancer Network (NCCN) and the American Society of Clinical Oncology (ASCO), do not recommend axillary lymph node evaluation for patients undergoing BCS for DCIS, as non-invasive breast cancer cannot lead to lymph node metastasis. An SNB may, however, be performed in patients undergoing total mastectomy, as well as in cases of surgery performed in anatomical sites which would potentially compromise a future procedure (central breast, upper outer quadrant, or axillary tail) and in cases of large-volume DCIS [11]. Our results concur that performing an SNB did not affect the oncological outcomes of almost all of our patients. The morbidity of SNB includes lymphoedema in approximately 7.5% of cases, as well as other known complications (shoulder stiffness, numbness, infection, etc.) [8,12]. A retrospective review conducted in cases of DCIS treated with BCS in the Netherlands [13] between 1998 and 2011 demonstrated a rate of axillary surgical procedures of 43.9% (including 4.5% for ANC). Clearly, this is higher than the guidelines would suggest. Our results are in line with previous research which has shown that axillary staging in DCIS, even if selective, has no correlation with patients' management plans or outcomes. One of the limitations of our study was the retrospective design. Although allowing a higher number of study subjects, retrospective cohort studies are known for their selection bias of the studied population. In our study, we tried to obviate this by retrieval of data from two different sources. Another limitation is the relatively small sample size, although this is representative of the small number of axillary staging procedures performed in cases of DCIS.

Conclusions

Despite the low association between the high-risk characteristics of DCIS and the involvement of axillary nodes, axillary staging via SNB is still routinely performed in selected patients in the UK. In accordance with our findings, SNB in patients with DCIS should not be performed, as it is invasive, may confer significant morbidity, and does not improve patients' oncological outcomes.
We are in need of a larger-scale study with robust data analyses to incorporate these findings into the management of all cases of DCIS.

Additional Information

Disclosures

Human subjects: Consent was obtained by all participants in this study. Ethical approval was not required for this study. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Marine Pseudovibrio sp. as a Novel Source of Antimicrobials

Antibiotic resistance among pathogenic microorganisms is becoming ever more common. Unfortunately, the development of new antibiotics which may combat resistance has decreased. Recently, however, the oceans and the marine animals that reside there have received increased attention as a potential source for natural product discovery. Many marine eukaryotes interact and form close associations with microorganisms that inhabit their surfaces, many of which can inhibit the attachment, growth or survival of competitor species. It is the bioactive compounds responsible for the inhibition that are of interest to researchers on the hunt for novel bioactives. The genus Pseudovibrio has been repeatedly identified from the bacterial communities isolated from marine surfaces. In addition, antimicrobial activity assays have demonstrated significant antimicrobial-producing capabilities throughout the genus. This review will describe the potency, spectrum and possible novelty of the compounds produced by these bacteria, while highlighting the capacity for this genus to produce natural antimicrobial compounds which could be employed to control undesirable bacteria in the healthcare and food production sectors.

Introduction

Antibiotic resistance among pathogenic microorganisms is becoming ever more common. Recent data published by the Centers for Disease Control and Prevention (CDC) estimated that the number of illnesses and deaths caused by antibiotic resistance exceeded 2 million and 23,000, respectively, in the United States [1], and 25,000 deaths annually in Europe are attributed to resistant hospital infections [2]. Drug-resistant pathogens are currently placing a heavy financial burden on health systems worldwide, with infections due to selected multidrug-resistant bacteria in the EU estimated to result in extra health-care costs and productivity losses of at least €1.5 billion each year [3]. These worrying trends of increased multidrug resistance have led to a call by the Infectious Diseases Society of America for the delivery of ten new antibiotic drugs by 2020 [4]. Unfortunately, however, the number of new antibiotics being developed has decreased, rather than increased, in recent years [5]. The ever-reducing rate at which novel and potent antimicrobials are emerging [6] means we are becoming ever more dependent on our current arsenal of antibiotics, which are rapidly losing potency. There is an urgent need for the discovery and supply of novel and potent antimicrobials from natural resources. Nearly 10% of known biologically active natural products are of microbial origin [6], with terrestrial bacteria considered well studied when compared to marine bacteria, which have in recent years been increasingly recognised for their biotechnological potential [7]. Current estimates for the global market for marine biotechnology products and processes stand at €2.8 billion (2010) [8]. Not surprisingly, focus has recently shifted to the oceans as a potential source for natural product discovery. Antimicrobials derived from marine microorganisms are of particular interest due to the suite of compounds being produced [9][10][11][12][13][14]. The aim of this review is to highlight a marine-derived bacterial genus, Pseudovibrio spp., whose members have shown themselves to be producers of bioactives, and to propose that this genus is a novel and relatively untapped source of antimicrobial compounds.
Characterisation of the Genus Pseudovibrio

Even though the marine genus Pseudovibrio has been isolated throughout the marine environment, it was described for the first time only ten years ago. To date, four species have been described, i.e., P. denitrificans, P. ascidiaceicola, P. japonicus and P. axinellae sp. nov. The first strain of the genus Pseudovibrio to be identified, a P. denitrificans isolate, was sourced from coastal seawater in 2004 [22]. More specifically, Shieh and colleagues isolated two denitrifying strains, designated DN34T and DN33, from seawater samples collected in Nanwan Bay, Kenting National Park, Taiwan, for which the genus name Pseudovibrio gen. nov. was proposed. Characteristics associated with this species can be seen in Table 1 [22]. In 2006, based on the results of phylogenetic and phenotypic analyses of two strains isolated from ascidians (sea squirts), a novel species, P. ascidiaceicola sp. nov., was proposed [20]. In addition to the results in Table 1, tests for β-glucosidase, arginine dihydrolase and urease are positive. Indole is produced from tryptophan, and gelatin and aesculin are hydrolysed [20]. Pseudovibrio japonicus strain WSF2T was isolated in 2007, from surface seawater off the coastline of the Boso Peninsula, Japan, and examined [30]. Tests in addition to those outlined in Table 1 also yielded positive results for alkaline phosphatase, esterase (C4), esterase lipase (C8), leucine arylamidase, valine arylamidase, trypsin, acid phosphatase, naphthol-AS-BI-phosphohydrolase and β-galactosidase [30]. More recently, in 2013, O'Halloran et al. [31] described strain Ad2T which, on the basis of phylogenetic analysis, DNA-DNA hybridization and differential phenotypic characteristics, was proposed as the type strain of a novel species, for which the name Pseudovibrio axinellae sp. nov. was proposed. Tests in addition to those mentioned in Table 1 revealed that aesculin, casein, DNA and gelatin are hydrolysed, while starch is not; the strain does not utilise α-lactose or D-xylose, and gives negative results for arginine dihydrolase, lysine decarboxylase, ornithine decarboxylase, lipase (C4), cysteine arylamidase, chymotrypsin, α-galactosidase, β-glucuronidase, α-glucosidase, β-glucosidase, N-acetyl-β-glucosaminidase, α-mannosidase and α-fucosidase [31].

Bioactivity of the genus Pseudovibrio

The most commercially significant characteristic of Pseudovibrio strains is the production of secondary metabolites, which has been reported in many studies and will be discussed in detail below. Over the last decade there has been a considerable increase in the number of studies relating to the antimicrobial activity of a variety of bioactive compounds produced by Pseudovibrio spp. These studies highlight that this genus has the potential to be a particularly promising source of novel metabolites. Two families of enzymes, i.e., polyketide synthases (PKS) and non-ribosomal peptide synthetases (NRPS), and their hybrids (PKS/NRPS), are of particular importance in the production of various secondary metabolites, many of which are important natural products, across a wide range of microorganisms. Analysis of the presence of PKS and NRPS genes is often employed as a means of determining the likelihood that a microorganism has the potential to produce new bioactive compounds. For example, of the four Pseudovibrio strains cultured by Kennedy et al. [32] from the marine sponge H. simulans, two of which had a 99% 16S rRNA gene identity match to P. ascidiaceicola F10102, three were found to contain both putative PKS and NRPS genes, suggesting a high potential for secondary metabolite production.
In 2009, this group assessed the bioactivity of these Pseudovibrio bacteria [19]. The three Pseudovibrio strains, PV1, PV2 and PV4, that contained PKS and NRPS genes exhibited antimicrobial activity against methicillin-resistant S. aureus, B. cereus, E. coli and B. subtilis. Interestingly, Pseudovibrio strain PV3, which did not contain PKS and NRPS genes, did not show antimicrobial activity against the target strains [19]. The presence of PKS and NRPS genes that might be involved in the production of secondary metabolites by sponge-associated microorganisms was also investigated in a study by Graça et al. [25]. In agreement with previous studies [33], it was found that the majority of bioactive bacteria in which PKS-I and NRPS genes were detected were Pseudovibrio. More specifically, Graça et al. found that of the 212 bacteria isolated from the marine sponge E. discophorus, 31% produced antimicrobial compounds. Of these 66 bioactive-producing isolates, the most bioactive genus was Pseudovibrio (47%), with bioactivity observed against B. subtilis ATCC6633, S. aureus MRSA and Aliivibrio fischeri CECT 524 [25]. This high level of activity within the genus has also been reported by Flemer et al. 2011 [34]. In that case, out of the thirty Pseudovibrio strains isolated, 27 (90%) exhibited antimicrobial activity against at least two of three clinically relevant bacterial strains tested, i.e., E. coli NCIMB 12210, B. subtilis IA40, and S. aureus NCIMB 9518 [34]. The bioactivity profiles observed indicated the production of different antimicrobial compounds and again highlighted the broad range of activities associated with strains from this genus. This team also studied bioactive bacteria from the marine sponges A. fucorum and E. major [29]. In this instance, of the 409 bacterial strains isolated from both sponges and tested for antifungal and antibacterial activity, all of the strains exhibiting antibacterial activity were Pseudovibrio spp. More specifically, eight out of twelve Pseudovibrio strains isolated demonstrated activity. This activity was observed against at least one of the three targets employed, i.e., the aforementioned E. coli NCIMB 12210 as well as B. subtilis IE32 and S. aureus NC000949 [29]. Particularly broad-spectrum activity by Pseudovibrio strains was demonstrated by Santos et al. [28]. It was noted that although activity was seen against both Gram-positive and Gram-negative target strains, the Pseudovibrio strains were more effective against Gram-positive bacteria. Tests to characterize the bioactive compounds produced by the Pseudovibrio strains showed that the substances were resistant to the action of all proteolytic enzymes tested, suggesting that these antimicrobial substances are not ribosomal proteins. Biofilm production by the isolated strains was observed and was particularly apparent when strains Pm31 and Mm37 were studied [28]. It has been hypothesized that the ability to produce antibacterial compounds combined with an enhancement in biofilm formation may give bacteria a selective advantage and possible dominance over other surface-attached bacteria [35]. O'Halloran et al. [33], referred to briefly above, also demonstrated broad-spectrum activity among a population of 73 Pseudovibrio isolates from the marine sponges A. dissimilis, P. boletiformis and H. simulans.
This involved an initial screen using deferred antagonism assays, which revealed that 62 isolates (84.9%) demonstrated antimicrobial activity against at least one of the indicator strains tested. The majority of the isolates showed activity against E. coli (58; 79.5%) and S. Typhimurium (54; 74%). Fourteen different antimicrobial activity spectra were identified, suggesting that the Pseudovibrio spp. may be producing a number of different antimicrobial compounds. The Pseudovibrio isolates were also screened for the presence of PKS genes using degenerate PCR, with ketosynthase gene fragments being found in all 73 isolates. Although secondary metabolite production and antimicrobial activity have been shown in many Pseudovibrio-related studies, in many cases the corresponding compounds have not been the focus of further analysis. However, some studies have characterised the relevant bioactive compounds. In one case, isolate Z143, a bacterium from a Philippine tunicate which had 100% 16S rRNA gene similarity with the P. denitrificans type strain DN34, was reported in 2007 by Sertan de-Guzman [15] as the first α-proteobacterium to produce the red pigment heptylprodigiosin (Figure 1A), also known as 16-methyl-15-heptyl-prodiginine, which shows anti-Staphylococcus aureus activity. Vizcaino et al. [21] revealed the production of a novel polypeptide, pseudovibrocin, which was isolated from three unique coral-derived bacteria with 99% 16S rRNA gene similarity to P. denitrificans that were capable of inhibiting Gram-positive and Gram-negative bacteria. An acetone extract of the associated cell-free supernatant was found to inhibit Kocuria rhizophila, and a methanol extract inhibited B. subtilis, Vibrio harveyi and V. coralliilyticus. High-performance liquid chromatography analysis of the methanol extract suggested the presence of at least two antibiotics, one of which was shown to be pseudovibrocin. Geng and Belas [16] studied the biosynthesis of tropodithietic acid (TDA), a tropolone antibiotic, by a number of strains. In addition to other members of the Roseobacteraceae family, i.e., Silicibacter sp. TM1040 and Phaeobacter gallaeciensis (both genera known to be TDA producers), Pseudovibrio sp. JE062, previously isolated by Enticknap et al. [24], was also found to be a TDA producer. The twelve genes required for TDA biosynthesis in strain JE062 were identified by transposon insertion mutagenesis, and the organization of a number of the associated genes, tdaA-F, was found to be identical to that of the other bacteria in this study. The production of TDA by another Pseudovibrio sp., strain D323 (Figure 1B), has also been reported by Penesyan et al. [23]. This marine epiphytic strain was 99% 16S rRNA gene identical to P. ascidiaceicola (GenBank Acc. #AB175663) and exhibited antimicrobial activity against target strains from the phyla Alphaproteobacteria, Gammaproteobacteria, Bacteroidetes, Firmicutes and Actinobacteria. The authors in particular noted that TDA produced by D323 was highly active against Nautella sp. R11, which causes disease in the marine seaweed Delisea pulchra, thereby supporting the hypothesis that these host-associated bacteria serve as a defence against potential pathogenic surface colonisers. Genome-based methods such as comparative bacterial genomics [36] and "genome scanning" of sequenced genomes of natural-product-producing bacteria [37] can also yield valuable information on the diversity of bacterial species and can lead to bioactive product discovery.
In this regard it is relevant that Bondarev et al. [38] carried out analysis of the genomes of two Pseudovibrio strains, JE062 and FO-BEG1, that originated from the coast of Florida. At the time of writing, strain FO-BEG1 is the only Pseudovibrio for which a fully sequenced, closed genome has been reported. Genes involved in the biosynthesis of TDA were detected in FO-BEG1, and it was confirmed through culture-based methods that the strain is indeed a TDA producer. Although TDA production has been reported previously for other Pseudovibrio sp. [23,40], it is not clear how widely distributed this trait is. However, our own analysis of nine antimicrobial-producing Pseudovibrio strains reveals the absence of TDA gene clusters from seven of these strains, indicating that this cluster may not be widely distributed [41]. Even in situations where the cluster is present, it is notable that culture conditions can impact on the production of TDA [42]. Indeed, TDA production has been detected during bacterial growth under static conditions [38], but not during incubation under agitation in broth, except for a brief period at approximately 10 hours after inoculation [16]. Another notable feature of the Bondarev study was the identification of genes predicted to encode a hybrid NRPS-PKS system in strain FO-BEG1, which resembled those previously associated with members of the Enterobacteriaceae [43]. More specifically, this cluster corresponds to a 50 kb genomic island which is architecturally similar to an NRPS-PKS system reported by Nougayrède et al. [44] and determined to produce colibactin (an E. coli metabolite). Similarities in the architecture between these two NRPS-PKS systems led to the suggestion that the product of the Pseudovibrio FO-BEG1 cluster is colibactin. Again, it is not clear to what extent this cluster is distributed among Pseudovibrio sp., so we have employed a previously designed PCR primer set [44] to determine the distribution of the NRPS-PKS system within our nine Pseudovibrio strains, and detected the presence of these genes in all nine strains [41]. Romano et al. [45] also demonstrated the astonishing diversity of the exo-metabolome (extracellular metabolites) of strain FO-BEG1 and the drastic effect that phosphate limitation can have on its composition. More specifically, results showed that low phosphate concentrations can induce the production of secondary metabolites in Pseudovibrio FO-BEG1. Under phosphate limitation, a higher production of phenolic and polyphenolic compounds was also observed by Romano et al. [45] when cells entered stationary phase. It was suggested by the authors that some of these compounds may be tropone derivatives, of which TDA is an example, which are commonly produced by bacteria of the Roseobacter clade and can have antibacterial activity. Indeed, members of the Roseobacter clade produce TDA in addition to an uncharacterised yellow pigment, which may be the same pigment produced in this study by Pseudovibrio under phosphate-limited conditions on entering stationary phase. In addition, several masses were predicted to correspond to cyclic dipeptides that resemble antimicrobials produced by Roseobacter strains isolated from marine sponges. Given the phylogenetic and physiological similarity between Roseobacter and Pseudovibrio bacteria, the authors reasoned that Pseudovibrio may also release such compounds into the medium [45].
Genomic analysis of Pseudovibrio FO-BEG1 (Figure 2) and JE062 has also revealed the potential for a diverse array of metabolic abilities within both strains [43]. Analysis revealed the presence of genes predicted to be involved in carbohydrate, lipid, fatty acid, lipopolysaccharide, sugar and glycan metabolism. Pathways involved in terpenoid, sterol, cofactor and vitamin and polyamine biosynthesis have also been identified, as have genes predicted to be involved in the biosynthesis of secondary metabolites such as monolignol, flavanone, flavonoid and paspaline [46]. Our own unpublished analysis [47] of Pseudovibrio FO-BEG1 via antiSMASH (antibiotics and Secondary Metabolite Analysis Shell) and BAGEL (prediction of bacteriocins in prokaryotes) has highlighted a number of clusters of interest. More specifically, antiSMASH highlighted a terpene-type gene cluster (Figure 2A) and two putative bacteriocin clusters (Figure 2C,D), in addition to the previously mentioned NRPS-PKS system (Figure 2B) on the chromosome and a T3PKS-T1PKS-type cluster on the plasmid (Figure 2E) [47]. BAGEL also highlighted loci of significance related to putative bacteriocins [48]; a minimal illustration of this kind of genome scanning is sketched at the end of this section. Ultimately, although it is evident from the literature that Pseudovibrio sp. can possess broad-spectrum antimicrobial activity, in many cases the basis for this activity has not been further analysed. Therefore, the opportunity to identify novel bioactive compounds exists. As previously mentioned, culture conditions can have a significant effect on the production of antimicrobials, which is often tightly regulated, and must be taken into consideration when designing experiments (Figure 3) in order to establish the optimum conditions for maximum production of bioactives. The schematic diagram below outlines the procedures currently underway and being employed to study a number of Pseudovibrio strains, and may prove fruitful for others.

Conclusions

Antibiotic resistance among pathogenic microorganisms is becoming ever more common. Unfortunately, the development of new antibiotics which may combat these pathogens has decreased. Natural sources have provided, and can continue to provide, a diverse range of compounds for drug development, and many of these biologically active natural products are of microbial origin. As the hunt for novel, natural and potent antibiotics continues, focus has recently shifted to the oceans for natural product discovery, as the marine environment has revealed itself to be a relatively untapped source of potent bioactives. In recent years, the marine-derived genus Pseudovibrio has been the focus of particular attention due to the dominance of the genus among host-associated microbial communities and its associated antimicrobial-producing capabilities. More specifically, the potency, broad spectrum and novelty of the compounds produced by these bacteria have highlighted this genus as a source of natural antimicrobial compounds which can potentially be employed to combat the prevalence of antibiotic-resistant bacteria in the healthcare and food production sectors.
Varying physiological growth conditions, combined with gene-expression and biological analyses and with the knowledge gained through genome sequencing of members of the genus Pseudovibrio, may lead to the identification of the genes involved in the production of secondary metabolites and of the optimum growth parameters for their expression, eventually leading to the development of novel bioactives.
2016-03-01T03:19:46.873Z
2014-12-01T00:00:00.000
{ "year": 2014, "sha1": "b69c5ecddce6f1f70a6cb537089528b5e16cf1cc", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1660-3397/12/12/5916/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b69c5ecddce6f1f70a6cb537089528b5e16cf1cc", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
244151675
pes2o/s2orc
v3-fos-license
Surfactant-Mediated Co-Existence of Single-Walled Carbon Nanotube Networks and Cellulose Nanocrystal Mesophases
Hybrids comprising cellulose nanocrystals (CNCs) and percolated networks of single-walled carbon nanotubes (SWNTs) may serve for the casting of hybrid materials with improved optical, mechanical, electrical, and thermal properties. However, CNC-dispersed SWNTs are depleted from the chiral nematic (N*) phase and enrich the isotropic phase. Herein, we report that SWNTs dispersed by a non-ionic surfactant or triblock copolymers are incorporated within the surfactant-mediated CNC mesophases. Small-angle X-ray scattering measurements indicate that the nanostructure of the hybrid phases is only slightly modified by the presence of the surfactants, and the chiral nature of the N* phase is preserved. Cryo-TEM and Raman spectroscopy show that SWNT networks with typical mesh sizes from hundreds of nanometers to microns are distributed equally between the two phases. We suggest that the adsorption of the surfactants or polymers mediates the interfacial interaction between the CNCs and SWNTs, enhancing the formation of co-existing meso-structures in the hybrid phases.
Introduction
Hybrid materials comprising single-walled carbon nanotubes (SWNTs) and liquid crystalline (LC) mesophases have been thoroughly investigated in the context of non-isotropic nano-composites [1]. Often, the presence of SWNTs modifies the structure and properties of the LC mesophase [2] and shifts the critical concentration (or temperature) at which the phase emerges [3]. There are rare examples in which the SWNTs reside within the LC phase at concentrations high enough to form percolated networks [4]. CNCs are rod-shaped nanocrystals with typical widths of a few nanometers and lengths of some tens to hundreds of nanometers. When obtained via sulfuric acid hydrolysis of cellulose, they are negatively charged due to the presence of surface sulfate ester groups [5]. The CNCs form stable suspensions in aqueous media and, above a critical volume fraction, ϕ*, phase-separate into a chiral nematic (N*) and an optically isotropic (I) phase [6,7]. In these systems the nematic mesophase results from the competition between the contributions of orientational entropy and the excluded volume to the free energy, as described by models based on a mean-field excluded-volume potential in a thermal solvent by Onsager and Flory [8-10]. Electrostatic effects are found to shift the phase diagram but preserve the overall behavior, as observed experimentally and rationalized by Stroobants, Lekkerkerker and Odijk (SLO theory) [11]. A steady rotation of the director in the N* phase of suspended CNCs results in a helical modulation characterized by a cholesteric pitch, P, typically ranging from 3 to 100 µm [7]. When observed between crossed polarizers, the birefringent phase exhibits iridescent colors and a typical extinction pattern that results from the rotation of the pitch. The latter is known as the "fingerprint pattern". CNCs are known to disperse SWNTs in dilute suspensions, probably via interactions with the π electrons of the SWNTs. While the CNC-dispersed SWNTs are excluded from the N* phase [12], it was found that it is possible to utilize evaporation-induced self-assembly (EISA) of CNC suspensions [13-17] for the preparation of thin dried films of CNC-SWNT hybrids. The EISA process is initiated from CNC suspensions in the isotropic phase (ϕ < ϕ*) [18] and the SWNTs are trapped in the drying suspension.
The balance of intermolecular interactions in liquid mixtures of CNCs and SWNTs, and in particular the question of what drives the exclusion of SWNTs from the N* phase, are not yet well understood. Studies of nanometric inclusions embedded in CNC LC phases envision the incorporation of nanoparticles into the N* phase as a size-selective process [19]. Thus, the driving force for the exclusion of the SWNTs from the N* would be the mismatch in the excluded-volume (entropic) interactions [12] or elastic inter-particle interactions [20]. Yet, recent studies of hydrophobic particles of different sizes (from a few nanometers to hundreds of nanometers) and shapes (spherical and elongated) indicate that carbonaceous inclusions are excluded from the N* irrespective of their shape and size. This observation suggests that in the hybrid systems, hydrophobic interactions may play a more significant role than expected [12]. In the study presented here, we investigated the effect of a non-ionic surfactant, BrijS20 (polyethylene glycol octadecyl ether), and amphiphilic block copolymers, poly(ethylene oxide)-poly(propylene oxide)-poly(ethylene oxide), namely F108 and F127 (Pluronics, BASF), on the co-assembly of SWNTs and CNCs in aqueous suspensions in the bi-phasic regime of the CNCs. The surfactant and the polymers are known to adsorb onto SWNTs and CNCs in aqueous media [21-23]. While of a similar chemical composition, the molecular assemblies formed by the three surface-active molecules are of different dimensions, enabling us to probe geometrical size-related effects in the hybrid CNC-surfactant/polymer-SWNT systems. SAXS investigation reveals that the adsorbed molecules modify the dimensions of the decorated CNCs, while the characteristic inter-particle distance (d0) of the native liquid crystalline CNC phase is preserved. The (block-copolymer) micelles (or chains) do not induce depletion of the CNC rods. Furthermore, the chiral nature of the N* phase is preserved in the CNC-surfactant-SWNT hybrids, and the pitch is significantly reduced as compared to the native CNC phases at a similar concentration. While the surfactant-decorated SWNTs are not depleted from the N* of the CNCs, they are not incorporated into the chiral nematic structure, but rather form an independent network with a mesh size of the order of hundreds of nanometers. Thus, decoration of the CNCs and the SWNTs by surfactant molecules (or block copolymers) mediates the inter-particle interactions and enables the assembly of the N* phase of the CNCs within a percolated SWNT network. These findings may be used as guidelines for utilizing non-ionic surfactants and block copolymers for engineering hybrid nanocomposites based on the LC phases of CNCs.
Preparation of Solutions
BrijS20 aqueous solutions were prepared by stirring the powder in Millipore water (18.2 MΩ cm). Aqueous solutions of F108 and F127 were prepared by vigorous stirring for 24 h at 0 °C and incubation at 4 °C for 3-4 days, for complete dissolution.
Preparation of CNC Suspensions
As-received CNC slurry (12 wt% suspension in water) was tip-sonicated (Ultrasonic processor model VCX-130, Sonics & Materials Inc., Newtown, CT, USA, 130 W, 20 kHz) at 30% amplitude for 4 min. The 12 wt% CNC suspension was sonicated in two sessions of two minutes. All suspensions were cooled during sonication to prevent the hydrolysis of the sulfate groups at the CNC surface due to overheating [31,32].
The suspensions were diluted to the desired concentration using Millipore water. The volume fraction of the CNC suspension (ϕ) is calculated according to the relation ϕ = (ϕw/ρCNC)/[(ϕw/ρCNC) + (1 − ϕw)/ρs], where ϕw is the CNC weight fraction (w/w), ρs is the solvent density, and ρCNC is the CNC density (1.5 g·cm−3).
Preparation of Dispersions of SWNTs
Sonication-assisted dispersion of SWNTs was carried out following previously published protocols [33,34]. For details see the SI (Figure S2). The reported concentrations of the SWNTs relate to the initial concentration of the SWNTs in the dispersions.
Electrokinetic Mobility
Mobility measurements were carried out in 0.1 wt% suspensions of CNCs using the Zetasizer Nano ZS (Malvern Instruments Ltd., Almelo, The Netherlands) (Table S1 of the SI).
Visual Inspection
The volume fraction of the non-isotropic phase, ϕLC, was determined by visual inspection of the samples.
Polarized Optical Microscopy (POM)
POM images of CNC suspensions were taken using an Olympus BX53-F2 microscope equipped with a high-resolution Olympus DP74 camera in crossed-polarizer configuration and analyzed using ImageJ. The pitch was calculated from the measured distance between the regularly spaced extinction lines ("fingerprint pattern") observed due to the helical rotation of the director in the direction orthogonal to the long axis of the CNC rods in the chiral nematic phase [15,35].
Small-Angle X-ray Scattering (SAXS)
Scattering patterns of CNC suspensions were collected using a SAXSLAB GANESHA 300-XL (Xenocs, Grenoble, France). CuKα radiation was generated by a Genix 3D Cu source with an integrated monochromator, 3-pinhole collimation, and a two-dimensional Pilatus 300 K detector. The scattering intensity I(q) was recorded in the q interval 0.007 < q < 0.25 Å−1 (corresponding to length scales of 25-900 Å), where the scattering vector is defined as q = (4π/λ) sin θ, with 2θ and λ being the scattering angle and wavelength, respectively. The measurements were performed under vacuum at ambient temperature. The suspensions were sealed in thin-walled quartz capillaries about 1.5 mm in diameter and with 0.01 mm wall thickness. The scattering curves were corrected for counting time and sample absorption. The 2D SAXS patterns were azimuthally averaged to produce one-dimensional intensity profiles, I vs. q, using the two-dimensional data reduction program SAXSGUI. The scattering spectra of the solvent were subtracted from the corresponding solution data using Igor Pro 9 (WaveMetrics, Portland, Oregon) for the analysis of small-angle scattering data [36]. Data analysis was based on fitting the scattering curve to an appropriate model provided by Mao et al. [37] using Wolfram Mathematica 12.3 (Champaign, IL, USA).
Raman Spectroscopy
Raman scattering spectra were obtained using a Horiba LabRam HR micro-Raman system, equipped with a Syncerity CCD detector deep-cooled to −60 °C, 1024 × 256 pixels. The excitation source was a 532 nm laser used at 0.1-1% of the maximal laser power, about 0.5 mW. The laser was focused with a 50× LWD objective (Olympus LMPlanFL-N, NA = 0.5) to a spot of about 1.3 µm. The measurements were taken with a 600 grooves mm−1 grating and a 100 µm confocal microscope hole. The typical exposure time was 60 milliseconds.
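For concreteness, the weight-to-volume conversion given above under Preparation of CNC Suspensions can be written as a minimal sketch in Python. The CNC density of 1.5 g·cm−3 is taken from the text; the solvent density of 1.0 g·cm−3 (water) and the example weight fractions are assumptions for illustration, not values reported here.

# Minimal sketch of the weight-to-volume fraction conversion used for the CNC
# suspensions. rho_cnc = 1.5 g/cm^3 is taken from the text; rho_s = 1.0 g/cm^3
# (water) and the example weight fractions are assumptions for illustration.

def weight_to_volume_fraction(phi_w, rho_s=1.0, rho_cnc=1.5):
    """Convert a CNC weight fraction (w/w) into a volume fraction."""
    v_cnc = phi_w / rho_cnc            # CNC volume per gram of suspension
    v_solvent = (1.0 - phi_w) / rho_s  # solvent volume per gram of suspension
    return v_cnc / (v_cnc + v_solvent)

if __name__ == "__main__":
    for wt in (0.03, 0.06, 0.12):      # e.g. a 6 wt% suspension gives phi ~ 0.041
        print(f"{100 * wt:.0f} wt% -> phi = {weight_to_volume_fraction(wt):.4f}")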
Transmission Electron Microscopy (TEM) Imaging
Cryo-TEM: rapid cooling enables direct imaging of molecular assemblies and nanostructures in aqueous media. The samples were prepared by applying a 3 µL drop to a TEM grid (300 mesh Cu Lacey substrate, Ted Pella, Ltd., Redding, CA, USA) following a short pre-treatment of the grid via glow discharge. The excess liquid was blotted, and the specimen was vitrified by rapid plunging into liquid ethane precooled by liquid nitrogen using a vitrification robot system (Vitrobot Mark IV, FEI). The rapid cooling results in physical fixation of the liquid state so as to preserve the native structures. Thus, it allows examination of the polymeric assemblies in the high vacuum of the electron microscope at cryogenic temperature, which prevents the formation of either cubic or hexagonal ice. The vitrified samples were examined at −177 °C using an FEI Tecnai 12 G2 TWIN TEM (FEI, Hillsboro, OR, USA) operated at 120 kV and equipped with a Gatan model 626 cold stage. The images were recorded by a 4K × 4K FEI Eagle CCD camera in low-dose mode. TIA (Tecnai Imaging and Analysis, Tecnai 3.0, FEI, Hillsboro, OR, USA) was used to record the images.
Negative staining: to increase the inherently low contrast of the polymer assemblies, transmission electron microscopy of dried, stained samples was performed. The solution (2 µL) was applied to a glow-discharged TEM grid (carbon-supported film on 300 mesh Cu grids, Ted Pella, Ltd., Redding, CA, USA). The excess liquid was blotted, and the grids were washed on two droplets of de-ionized water followed by staining with 2% uranyl acetate for 40 s. The grids were blotted and dried under ambient conditions at room temperature before they were observed by an FEI Tecnai 12 G2 TWIN TEM operated at 120 kV. The images were recorded by a 4K × 4K FEI Eagle CCD camera using TIA.
SWNTs Dispersions
Sonication-assisted dispersions of SWNTs (0.1 wt%) were prepared in aqueous solutions of BrijS20 (1 wt%), F108 (2 wt%), and F127 (2 wt%) following previously published protocols [26]. The surfactants were used at the minimal concentrations needed for the preparation of stable dispersions of individual SWNTs [21,26,33]. As polymer adsorption is non-reversible [21], post-dispersion mixing with the CNC suspension did not affect the stability of the dispersions. Note that F108 was used at a concentration below the CMC of the polymer. Cryo-TEM images of the dispersions indicate that the SWNTs are dispersed as individual, non-bundled tubes (Figure S2 of the SI). Optical images taken between crossed polarizers (Figure 1a) show an isotropic upper phase and a lower birefringent phase. The volume fraction of the two phases is similar in CNC suspensions in water (left) and in CNCs in surfactant/polymer solutions.
Zeta potential measurements of CNCs indicate that the surface potential of the CNCs in aqueous suspensions is −52 ± 2 mV. The value is reduced in the presence of the surface-active molecules, indicating adsorption (Table 2).
Table 2. Calculated zeta potential in 0.1 wt% CNCs (columns: Surfactant/Polymer (wt%); Zeta Potential (mV)). The measured mobilities are presented in Table S1 of the SI.
POM images show an iridescent lower phase (Figure 1b) with a typical pitch of P = 5 ± 2 µm in the CNC-surfactant/polymer suspensions, as compared to P = 17 ± 1 µm in the surfactant-free CNC suspensions. The reduction in the pitch in the presence of the surfactant/polymer at a given CNC concentration (here, 6 wt%) indicates a higher rotation of the director and was reported before for polymer-decorated CNCs [38]. Cryo-TEM images of the CNC water suspensions and the CNC-BrijS20 mixture presented in Figures S3 and S4 of the SI show that the two phases retain their typical inter-particle distance and the orientational order of the CNCs. The nanostructure of the CNC-surfactant/polymer suspensions was investigated using SAXS. In Figure 1c we present the Lorentz-corrected curves of CNC suspensions in polymer and surfactant solutions; the curves are shifted for better visualization (I vs. q SAXS curves are presented in Figure S5 of the SI, and Figure 1d shows a schematic illustration of the parallelepiped stacking model of Mao et al. [37]).
The curves exhibit a first-order peak, indicative of a typical distance between suspended CNCs, d0 = 2π/q0, and a shoulder of a second-order interference peak (q1) at values corresponding to a ratio of 2:1 (Table S2 of the SI), characteristic of the native lamellar periodic structures of the CNC mesophase at this concentration [39]. The initial slope in the low-q regime of the scattering curves is somewhat reduced (I vs. q curves, Figure S5 of the SI), indicating a modification of the structure factor due to some screening of the inter-particle interactions. The SAXS curves (Figure S5 of the SI) were fitted to the parallelepiped stacking model developed by Mao et al. [37,40], where an individual CNC particle is modeled by a parallelepiped with a length L, width b, and thickness a, stacked in one direction with a distance d0 between the surfaces of two adjacent particles (Figure 1d). The fitting parameters presented in Table 3 indicate that the typical inter-particle distance d0 is similar in the different suspensions. The typical thickness (Figure 1d), a, is increased by about 15%, while the most significant effect is on the width, b: the width is doubled for most of the additives and is slightly higher in the F127 suspensions than in the BrijS20 suspensions (Table 3). The latter may indicate increased aggregation of the CNCs in the presence of the surfactants, probably due to the reduced surface charge [41]. The SAXS results, along with the reduction in the zeta potential, are consistent with adsorption of the surface-active molecules onto the CNC particles.
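The conversion from peak position to inter-particle distance used above (d0 = 2π/q0), together with the second-order ratio check, can be sketched as follows; the peak positions used below are hypothetical placeholders, not measured values from this study.

import math

# Sketch: inter-particle distance from the first-order SAXS peak, d0 = 2*pi/q0,
# plus the q1/q0 ~ 2 second-order check mentioned above. The peak positions are
# hypothetical placeholders, not values measured in this study.

def d_spacing(q0):
    """Return d0 (Angstrom) for a peak position q0 given in 1/Angstrom."""
    return 2.0 * math.pi / q0

q0, q1 = 0.017, 0.034  # hypothetical peak positions in 1/Angstrom
print(f"d0 = {d_spacing(q0):.0f} Angstrom")
print(f"q1/q0 = {q1 / q0:.2f} (a ratio of ~2 indicates a lamellar-like stacking)")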
CNC-Surfactant/Polymer-SWNTs Dispersions
Liquid mixtures of CNC-surfactant/polymer-SWNTs were prepared as described above. The images (Figure 2a,b) clearly show that while surfactant-free, CNC-dispersed SWNTs are depleted from the lower phase of the native suspension (left vial in both images), they are dispersed in both the isotropic and N* phases of the CNC-surfactant/polymer suspensions. In these mixtures the chiral nematic structure is preserved, as demonstrated in representative POM images of the hybrid CNC-F108-SWNTs and CNC-F127-SWNTs (Figures 2c and S8 of the SI) with a pitch value of 5 ± 2 µm, as is the volume fraction of the N* phase (Figure 2d). Thus, the surfactant-decorated SWNTs are neither depleted from the N*, nor do they modify the pitch or the macroscopic phase diagram of the surfactant/polymer-decorated CNCs. The observation was reproducible for the two different types of SWNTs.
Raman spectroscopy was used to determine the SWNT-to-CNC ratio in the two phases. The Raman spectra of the pristine materials are presented in Figure 3a,b. For SWNTCH, a tangential stretching mode (G-band) is observed at ~1590 cm−1 and a broader band around ~2670 cm−1 (G′-band) arises from an overtone of the disorder-induced mode around ~1340 cm−1 (D-band) [42]. For CNCs, two sharp peaks are observed at ~1095 cm−1 and ~2895 cm−1, characteristic of the C-O ring and C-H stretching, respectively. The broader band observed at ~1380 cm−1 is attributed to HCC, HCO and HOC bending [43-45]. The ratio between the D-band and the G-band intensities is similar to that of the pristine SWNTs [46]. The latter indicates that the interaction between the SWNTs and the adsorbed surfactants does not introduce additional defects and does not modify the sp2 hybridization of the pristine SWNTs.
The relative fraction of SWNTs is calculated from the ratio of the intensities of the SWNT G-band at ~1590 cm−1 to the CNC characteristic peak at ~1093 cm−1. The Raman spectra of samples dried from the I phase (upper, isotropic) and the N* phase (lower, chiral nematic) of CNC-surfactant/polymer-SWNTCH dispersions are presented in Figure 3c,d. In CNC-SWNT suspensions, in the absence of surfactants (black curve in Figure 3c,d), the SWNTs are depleted from the N* phase and enrich the isotropic (upper) phase to a ratio of 8:1 (I/N*) (5:1 (I/N*) for SWNTAP, see Figures S6 and S7 of the SI). Very differently, in the CNC-surfactant/polymer-SWNTs dispersions, the SWNTs are either equally distributed between the two phases (I/N* 1:1 ratio) or enriched in the N* phase (Table S3 of the SI).
Cryo-TEM images of the N* phase of CNC-BrijS20-SWNTCH mixtures are presented in Figure 4. As in the native suspensions, the CNC rods are oriented over hundreds of nanometers. The SWNTs cannot be resolved at this magnification (Figure 4a), but at a higher magnification (Figure 4b) a fraction of the SWNT network, with a typical mesh size ranging from hundreds of nanometers to microns, is observed. At even higher magnifications (Figure S4 of the SI), Brij micelles can be observed. Similar structures are observed in cryo-TEM images of CNC-F108-SWNTs (Figure 5a,b) and CNC-F127-SWNTs (Figure 5c,d). Note that the Pluronic micelles (or chains in the case of F108) are not observed in cryo-TEM due to the low contrast of PEO with water [47].
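A minimal sketch of the Raman-based partitioning estimate described above: the relative SWNT content of each dried phase is taken as the ratio of the G-band intensity (~1590 cm−1) to the CNC peak intensity (~1093 cm−1), and the I/N* partition ratio is the quotient of the two. All band intensities below are hypothetical placeholders, not measured values.

# Sketch of the Raman-based partitioning estimate: the relative SWNT content of
# one dried phase is I(G-band, ~1590 1/cm) / I(CNC, ~1093 1/cm), and the I/N*
# partition ratio is the quotient of the two phases. The intensity values below
# are hypothetical placeholders for illustration only.

def swnt_to_cnc_ratio(i_g_band, i_cnc_band):
    """Relative SWNT fraction from band intensities of one dried phase."""
    return i_g_band / i_cnc_band

isotropic = swnt_to_cnc_ratio(i_g_band=8.0, i_cnc_band=1.0)  # hypothetical values
nematic = swnt_to_cnc_ratio(i_g_band=1.0, i_cnc_band=1.0)    # hypothetical values

partition = isotropic / nematic
print(f"I/N* partition ratio = {partition:.1f}")  # ~8:1 would indicate depletion from N*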
The presence of the SWNTs modifies neither the dimensions (a, b) of a surfactant/polymer-decorated CNC rod nor the inter-particle distance d0. Furthermore, the POM image (Figure 6b) indicates that the chiral nature (N*) is preserved and the typical pitch does not change, as compared to the pitch measured in the absence of SWNTs for CNC-surfactant/polymer suspensions (~5 µm). Optical images of thin films dried from surfactant-free CNC suspensions of SWNTCH and from surfactant-mediated CNC-BrijS20-SWNTCH suspensions are presented in Figure 6c-f. While the film formed by drying of the optically isotropic (upper) phase of the surfactant-free CNC-SWNTCH suspension is black (Figure 6c), the film dried from the lower phase is bright (Figure 6d), indicating that the SWNTs are depleted from this phase. The dark rim of the dried (I) phase (Figure 6c) indicates segregation of dispersed SWNTs to the perimeter of the drop. In contrast, films prepared in a similar way from the I and N* phases of CNC-BrijS20-SWNTCH suspensions (Figure 6e,f) are both homogeneously dark, indicating that the SWNTs are equally distributed in the two phases. In these films (Figure 6e,f) the rims are somewhat brighter than the inner part of the film.
[Figure 6 caption fragments (samples): 6 wt% CNCs-0.5 wt% BrijS20-0.1 wt% SWNTCH; 6 wt% CNCs-1 wt% F108-0.1 wt% SWNTCH.]
Discussion
The coexistence of surfactant (or polymer)-decorated SWNT networks and surfactant (or polymer)-decorated CNC LC mesophases in the presence of non-ionic surfactants (or triblock copolymers) was observed. Raman measurements indicate that the SWNTs are equally distributed between the two phases and are not excluded from the N* phase of the CNCs. Cryo-TEM images clearly show percolated networks of individual SWNTs in both phases. The mesh size of the SWNT network is of the order of hundreds of nanometers to microns, and the SWNTs are not incorporated into the chiral nematic structure but rather form an independent network (Figure 7).
Molecular CNC-surfactant/polymer-SWNT assemblies of different typical dimensions are formed in solutions of BrijS20 micelles (radius ~3 nm), individual chains of F108 (hydrodynamic radius ~4 nm), and F127 core-shell micelles with a radius of about 13 nm [25-30]. Yet, the observed behavior of the three-component (CNC-surfactant/polymer-SWNT) hybrid system is similar. At this point, it is not possible to deduce whether individual molecules, micelles, or hemi-micelles decorate the CNCs. Clearly, the nanostructure of the chiral nematic phase is hardly altered by the presence of the surfactants or the SWNT networks, and size-selective exclusion of micelle (polymer)-decorated SWNTs is not observed. The key to CNC-SWNT coexistence is probably the presence of non-ionic surfactants. These are known to act as mediating agents in mixtures of nanostructures, enabling the preparation of nano-hybrids, and were shown to improve the homogeneity of dried CNC films [48]. It is observed here that the suspensions can be dried, leading to the formation of thin-film hybrids with a macroscopically homogeneous distribution of the SWNTs, without segregation of the SWNTs to the perimeter of the dried films. Detailed analysis of the CNC-surfactant/polymer-SWNT mixtures indicates that the macroscopic phase diagram and the inter-particle distance (d0) characteristic of the N* phase of the native CNC suspensions, as measured via SAXS, are preserved in the rod-sphere mixtures, while the effective thickness of the surfactant-decorated CNCs is increased. These observations suggest that the BrijS20 and F127 micelles and the F108 chains (at the low concentrations investigated here) do not induce depletion [49] of the CNC rods. Depletion interactions between the micelles and the CNCs would have led to crowding of the CNCs and a reduction in the inter-CNC distance. POM imaging of the CNC-surfactant and the CNC-surfactant-SWNT phases clearly shows the typical fingerprint pattern characteristic of the N* phase of the CNCs. The origins of the chirality of the LC CNC phase may be geometrical or related to the charge distribution at the surface [7,50].
Thus, one would expect that adsorption of micelles or polymer molecules would screen the chiral interactions. Yet, here we observe that the pitch of the N* phase is reduced (P ~5 µm) as compared to the native phase (P ~17 µm). Shortening of the helix of the N* phase at a constant CNC volume fraction indicates that the rotation of the director is higher, probably due to modification of the inter-CNC interactions among the rods [7] and a strengthening of the chiral interactions. The pitch is not further affected by the presence of the SWNTs. We note here that the helical pitch (an equilibrium property of the N* phase) is not associated with d0 (the inter-particle distance).
Conclusions
Liquid suspensions of CNCs and SWNTs can be used as colloidal inks for additive manufacturing [51] and layer-by-layer deposition [18] of multifunctional nanocomposites with novel combinations of optical, electrical, mechanical, and thermal properties [18,52-54]. Yet, mismatch in the physical characteristics of the two components, including diameter, aspect ratio, rigidity, and interfacial interactions, results in segregation of the components and depletion of the SWNTs from the liquid-crystalline phases of the CNCs [12]. In this study, we report the utilization of non-ionic, surface-active molecules for the preparation of hybrid liquid phases comprising CNC and SWNT networks. It was observed that the surfactant-decorated SWNTs are neither excluded from the surfactant-decorated CNC phases, nor are they incorporated into the chiral nematic (N*) structure. Rather, cryo-TEM imaging reveals that the SWNT networks co-exist with the surfactant-mediated phases of the CNCs, and Raman spectroscopy indicates that the SWNTs distribute equally between the isotropic and chiral nematic phases of the CNCs. Detailed SAXS and cryo-TEM characterization of the emerging phases shows that the adsorbed surfactants do not disturb the nematic structure of the CNC mesophase, and POM imaging indicates that the chiral nature of the N* phase is preserved, while the pitch is reduced. These findings indicate that non-ionic surfactants can be used for engineering the interfacial interactions in hybrid mixtures of CNCs and SWNTs. The resulting mixtures could be used for liquid processing and deposition of multi-component CNC-based functional nanocomposites.
Data Availability Statement: The data presented in this study are available on request from the corresponding author.
2021-11-17T16:10:26.230Z
2021-11-01T00:00:00.000
{ "year": 2021, "sha1": "ccc6307153b1d469f94ad98d2576eaa762e3b656", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2079-4991/11/11/3059/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ef09de615b11fd610a7f39607a3d1b43b7958cff", "s2fieldsofstudy": [ "Physics", "Materials Science", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
117189804
pes2o/s2orc
v3-fos-license
The galaxy-wide IMF of dwarf late-type to massive early-type galaxies
Observational studies are showing that the galaxy-wide stellar initial mass function is top-heavy in galaxies with high star-formation rates (SFRs). Calculating the integrated galactic stellar initial mass function (IGIMF) as a function of the SFR of a galaxy, it follows that galaxies which have, or which formed with, SFRs > 10 Msol yr^-1 would have a top-heavy IGIMF, in excellent consistency with the observations. Consequently, and in agreement with observations, elliptical galaxies would have higher M/L ratios as a result of the overabundance of stellar remnants compared to a stellar population that formed with an invariant canonical stellar initial mass function (IMF). For the Milky Way, the IGIMF yields very good agreement with the disk- and the bulge-IMF determinations. Our conclusions are that purely stochastic descriptions of star formation on the scales of a pc and above are falsified. Instead, star formation follows the laws, stated here as axioms, which define the IGIMF theory. We also find evidence that the power-law index beta of the embedded cluster mass function decreases with increasing SFR. We propose further tests of the IGIMF theory through counting massive stars in dwarf galaxies.
INTRODUCTION
The stellar initial mass function (IMF), ξ(m) = dN/dm, describes the distribution of the masses of stars, whereby dN is the number of stars formed in the mass interval [m, m + dm]. It is one of the most important distribution functions in astrophysics, as stellar evolution is mostly determined by stellar mass. The IMF therefore regulates the chemical enrichment history of galaxies as well as their mass-to-light ratios, and influences their dynamical evolution. Theoretically unexpected, the IMF is found to be invariant over a large range of conditions such as gas densities and metallicities (Kroupa 2001, 2002; Chabrier 2003; Elmegreen et al. 2008; Bastian et al. 2010; Kroupa et al. 2013) and is well described by the canonical IMF (Appendix A). Therefore an invariant IMF is widely used to describe not only individual star clusters but also the stellar populations of whole galaxies. But the question remains whether the IMF, derived from and tested on star-cluster scales, is the appropriate stellar distribution function for complex stellar populations like galaxies. With this contribution we attempt to provide a single unifying principle for understanding the observationally deduced variation of the IMF from dwarf irregular to massive elliptical galaxies, assuming the stellar IMF to be largely form-invariant in the star-forming building blocks of galaxies. These building blocks are the molecular-cloud density peaks, in which from a few to many millions of stars form within ≈ 1 Myr and within ≈ 1 pc. The observed variations of the galaxy-wide IMF in late-type and early-type galaxies are addressed in Section 2.1, and these observational results are explained in the framework of the IGIMF effective theory in Section 3. Here also the validity of the IGIMF for the Milky Way disc and bulge is discussed. Section 4 presents the expected number of O and B stars for whole galaxies in dependence of the SFR, for the IGIMF including star-bursting systems and for a constant canonical IMF. Finally, the results are discussed in Section 5.
OBSERVATIONAL EVIDENCE FOR A VARYING GALAXY-WIDE IMF IN GALAXIES
While for a long time the IMF of galaxies was assumed to be constant and observations did not find evidence for any striking differences, it has been established over the last few years that the issue of IMF variation is more complex than previously thought. This has been made possible by the availability of very large and deep surveys, like the SDSS, which provide large populations of galaxies, allowing a statistical analysis of the effect of the IMF on galaxy properties.
Observational evidence for a varying galaxy-wide IMF in late-type galaxies
Using the large amount of data provided by the SDSS survey, Hoversten & Glazebrook (2008) derived IMF slopes above 1 M⊙ (α3; α3 = 2.35 for the Salpeter value) for thousands of star-forming galaxies in dependence of their r-band magnitudes, Mr. They found that fainter galaxies have steeper IMF slopes than brighter (more massive) ones. In order to translate this slope dependence on Mr into a dependence on galaxy mass, a fixed mass-to-light (M/L) ratio of 2 (Bell et al. 2003) was used together with a population build-up time of 12 Gyr. The dashed line in Fig. 1 shows the resulting behaviour for the SFRs derived when distributing the mass of the galaxy over the assumed population age of 12 Gyr. Further evidence for IMF variations was found in 2009, when two teams of researchers (Meurer et al. 2009; Lee et al. 2009) discovered independently of each other that the Hα fluxes of galaxies with very low star-formation rates do not match the levels expected when extrapolating from the UV fluxes. The ratio of Hα to UV flux in these galaxies correlates with the galaxy-wide SFR. Both groups reached the same conclusion, namely that the IMF of these galaxies must vary systematically with physical parameters such as the SFR. The Galaxy And Mass Assembly (GAMA) team (Driver et al. 2009; Robotham et al. 2010; Baldry et al. 2010) recently studied a large sample of nearly 44000 late-type galaxies with multi-colour photometry and spectroscopic redshifts. While attempting to model the stellar populations of these galaxies, Gunawardhana et al. (2011) found that a flatter-than-Salpeter slope was necessary to do so, and that this slope becomes increasingly flatter with increasing SFR. Gunawardhana et al. (2011) used the derived slopes and the measured SFRs to directly fit a relation for the dependence of the IMF slope above 1 M⊙, α3, on the SFR. This relation is plotted as a long-dash-short-dashed line in Fig. 1. Additionally shown in the figure are a number of different galaxy-wide IMF estimates. The cross with error bars is the Scalo IMF derived from the field present-day Milky Way (MW) mass function (Scalo 1986), while the box is the IMF adopted by Kennicutt (1983) to explain the Hα flux of a number of star-forming galaxies. The dot with error bars is the Ballero et al. (2007) IMF determination from detailed chemical-evolution modelling of the MW and M31 bulges (see § 3.4 below). They find a best-fitting model with α3 = 1.95 and find that the slope could be at most α3 = 2.1. The lower limit for α3 is less constrained, as the overall metallicity can be reproduced with an IMF as flat as α3 = 1.33, but the authors also state that this would lead to some amount of oxygen overproduction.
Furthermore, plotted as three star symbols is the Davé (2008) result, which is derived from comparing hydro-simulations of star-forming galaxies with observations to explain the evolution of the stellar-mass-SFR relation from redshift z ≈ 2 to z = 0. Davé (2008) assumes a redshift-dependent characteristic mass, m*, of the IMF (instead of a changing mass function) of the form m* = 0.5(1 + z)^2. Below m* the slope is set to 1.3 and above to 2.35. This can be transformed into a high-mass slope dependence on redshift by calculating the mean mass of an IMF with changing slope above a fixed m* of 0.5 M⊙ and comparing this mean mass with the one obtained when using the redshift-dependent m*. When both mean masses agree for a given redshift, the slope for the fixed m* is assigned to this redshift. Davé (2008) defines a star-formation activity parameter, α_sf = (M*/SFR)/(t_H − 1 Gyr), with M* = 10^10.7 M⊙ and t_H = 13.1698 Gyr. The author describes this parameter as the fraction of the Hubble time (minus a Gyr) that a galaxy needs to have formed stars at its current rate in order to produce its current stellar mass. With α_sf observationally derived (Davé 2008) at three different redshifts (0.45, 1 and 2), it is possible to derive the SFR at a given z via SFR = M*/[α_sf (t_H − 1 Gyr)]. In Fig. 1 the slope is shown for the three z values of 0.45, 1 and 2.
Observational evidence for a varying galaxy-wide IMF in early-type galaxies
While the above studies have focussed on star-forming, late-type galaxies, recently evidence has also emerged for a galaxy-wide variation of the IMF in early-type or elliptical (E) galaxies. Cappellari et al. (2012) studied a volume-limited sample of 260 E galaxies via integral-field spectroscopy and photometry. Using different assumptions on the dark-matter haloes these galaxies could reside in, from no dark matter to different halo types, Cappellari et al. (2012) found that the SDSS r-band mass-to-light ratios of these galaxies agree neither with the assumption of a single-slope Salpeter IMF nor with the canonical IMF (see Appendix A). They conclude that either an extremely bottom-heavy or a very top-heavy IMF in the early universe is necessary to explain the mass-to-light ratios. Other studies, too, found apparent evidence for a bottom-heavy IMF in ellipticals from NaI, CaII and FeH spectral-line observations. Falcón-Barroso et al. (2003) find a similar trend toward bottom-heavy IMFs in the bulges of late-type galaxies. Saglia et al. (2002) and Cenarro et al. (2003), however, disfavour the bottom-heavy IMF as an explanation, using Ca observations in E galaxies and naming, e.g., the use of the solar Ca abundance ratio not scaled to the metallicity of the E galaxies as a possible solution.
Figure 1. The dependence of the slope of the observationally deduced galaxy-wide IMF above 1 M⊙, α3, on the SFR. The dashed line is derived from the Hoversten & Glazebrook (2008) SDSS r-band magnitudes when using a fixed M/L ratio and a population build-up time of 12 Gyr. The long-dash-short-dashed line is the fit for late-type galaxies as deduced by the GAMA team (Gunawardhana et al. 2011). A large cross marks the Scalo (1986) result for the Milky Way field derived from the present-day mass function (Kroupa et al. 1993), and the large box is the Kennicutt (1983) value for the Milky Way. The three star symbols are results from Davé (2008), while the big dot is from Ballero et al. (2007). The thin horizontal line marks the Salpeter/Massey slope, α = 2.35 (Salpeter 1955; Massey 2003).
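Returning to the Davé (2008) transformation described above, the mean-mass matching can be sketched numerically as follows. The break mass m* = 0.5(1 + z)^2 and the slopes 1.3/2.35 are from the text; the integration limits of 0.1-100 M⊙ and the bisection bracket are assumptions of this sketch, not values from the source.

import numpy as np

# Minimal numerical sketch of the mean-mass matching described for the Dave
# (2008) result: the redshift-dependent break mass m* = 0.5*(1+z)**2 with fixed
# slopes (1.3 below m*, 2.35 above) is translated into an effective high-mass
# slope at a fixed m* = 0.5 Msun by requiring equal mean stellar masses. The
# integration limits of 0.1-100 Msun are assumptions, not from the text.

M_LOW, M_UP = 0.1, 100.0

def mean_mass(m_star, alpha_lo, alpha_hi):
    """Mean mass of a two-part power-law IMF that is continuous at m_star."""
    m = np.logspace(np.log10(M_LOW), np.log10(M_UP), 20000)
    xi = np.where(m < m_star, (m / m_star) ** -alpha_lo, (m / m_star) ** -alpha_hi)
    return np.trapz(m * xi, m) / np.trapz(xi, m)

def effective_slope(z, alpha_lo=1.3, alpha_hi=2.35):
    """High-mass slope at fixed m* = 0.5 giving the same mean mass as m*(z)."""
    target = mean_mass(0.5 * (1.0 + z) ** 2, alpha_lo, alpha_hi)
    lo, hi = 1.0, 3.0                      # the mean mass decreases with the slope
    for _ in range(60):                    # simple bisection
        mid = 0.5 * (lo + hi)
        if mean_mass(0.5, alpha_lo, mid) > target:
            lo = mid                       # mean still too large: steepen
        else:
            hi = mid
    return 0.5 * (lo + hi)

for z in (0.45, 1.0, 2.0):                 # the three redshifts quoted in the text
    print(f"z = {z:.2f}: effective alpha_3 ~ {effective_slope(z):.2f}")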
The bulges of the Milky Way and M31 have also been reported to have had a top-heavy IMF (Ballero et al. 2007).
EXPLAINING THE IMF VARIATIONS OF GALAXIES
With all the above compelling evidence for IMF variations in galaxies, is it possible to arrive at a unifying theory which allows a quantitative understanding of the observations with a few basic principles?
The IGIMF theory
Observationally, IMF variations within the Milky Way have long been considered unlikely, or at least rather small (Kroupa 2002). As already noted by Scalo (1986), the IMF slope of the Milky Way field above 1 M⊙, derived from the present-day stellar mass function, is much steeper (α3 ≈ 2.7) than the Salpeter/Massey slope generally obtained from studies of young star clusters and OB associations in the Milky Way and the Magellanic Clouds. The Salpeter/Massey value of α = 2.3 (or 2.35) was, and still is, widely used in extragalactic astronomy. It was in 2003 that it was shown for the first time that the galaxy-wide IMF should be steeper than the IMFs in individual star-forming regions (Kroupa & Weidner 2003). In 2005 this work was extended to galaxies with different SFRs, demonstrating that the galaxy-wide IMF ought to steepen with decreasing SFR (Weidner & Kroupa 2005). The key assumption of this theory is that stars form in the densest regions of molecular clouds and that the star-formation activity of a whole galaxy is the sum over all these star-forming clumps. That is, stars form in groups, or spatially (≈ 1 pc) and temporally (≈ 1 Myr) correlated star-formation events (CSFEs), generally termed embedded clusters (Lada & Lada 2003; André et al. 2010). These clusters need not be physically bound, and in the highly obscured, deeply embedded phase it is usually not possible to determine whether they are bound or not. The typical time scale, δt, to build a system of clusters within a galaxy which statistically samples the mass function of embedded clusters would be about 10 Myr, which is in accordance with the timescale between the formation of molecular clouds and the emergence of new star clusters observed in disk galaxies (Egusa et al. 2004, 2009) and with CO observations which show that clusters older than 10 Myr do not have large molecular clouds associated with them (Leisawitz et al. 1989). In recent years, the following six empirical relations governing galaxy-wide star formation (or "laws of star formation") emerged. These are the axioms upon which the IGIMF theory is based:
1. The IMF, ξ(m), within embedded star clusters is canonical (see Appendix A) for cloud-core densities ρ_cl ≲ 9.5 × 10^4 M⊙/pc^3, where ρ_cl = 3M_cl/(4πr_h^3) and M_cl is the original molecular-cloud core mass in gas and stars, which for a star-formation efficiency, ε, of 33% (Lada & Lada 2003) is three times the mass of the embedded cluster, M_ecl, and r_h is its half-mass radius;
2. the CSFEs populate an embedded-cluster mass function (ECMF), which is assumed to be a power law of the form ξ_ecl(M_ecl) = dN/dM_ecl ∝ M_ecl^−β;
3. the half-mass radii of CSFEs follow r_h/pc = 0.1 × (M_ecl/M⊙)^0.13 (Marks & Kroupa 2012), yielding log10(ρ_cl) = 0.61 × log10(M_ecl/M⊙) + 2.85 with ρ_cl in units of M⊙/pc^3;
4. the most-massive star in a cluster, m_max, is a function of the stellar mass of the embedded cluster, M_ecl (Weidner & Kroupa 2006; Weidner et al. 2010b, 2013b), m_max = m_max(M_ecl) (e.g. eq. 10 in Pflamm-Altenburg et al. 2007);
5. there exists a relation between the star-formation rate (SFR) of a galaxy and the most-massive young (< 10 Myr) star cluster, log10(M_ecl,max/M⊙) = 0.746 × log10(SFR) + 4.93, where the SFR is in units of M⊙ yr^−1;
6. the IMF slope, α3, of stars above 1 M⊙ depends on the initial density and the metallicity of the CSFE as given by eq. 2 below (Marks et al. 2012).
These axioms make it possible to calculate the galaxy-wide, or integrated galactic, stellar initial mass function (IGIMF) (Weidner & Kroupa 2005) explicitly in dependence of the galaxy-wide SFR and the metallicity,
ξ_IGIMF(m; SFR) = ∫ from M_ecl,min to M_ecl,max(SFR) of ξ(m ≤ m_max(M_ecl)) ξ_ecl(M_ecl) dM_ecl. (eq. 1)
Here ξ(m ≤ m_max) ξ_ecl(M_ecl) dM_ecl is the stellar IMF, with m_max limited by M_ecl (axiom 4 above), contributed by the ξ_ecl dM_ecl clusters with stellar masses in the interval [M_ecl, M_ecl + dM_ecl]. While M_ecl,max follows from the SFR-M_ecl,max relation (axiom 5 above), M_ecl,min = 5 M⊙ is generally adopted, corresponding to the smallest known CSFEs, namely the individual groups of very young (≲ 1 Myr) stars in Taurus-Auriga (Kroupa & Bouvier 2003; Kirk & Myers 2011). Due to the dependence of M_ecl,max on the SFR, the IGIMF depends on the SFR of a galaxy. In the case of starbursts, it has recently been found by Dabringhausen et al. (2009, 2012) and from an analysis of globular clusters and ultra-compact dwarf galaxies (Marks et al. 2012) that the IMF within CSFEs becomes top-heavy under very large star-formation-rate densities, and that it can be described by a density- and metallicity-dependent high-mass slope α3(x) (eq. 2) with x = −0.14 [Fe/H] + 0.99 log10(ρ_cl/(10^6 M⊙ pc^−3)). We here correct a minor error in the IMF formula of Marks et al. (2012), where their eq. 14 erroneously reads x ≥ 0.87 but should instead read x ≥ −0.87. Marks et al. (2012) find a limit of 9.5 × 10^4 M⊙/pc^3, above which the IMF in CSFEs becomes top-heavy. This translates into a cluster mass of M_ecl > 2.7 × 10^5 M⊙ when using the radius-M_ecl relation for embedded clusters from Marks & Kroupa (2012) (axiom 3 above) and assuming a star-formation efficiency of 33%. Solar metallicity is assumed for all calculations in this work. A more comprehensive overview of the theoretical and observational background of the IGIMF is given in Kroupa et al. (2013). With the IGIMF theory it was not only possible to predict the dependence of the Hα-flux to UV-flux ratio on the SFR of a galaxy (Pflamm-Altenburg et al. 2007, 2009), as has been observed by Meurer et al. (2009) and Lee et al. (2009), but the IGIMF reproduces these flux variations readily, without any parameter adjustments. Furthermore, the IGIMF very naturally reproduces the observed mass-metallicity relation of galaxies (Köppen et al. 2007) as well as the differences in metallicity between disk and bulge stars in the Milky Way (Calura et al. 2010), and reduces the need for galaxy downsizing (Recchi et al. 2009).
[Figure 2 caption fragment: ... results for dwarf galaxies assuming β = 2 = constant. This IGIMF model is extended into the starburst regime as described in Kroupa et al. (2013) and is based on axioms 1-6. Assuming additionally that β varies according to eq. 3 yields the IGIMF behaviour shown as the thick grey line.]
Star forming galaxies
Fig. 2 shows the IGIMF model with the parameters used to explain the Hα-flux to UV-flux variation and the top-heavy extension as described above (axioms 1-6) as a thick solid line, together with the constraints from Hoversten & Glazebrook (2008) and Gunawardhana et al. (2011).
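Eq. 1 can be evaluated numerically in a schematic way. The sketch below uses axioms 2 and 5 as given (β = 2, M_ecl,min = 5 M⊙, log10(M_ecl,max/M⊙) = 0.746 log10(SFR) + 4.93) and takes the canonical IMF as the standard two-part Kroupa form with an assumed upper mass limit of 150 M⊙; axiom 4 is realized here through the Weidner-Kroupa conditions (exactly one star above m_max per cluster) rather than eq. 10 of Pflamm-Altenburg et al. (2007), and the top-heavy correction (eq. 2) and the β variation (eq. 3) are omitted. This is an illustrative sketch, not the authors' code.

import numpy as np

# Schematic numerical sketch of eq. 1 under axioms 2, 4 and 5, with beta = 2 and
# M_ecl,min = 5 Msun. The canonical IMF is taken as the standard two-part Kroupa
# form with an assumed upper limit of 150 Msun; axiom 4 is realized through the
# Weidner-Kroupa conditions rather than eq. 10 of Pflamm-Altenburg et al. (2007).

M_LOW, M_UP = 0.08, 150.0
GRID = np.logspace(np.log10(M_LOW), np.log10(M_UP), 4000)

def phi(m):
    """Canonical two-part power-law IMF shape (unnormalized)."""
    m = np.asarray(m, dtype=float)
    return np.where(m < 0.5, (m / 0.5) ** -1.3, (m / 0.5) ** -2.3)

def _n_above(m_max):
    sel = GRID >= m_max
    return np.trapz(phi(GRID[sel]), GRID[sel])

def _mass_below(m_max):
    sel = GRID <= m_max
    return np.trapz(GRID[sel] * phi(GRID[sel]), GRID[sel])

def m_max_of(M_ecl):
    """m_max(M_ecl): one star above m_max and stellar mass M_ecl below it."""
    lo, hi = M_LOW, M_UP
    for _ in range(60):                          # bisection on m_max
        mid = np.sqrt(lo * hi)
        k = 1.0 / max(_n_above(mid), 1e-300)     # normalization: N(> m_max) = 1
        lo, hi = (mid, hi) if k * _mass_below(mid) < M_ecl else (lo, mid)
    return np.sqrt(lo * hi)

def igimf(m, sfr, beta=2.0, M_ecl_min=5.0):
    """Evaluate the shape of eq. 1 on the stellar masses m for a given SFR."""
    M_ecl_max = 10.0 ** (0.746 * np.log10(sfr) + 4.93)   # axiom 5
    Mecl = np.logspace(np.log10(M_ecl_min), np.log10(M_ecl_max), 300)
    dM = np.gradient(Mecl)
    out = np.zeros_like(m)
    for Me, dMe in zip(Mecl, dM):
        mmax = m_max_of(Me)
        k = Me / _mass_below(mmax)               # each cluster IMF holds mass Me
        out += (Me ** -beta) * dMe * k * phi(m) * (m <= mmax)
    return out

m = np.logspace(0.0, 2.0, 200)                   # 1-100 Msun
for sfr in (1e-3, 1.0, 1e3):
    xi = igimf(m, sfr)
    i1, i2 = np.searchsorted(m, 2.0), np.searchsorted(m, 8.0)
    a3 = -np.log10(xi[i2] / xi[i1]) / np.log10(m[i2] / m[i1])
    print(f"SFR = {sfr:8.3g} Msun/yr -> effective alpha_3 ~ {a3:.2f}")

The expected qualitative behaviour is an effective high-mass slope steeper than the canonical 2.3 at low SFRs, approaching the canonical value at high SFRs, in line with the trend described in the text.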
With the IGIMF theory it was not only possible to predict the Hα-flux to UV-flux dependence of the SFR of a galaxy (Pflamm-Altenburg et al. 2007, 2009), as has been observed by Meurer et al. (2009) and Lee et al. (2009), but the IGIMF reproduces these flux variations readily without any parameter adjustments. Furthermore, the IGIMF very naturally reproduces the observed mass-metallicity relation of galaxies (Köppen et al. 2007) as well as the differences in metallicity between disk and bulge stars in the Milky Way (Calura et al. 2010) and reduces the need for galaxy downsizing (Recchi et al. 2009).

Figure 2. IGIMF results for dwarf galaxies assuming β = 2 = constant. This IGIMF model is extended into the starburst regime as described in Kroupa et al. (2013) and based on axioms 1-6. Assuming additionally that β varies according to eq. 3 yields the IGIMF behaviour shown as the thick grey line.

Star forming galaxies

Fig. 2 shows the IGIMF model with the parameters used to explain the Hα-flux to UV-flux variation and the top-heavy extension as described above (axioms 1-6) as a thick solid line, together with the constraints from Hoversten & Glazebrook (2008) and Gunawardhana et al. (2011). The recent observational analysis by Gunawardhana et al. (2011) indicates a stronger flattening of the galaxy-wide mass function slope with increasing SFR at large SFRs than calculated for the IGIMF as based on axioms 1-6. In Weidner et al. (2011) the possibility of a systematic variation with SFR of the lower mass limit of the ECMF, M_ecl,min, and/or of the slope of the ECMF, β, was considered, and these possibilities are reconsidered here as potential solutions for the observed stronger flattening. Generally, a variation of the ECMF such that it becomes increasingly top-heavy with increasing SFR would be in accord with the general expectation (e.g. the Jeans mass increasing with increasing ambient temperature). Very little is known about M_ecl,min, as it is not straightforward to define what might be the smallest possible CSFE (Lada & Lada 2003). Considerable scatter is found for β (1.8 to 2.5; Larsen 2009), and so far no clear indication for a variation of β with the SFR has been discovered. But the ECMF of the Antennae interacting galaxies, which have a high SFR ≈ 20 M⊙ yr^-1 (Zhang et al. 2001), seems to be flatter than the cluster mass functions in normal spirals (fig. 6 in Larsen 2009). Direct measurement might be difficult, as clusters are known to be destroyed rapidly (Kroupa & Boily 2002; Boily & Kroupa 2003) and it is not clear whether this process is mass dependent or not. The Gunawardhana et al. (2011) study initially uses a single-slope Salpeter IMF to determine the SFRs before introducing a variation of the slope. This prompts the question whether their results can be directly used with the IGIMF, which is based on the canonical IMF as described in Appendix A. However, the similarity of the Gunawardhana et al. (2011) results with studies using different methods and IMFs, like Davé (2008) and Ballero et al. (2007), gives confidence that the impact of the differences cannot be very large. Furthermore, Ferré-Mateu et al. (2013) studied the impact of IMF variations on recovered galaxy properties, like the total mass, for single-slope and two-slope IMFs and found that only for very steep IMFs does the recovered galaxy mass vary strongly, while for flat slopes the variation is very limited. The small α3 values at high SFRs (α3 ≈ 2 at SFR ≈ 10^2 M⊙ yr^-1 from Davé 2008; Gunawardhana et al. 2011) show a limitation of the present IGIMF theory as based on axioms 1-6, in that α3 cannot reach such small values at this SFR. To address this we need to introduce an additional axiom 7 to the IGIMF, which can be achieved by either adjusting the power-law index, β, of the ECMF or the minimum cluster mass with the SFR. We therefore suggest a relation, eq. 3, between β and the SFR for 1 ≲ SFR/(M⊙ yr^-1) ≲ 50, in order to reproduce the Gunawardhana et al. (2011) constraints shown in Fig. 2, where β is the slope of the ECMF and the SFR is in units of M⊙ yr^-1. The equation is arrived at by varying β for a given SFR and calculating the resulting IGIMF slope. When this slope is within 0.01 dex of the SFR-α3 relation of Gunawardhana et al. (2011), the value is used for the fit. While the Gunawardhana et al. (2011) data cover SFRs only up to 50 M⊙ yr^-1, we apply eq. 3 to larger SFRs. An alternative possibility would be β = 2 = constant but a changing lower mass limit of the ECMF, M_ecl,min; the corresponding fit, eq. 4, results in the same IGIMF changes as eq. 3. Both descriptions have been tested and deliver identical results but, as stated above, variations of M_ecl,min are at the moment virtually unconstrained by observations.
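Since the coefficients of eq. 3 are obtained numerically, the fitting procedure itself is easy to sketch. The snippet below reuses igimf() from the previous sketch and scans β at a fixed SFR until the effective IGIMF slope above 1 M⊙ matches a target α3 within 0.01 dex; the example target value stands in for the Gunawardhana et al. (2011) SFR-α3 relation, whose exact form is an assumption here.

```python
import numpy as np
# reuses igimf() from the previous sketch

def igimf_slope(sfr, beta, m1=1.5, m2=60.0):
    # effective power-law index above ~1 Msun from a log-log straight-line fit
    ms = np.logspace(np.log10(m1), np.log10(m2), 40)
    vals = np.array([igimf(m, sfr, beta=beta) for m in ms])
    ok = vals > 0
    if ok.sum() < 2:
        return float("inf")            # no massive stars form at this SFR
    slope, _ = np.polyfit(np.log10(ms[ok]), np.log10(vals[ok]), 1)
    return -slope

def fit_beta(sfr, target_alpha3, tol=0.01):
    # scan beta downwards (flatter ECMF) until the IGIMF slope matches
    for beta in np.arange(2.0, 1.0, -0.005):
        if abs(igimf_slope(sfr, beta) - target_alpha3) < tol:
            return round(beta, 3)
    return None

# hypothetical target, Gunawardhana-like: alpha3 ~ 2 at SFR ~ 100 Msun/yr
# print(fit_beta(100.0, 2.0))
```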
It may become possible to obtain constraints with in-depth HST observations or with the upcoming JWST. While it is possible to explain the Gunawardhana et al. (2011) results with a varying β or M_ecl,min (our 7th axiom), other explanations have been proposed. For example, a diffuse mode of star formation with a truncated or a variable IMF (Meurer et al. 1995; Larsen 2004) might also be able to reproduce the IMF slope behaviour.

Figure 3. The dependence of the logarithmic IGIMF (eq. 1) on the SFR of a galaxy. The IGIMF is normalised to the same values at m < 1 M⊙. The IGIMF is plotted with thick solid lines. It uses the canonical IMF, which becomes top-heavy at solar metallicity for embedded-star-cluster densities (gas + stars) ρ_cl > 9.5 × 10^4 M⊙/pc^3 (eq. 2), with a constant ECMF, β = 2, M_ecl,min = 5 M⊙ and the m_max-M_ecl relation (axiom 4). The corresponding SFRs for the thick solid lines are indicated as numbers in the plot (SFR = 10^-5, 10^-3, 10^-1, 10^1, 10^3, 10^5) and are in units of M⊙ yr^-1 (from left to right). The case when β depends on the SFR for SFR ≳ 1 M⊙ yr^-1 (eq. 3) is shown with thick long-dashed lines for SFRs of 10^1, 10^3, 10^5 M⊙ yr^-1 from bottom to top. The thin lines are IMFs with different power-law indices, α′, for m > 1.3 M⊙: α′ = 1.5, 1.7, 1.9, 2.1, 2.3, 2.4, 2.6, 2.8, 3.0, 3.5, 4.0 (top to bottom), whereby the canonical value α′ = 2.3 = α3 is shown as the thick short-dashed line. Adopted from fig. 35 of Kroupa et al. (2013). Note that the IGIMF has α3 ≈ 2.6 for SFR = 1 M⊙ yr^-1, in agreement with the Scalo (1986) determination of the Galactic-field IMF.

The resulting form of the IGIMF calculated from the above seven axioms is shown in Fig. 3 in dependence of the galaxy-wide SFR.

Elliptical galaxies

With the IGIMF theory as defined by eq. 1 and axioms 1-7 it is possible to account for the observationally determined α3 variations for late-type galaxies that have 10^-5 ≲ SFR/(M⊙ yr^-1) ≲ 10^4 (Fig. 2). Can the same IGIMF theory also account for the properties of E galaxies? Elliptical galaxies are known to be very old and to typically have formed in major bursts with 10 ≲ SFR/(M⊙ yr^-1) ≲ 10^4. According to the IGIMF theory they ought to have produced a large fraction of stellar remnants (Fig. 3). To compare the Cappellari et al. (2012) results for E galaxies with IGIMF models it is necessary to calculate the M/L ratios for the models. As available stellar population synthesis models do not allow for a variable IMF, these models had to be calculated using our own code. In order to do so, the mass axis of the IGIMF is divided into 2000 logarithmic mass bins, with 0.1 M⊙ used as the lower mass limit and 100 M⊙ as the upper limit. The centre of each bin is treated as a single star and evolved over 15 Gyr in 1 Myr time-steps using stellar evolution models (references for the models are listed in Tab. 1 and the initial-final mass relation used is detailed in Tab. 2). At each step the effective temperature, T_eff, and surface gravity, log10 g, of the models are used to locate the appropriate stellar atmosphere model (Hauschildt et al. 1999). The resulting spectrum is then integrated over the SDSS r-band filter curve (Gunn et al. 1998) to obtain the fraction of the total luminosity of the stars in the SDSS r-band. The resulting Lr is averaged over a 10 Myr time interval to match the δt = 10 Myr time-scale necessary to fully populate the ECMF in the IGIMF models (a schematic of this procedure is sketched below).
With the IGIMF models the number of stars formed per δt in each mass bin is calculated and multiplied with Lr. The luminosity and mass of the stars are updated every time step according to the age of each population. Two different types of star-formation histories (SFH) are used (examples are shown in Fig. 4). The SFR is either assumed to be constant between 10^-5 M⊙ yr^-1 and 10^4 M⊙ yr^-1 over 1 Gyr (left panel of Fig. 5) or exponentially declining on 100 and 1000 Myr exponential time scales (right panel of Fig. 5). This leads to an IGIMF which is top-heavy early on (low metallicity, high SFR) and which evolves towards an increasingly top-light and bottom-heavy IGIMF as the metallicity increases and the SFR decreases (Fig. 6). It should be noted here that Vazdekis et al. (1996) and Vazdekis et al. (1997) already proposed an IMF change in E galaxies in a two-phase model. They start with a flat IMF early on (< 0.5 Gyr) which later turns into a Salpeter-like one, in order to achieve significantly improved fits for various line-strengths and colours for ellipticals. The Vazdekis et al. (1996) model, our IGIMF model III (see § 4) and the observational results by Ferreras et al. (2013) are compared in Fig. 6. It is also important to notice that a purely bottom-heavy IMF for giant elliptical galaxies as proposed by van Dokkum & Conroy (2010) is at odds with the chemical evolution of these objects, as they have solar or super-solar metallicities which are impossible to reproduce with very steep IMFs. However, a top-heavy galaxy-wide IMF burst with a later-on bottom-heavy galaxy-wide IMF works well (Weidner et al. 2013a). In Fig. 5, the r-band IGIMF M/L ratios, divided by M/L ratios obtained by the same procedure as described above but using a constant Salpeter IMF from 0.1 to 100 M⊙, are plotted in dependence of the IGIMF M/L ratio. For the IGIMF models eq. 3 is applied to determine β for the given SFR. As E galaxies are dominated by old populations, in Fig. 5 only the M/L ratios for stellar populations with ages between 10 and 14 Gyr are plotted. The Cappellari et al. (2012) results are shown as light grey dots in Fig. 5, assuming no dark matter within the effective radii of the galaxies. But because only very little dark matter is expected inside these radii, the dependence on the type and shape of a potential dark matter halo is negligible (Cappellari et al. 2012). The red solid line is the mean of the observations.

Figure 4. Typical star-formation histories used for the calculation of the IGIMF models for E galaxies. The solid lines refer to galaxies which form in total 10^11 M⊙, either constantly over 1 Gyr or exponentially declining over 100 Myr or 1 Gyr. The dashed lines are SFHs for a total mass of 10^10 M⊙, formed either with a constant SFR over one Gyr or with an exponentially declining SFR over 100 Myr. The masses shown are the total masses prior to stellar evolution reducing the present-day masses of the galaxies, which has been taken into account in the calculations of the M/L ratios.

To compare the model results with a different set of observations, the Dabringhausen et al. (2008) compilation of V-band M/L ratios of elliptical galaxies (triangles) and bulges of spiral galaxies (circles) is plotted versus the dynamical mass of the systems, together with the model results (solid lines), in Fig. 7. For the lines it was assumed that all stars formed between 6 and 12 Gyr ago within 1 Gyr with an exponentially declining SFR.
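The M/L bookkeeping described above can be summarised in a short sketch. Everything physical in it is a stub: the lifetime, luminosity and initial-final mass laws below are crude hypothetical toys standing in for the stellar evolution tracks, atmosphere models and r-band filter integration of Tabs. 1 and 2, so only the structure (2000 logarithmic mass bins, populations laid down every δt, masses and luminosities updated with age) reflects the text.

```python
import numpy as np

N_BINS, M_LO, M_HI, DT_MYR = 2000, 0.1, 100.0, 10.0
edges = np.logspace(np.log10(M_LO), np.log10(M_HI), N_BINS + 1)
centres = np.sqrt(edges[:-1] * edges[1:])       # bin centres, treated as single stars

def lifetime_myr(m):
    return 1.0e4 * m ** -2.5                    # toy main-sequence lifetime

def luminosity(m, age_myr):
    # toy stand-in for track + atmosphere + r-band filter integration
    return m ** 3.5 if age_myr < lifetime_myr(m) else 0.0

def present_mass(m, age_myr):
    if age_myr < lifetime_myr(m):
        return m                                # still a live star
    if m < 8.0:
        return 0.6                              # toy white-dwarf mass
    return 1.4 if m < 25.0 else 10.0            # toy neutron-star / black-hole masses

def ml_ratio(sfh, counts_per_bin, t_gyr):
    """sfh(t_myr) -> SFR; counts_per_bin(sfr) -> array of stars formed per
    mass bin in one delta-t interval (from the IGIMF)."""
    total_m = total_l = 0.0
    for t_form in np.arange(0.0, t_gyr * 1.0e3, DT_MYR):
        n = counts_per_bin(sfh(t_form))
        age = t_gyr * 1.0e3 - t_form
        total_m += np.sum(n * np.vectorize(present_mass)(centres, age))
        total_l += np.sum(n * np.vectorize(luminosity)(centres, age))
    return total_m / max(total_l, 1e-30)
```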
The agreement between models and observations is reasonable, considering the model and observational uncertainties, and the trend of declining M/L ratio with decreasing mass is reproduced. The faster decline of the observed M/L_V values with decreasing mass can be seen as possible evidence that star-formation histories were more extended in lower mass systems than in massive E galaxies, as is also reported for example by Recchi et al. (2009) or Rogers et al. (2010). The IGIMF calculations shown here thus cover the range of the observations very well. It can be concluded that the IGIMF theory based on the seven axioms of § 3.1 and § 3.2 explains the Cappellari et al. (2012) results. Thus, by having constrained β = fn(SFR) (eq. 3) using late-type star-forming galaxies, we arrive at a consistent description of early-type galaxies without needing any further adjustments. Nevertheless, additional tests of the IGIMF theory are required.

Figure 6. Variation of the IGIMF index α3 with time during the formation of elliptical galaxies. The solid line shows α3 for IGIMF model III for a galaxy with 10^12 M⊙ and an exponentially declining SFH over 1 Gyr as shown in Fig. 4, while the dashed line is the Vazdekis et al. (1996) model, which uses a short burst (0.2 to 1 Gyr) and then an exponentially declining star-formation rate. The shaded area marks the range of IMF power-law indices found by Ferreras et al. (2013) for ellipticals with central velocity dispersions between 250 and 300 km/s, derived from fitting line indices for different tracers with single stellar population models and the Starlight code (Cid Fernandes et al. 2005).

The Milky Way (MW) case

In the MW many star clusters and their IMFs are well studied, but the galaxy-wide IMF of the MW is not that well constrained. In his seminal work, Salpeter (1955) found a power-law index of α_Salpeter = 2.35 for stars in the field with masses between 0.4 and 10 M⊙. Later, Scalo (1986), with more sophisticated modelling and better star count data, corrected the field power-law index above a few tenths of a M⊙ to α_Scalo = 2.7, but with a large uncertainty of 0.3 dex (Kroupa et al. 1993). Using the IGIMF as defined by axioms 1-7 results, for a SFR of 1 M⊙ yr^-1, in a power-law index for stars above 1 M⊙ of α3 = 2.6, somewhat steeper than the Salpeter (1955) value and well consistent with the Scalo (1986) determination. No good determination of the IMF of the stellar population of the MW halo exists, but recent work on the bulge of the MW indicates a difference in the IMF in comparison with the MW field. Ballero et al. (2007) used chemical evolution modelling to investigate how to reproduce the metallicity distribution of stars in the MW bulge. They found that a model with rapid star formation (a formation period of 0.1 Gyr) and an IMF power-law index of α_Ballero = 1.95 for stars above 1 M⊙ is necessary to explain the observed metallicities.

Figure 7. The V-band mass-to-light ratios in dependence of dynamical masses for observed elliptical galaxies (triangles) and bulges of spiral galaxies (circles) from a compilation in Dabringhausen et al. (2008). Additionally plotted as solid lines is the range of mass-to-light ratios of models with an exponential SFH with an exponential time scale of 1 Gyr, evaluated at ages between 6 and 12 Gyr. Generally, the lower end of the M/L ratios corresponds to an age of 6 Gyr and the upper end to 12 Gyr, though the ratios are not exactly linear in time. As the observations tend to have lower mass-to-light ratios than the models at lower galaxy masses, it could be that these galaxies had more extended SFHs. A similar trend has been found in chemical evolution models of galaxies able to reproduce the observed [α/Fe] vs. velocity dispersion relation (Recchi et al. 2009).
Assuming that the MW bulge comprises about 10% of the MW stellar population (M_bulge = 10^10 M⊙) and using the 0.1 Gyr formation period (corresponding to a mean SFR of about 100 M⊙ yr^-1), the IGIMF theory yields a power-law index α3 = 1.94, in excellent agreement with the Ballero et al. (2007) value. Thus, the galaxy-wide IMF of the MW is very well described by the IGIMF theory.

THE EXPECTED NUMBER OF OB STARS FROM THE IGIMF AS A FUNCTION OF THE SFR

As a test of the IGIMF theory it is possible to predict the number of O and B stars for whole galaxies for a given SFR. These numbers then need to be compared to observational results for galaxies with a range of SFRs. Note that the parameters of the IGIMF model are not freely adjusted but are based on the axioms of § 3.1 and § 3.2, which follow from observational constraints (see also Kroupa et al. 2013). The number of O stars for each model is relatively easy to calculate for a given SFR, as the maximum lifetime of O stars is shorter than δt = 10 Myr, which is the typical timescale for a population of star clusters to hatch from their embedded phase (Egusa et al. 2004, 2009). Therefore, with m1 = 18 M⊙ and m2 = 150 M⊙ as the mass limits for O stars, the numbers of O stars for the different SFRs listed in Tab. 3 are calculated as the integral of the IGIMF over this mass range per δt interval (eq. 5). When a new period of star formation starts after δt, all the O stars formed in the previous period are gone as supernovae. Calculating the number of B stars is more complicated, as B stars have lifetimes of up to several hundred Myr. In order to derive this number, a steady state of forming and dying B stars is assumed. This is done by dividing the range of masses of B stars (m1 = 3 to m2 = 18 M⊙) into 15 bins (each 1 M⊙ wide) and taking the mean time, τB, that the stars in each bin are of spectral type B from stellar evolution models (Cordier et al. 2007; Meynet & Maeder 2003). The τB are then multiplied by the number of stars in each mass bin derived using eq. 5 and the IGIMF models to obtain the total number of B stars. For example, if τB is 200 Myr for a given mass bin, and the number of B stars in this bin formed during δt = 10 Myr is 1000, the total equilibrium contribution to the number of B stars from this bin would be 1000 × 20 = 20000. The total number of B stars is arrived at by summing the contributions of each mass bin.
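This O- and B-star bookkeeping translates directly into code. The sketch below reuses igimf() from the earlier sketch and is illustrative only: that toy igimf() is not normalised so that one δt epoch forms SFR × δt of stellar mass (as eq. 5 requires for absolute numbers), and the τB values would in practice come from the stellar evolution models cited above, so the lifetimes in the usage comment are hypothetical.

```python
import numpy as np
# reuses igimf(m, sfr, ...) from the earlier sketch; for real numbers it must
# first be normalised so that one delta-t epoch forms SFR * delta-t of stars

DT_MYR = 10.0

def n_stars(sfr, m1, m2, n=200):
    # eq. 5 analogue: stars formed per delta-t with masses in [m1, m2]
    ms = np.logspace(np.log10(m1), np.log10(m2), n)
    return np.trapz([igimf(m, sfr) for m in ms], ms)

def n_o_stars(sfr):
    # O stars (18-150 Msun) all die within delta-t, so one epoch suffices
    return n_stars(sfr, 18.0, 150.0)

def n_b_stars(sfr, tau_b_myr):
    # steady state: each 1-Msun bin contributes N_bin * tau_B / delta-t,
    # e.g. 1000 stars with tau_B = 200 Myr -> 1000 * 20 = 20000 B stars
    total = 0.0
    for m_lo in range(3, 18):
        total += n_stars(sfr, m_lo, m_lo + 1) * tau_b_myr[m_lo] / DT_MYR
    return total

# usage with hypothetical B-phase lifetimes (real values would come from
# Cordier et al. 2007 and Meynet & Maeder 2003):
# n_b_stars(1.0, {m: 300.0 - 15.0 * m for m in range(3, 18)})
```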
These predicted relative numbers of B and O type stars for a given SFR are shown in Table 3. In the first column the different SFRs are listed. The second column gives the absolute number of B stars expected in a galaxy with the given SFR assuming an invariant canonical IMF. These values are the mean numbers to be expected from the canonical IMF; between individual galaxies a certain level of statistical variation is to be expected. The third column gives the same, though for O stars instead of B stars. Columns 4 and 5 give the relative B and O star numbers, respectively, for the IGIMF with a constant α3 of 2.35 for the IMF within the star clusters and a constant β of 2 for the ECMF (IGIMF model I). Columns 6 and 7 give the relative numbers of B and O stars formed when assuming that the IMF of very massive clusters is top-heavy (see eq. 2), leading to a top-heavy IGIMF (IGIMF model II), while the ECMF slope still has a constant value of 2. For columns 8 and 9 (IGIMF model III), additionally β varies with the SFR according to eq. 3. The values in columns 4 to 9 are all relative to the canonical IMF numbers. In order to get the actual numbers, columns 4, 6 and 8 have to be multiplied by column 2, while columns 5, 7 and 9 need to be multiplied by column 3. For example, for a galaxy with a SFR of 10^-4 M⊙ yr^-1, about 385 B stars and 3 O stars are to be expected if the IGIMF in that galaxy is identical to the canonical IMF. If, instead, the IGIMF model I with β = 2 is applied, there should be no O stars and 212 B stars (385 × 0.55). Those cases where the results of the top-heavy IGIMF models differ from the non-top-heavy IGIMF are marked in boldface. Table 3 is visualised in Fig. 8. It shows that B stars are generally not very well suited to differentiate between the models. For SFRs above 10^-3 M⊙ yr^-1, the number of B stars is always roughly within 20% of the value for an invariant IMF. Only for SFRs below 10^-3 M⊙ yr^-1 does the number of B stars in the IGIMF models drop significantly below what would be expected for the canonical IMF. In the case of O stars the situation is quite different. For SFRs below 10^-1 M⊙ yr^-1, the expected numbers of O stars for the IGIMF models are always much below the canonical IMF values. And for high SFRs (above 1 M⊙ yr^-1) it becomes possible to separate the different IGIMF models. With 4% to 71% of the number of O stars, the values for the non-top-heavy IGIMF (IGIMF model I) are always lower than the canonical IMF predictions. The top-heavy IGIMF with constant β (IGIMF model II) has 22% to 80% more O stars than the canonical IMF for SFRs ≳ 10^2 M⊙ yr^-1. In the case of the top-heavy IGIMF with variable β for SFR ≳ 1 M⊙ yr^-1 (IGIMF model III), the number of O stars is 31% to 184% larger than would be expected from the invariant canonical IMF. A different way to visualise these results is, for example, to plot how the B-band magnitude, MB, changes with SFR for the integrated population. This is done in Fig. 9, though the impact of the IGIMF on optically visible photometric bands is unfortunately rather small. Haas & Anders (2010) studied the expected number of Milky Way O stars detectable by the GAIA astrometry mission and predicted that for a SFR of 1 M⊙ yr^-1 between 1000 and 4000 O stars should be observable for IGIMF models with different assumptions on the underlying IMF, the cluster mass function and the sampling method. As can be seen in Table 3, the predicted number of O stars for a SFR of 1 M⊙ yr^-1 is between 16157 and 40400. The factor of about 10 between both results originates from the assumption in Haas & Anders (2010) that GAIA will only be able to observe about 10% of the Galactic O stars; therefore, both results agree.

DISCUSSION & CONCLUSIONS

Whether or not the IMF of whole galaxies, the IGIMF, is identical to the IMF as observed in individual pc-scale star formation events or in star clusters is of great importance for the modelling of stellar populations, the chemical evolution of galaxies and our understanding of star formation in general. While little to no indication for systematic variations of the slope of the IMF in star clusters in the Milky Way and the Magellanic Clouds has been found so far (Bastian et al. 2010; Kroupa et al. 2013), increasing evidence for a variable galaxy-wide IMF is seen in unresolved extragalactic observational data (Lucatello et al.
2005; van Dokkum 2008; Davé 2008; Elmegreen 2008; Wilkins et al. 2008; Habergham et al. 2010; Dabringhausen et al. 2012). Multi-wavelength and spectroscopic studies of large volume-limited samples of galaxies give further evidence for systematic IMF variations. Both late-type (Hoversten & Glazebrook 2008; Meurer et al. 2009; Lee et al. 2009; Gunawardhana et al. 2011) and early-type (Vazdekis et al. 1996, 1997; Cappellari et al. 2012, 2013b,a) galaxies show evidence for the same behaviour, namely that at high SFRs the stellar populations in galaxies are top-heavy. Furthermore, evidence for a variation of the IMF is also available for resolved populations, such as the Galactic field (Scalo 1986) and the precursors of the Fornax dwarf galaxy (Li et al. 2013) and the Sagittarius dwarf galaxy (McWilliam et al. 2013), in good agreement with the IGIMF theory (Kroupa et al. 2013).

Table 3. The expected number of O and B stars in a galaxy in dependence of the SFR of the galaxy. Columns 2 and 3 are the expected numbers of O and B stars for the canonical invariant IMF between the mass limits of 0.1 and 100 M⊙. Columns 4 to 9 are the numbers for 3 different IGIMF models. Columns 4 and 5 are for IGIMF model I with constant α3 = 2.35 for the IMF and constant β = 2 for the ECMF. Columns 6 and 7 are for IGIMF model II, which is top-heavy because α3 = fn(ρ_ecl) as described by eq. 2. For columns 8 and 9 (model III), α3 varies as for columns 6 and 7 but also β = fn(SFR). In columns 4, 6 and 8 the B star numbers are given as fractions of column 2, while in columns 5, 7 and 9 the O star numbers are given as fractions of column 3. In order to, for example, calculate the actual number of expected O stars for IGIMF model I with constant α3 and β for a SFR of 1 M⊙ yr^-1, the 32313 expected O stars from the canonical IMF have to be multiplied by 0.51, and therefore only 16157 O stars are to be expected.

Adopting the observed correlations and distribution functions in star-forming galaxies (the 7 axioms of the IGIMF theory presented in § 3.1 and § 3.2), it has become evident that the IMF of a whole galaxy must differ from the canonical IMF, thus implying the IGIMF to vary with the SFR. One possible explanation for the top-heavy IMFs at very high star-formation rates could be cosmic-ray heating of Giant Molecular Clouds (GMCs) in starbursts (Papadopoulos 2010; Papadopoulos et al. 2011; Papadopoulos & Thi 2013), which may have a two-fold impact on the IGIMF. It might favour the collapse of more massive clouds, thus leading to an increase of M_ecl,min and/or a decrease of β, but also within the most-massive clusters it can induce a top-heavy IMF, as discussed in Weidner et al. (2011), by inhibiting the fragmentation of molecular cloud cores. We correct eq. 14 of Marks et al. (2012) (our eq. 2). In dwarf galaxies, on the other hand, the lack of shear forces might favour the collapse of GMCs into a single or very few clusters instead of many (Weidner et al. 2010a), thus changing M_ecl,min for such objects as well. With the Gunawardhana et al. (2011) results on star formation in late-type galaxies it is possible to deduce a relation between the power-law index of the ECMF, β, and the SFR (eq. 3), suggesting a reduction of the formation of low-mass star clusters in starbursts (axiom 7 of the IGIMF theory). That is, we have unearthed that the embedded cluster mass function may become top-heavy in starbursting galaxies.
Remarkably, with this result it thus follows that the observed change of the M/L values of E galaxies with their mass can be readily understood within the IGIMF framework without further adjustment. While the results presented in this contribution demonstrate that the IGIMF theory is in good agreement with the latest observational data, it is necessary to test it further. To facilitate such tests, the expected numbers of O and B stars in galaxies with different SFRs are presented. They can be used to directly compare observational results with theoretical expectations. The best regime to discriminate between the IGIMF and the canonical IMF is to look for O stars in dwarf galaxies with SFRs around 10^-2 M⊙ yr^-1. Here only around 80 O stars are expected, whereas the canonical IMF predicts about 320. Likewise, lower SFRs are suitable, too. The different IGIMF models evaluated here are indistinguishable from each other at low SFRs. For high SFRs the IGIMF models I-III studied here predict different, very large numbers of O stars, but these would be far too many for direct counting. Only integrated quantities like the Hα luminosity can be used for observational tests in these cases. But note that the SFRs used here are total SFRs, rather than Hα-based SFRs, which sensitively depend on the number of O stars. B stars are only suitable to distinguish between an IGIMF and the canonical IMF for SFRs below 10^-4 M⊙ yr^-1. Above this limit their numbers deviate only mildly from the values for the canonical IMF. While still widely used in cosmology and extragalactic stellar population studies, an invariant IMF for galaxies is excluded by mounting observational evidence. These independent observational results are readily explained by the IGIMF theory, and the corresponding models fit relatively well (Figs. 2 and 5), showing that the IGIMF is a relevant description of stellar populations in galaxies. This conclusion is independent of whether a constant SFH or an exponentially declining one is used. The IGIMF theory, based on the knowledge of the local star-formation process, is therefore a useful description of large-scale star formation in whole galaxies. Thus, with our knowledge of star formation in the Milky Way, it is in principle possible to explain all observed IMF variations in extragalactic sources. An extension of this work to include the results of Ferreras et al. (2013), who found a strong dependence of the galaxy-wide IMF power-law index on the central velocity dispersion of elliptical galaxies, is currently underway.

APPENDIX A: THE CANONICAL IMF

The canonical IMF is a two-part power law,

ξ(m) = k (m/M⊙)^(-α1) for 0.08 ≤ m/M⊙ < 0.5, and ξ(m) = k′′ (m/M⊙)^(-α2) for 0.5 ≤ m/M⊙, with α1 = 1.3 and α2 = 2.3,

where dN = ξ(m) dm is the number of stars in the mass interval m to m + dm, and the exponents αi represent the standard or canonical IMF (Kroupa 2001, 2002; Kroupa et al. 2013). For a numerically practical formulation see Pflamm-Altenburg & Kroupa (2006). An equivalent log-normal form is provided by eq. 4-56 (fig. 4-28) in Kroupa et al. (2013). The advantages of the multi-part power-law description are its easy integrability and, more importantly, that different parts of the IMF can be changed readily without affecting other parts. Note that this form is a two-part power law in the stellar regime, that brown dwarfs contribute only about 1.5 per cent by mass, and that differing binary properties near m_H imply that most brown dwarfs constitute a separate population (k′ ≈ 1/3; Thies & Kroupa 2007).
The observed IMF is today understood to be an invariant Salpeter/Massey power law (Salpeter 1955; Massey 2003) above 0.5 M⊙, with a slope independent of the cluster density and of the metallicity for metallicities Z ≳ 0.002 (Massey & Hunter 1998; Sirianni et al. 2000, 2002; Parker et al. 2001; Massey 1998, 2002, 2003; Wyse et al. 2002; Bell et al. 2003; Piskunov et al. 2004; Pflamm-Altenburg & Kroupa 2006). Furthermore, unresolved multiple stars in young star clusters are not able to mask a significantly different slope for massive stars (Maíz Apellániz 2008; Weidner et al. 2009). Kroupa (2002) has shown that there are no trends with present-day physical conditions and that the distribution of measured high-mass slopes, α3, is Gaussian about the Salpeter value, thus allowing us to assume for now that the stellar IMF is invariant and universal in each pc-scale star-formation event. There is evidence for a maximal mass for stars (m_max* ≈ 150 M⊙; Weidner & Kroupa 2004), a result later confirmed by several independent studies (Oey & Clarke 2005; Figer 2005; Koen 2006). However, according to Crowther et al. (2010), m_max* may be as high as 300 M⊙, though Banerjee & Kroupa (2012) could show that these super-massive objects are very likely mergers of stars formed with 150 M⊙ or less. Marks et al. (2012) uncovered a systematic trend towards top-heaviness (small α3) with increasing star-forming cloud density (see eq. 2).
Overcoming the bottleneck to widespread testing: a rapid review of nucleic acid testing approaches for COVID-19 detection

The current COVID-19 pandemic presents a serious public health crisis, and a better understanding of the scope and spread of the virus would be aided by more widespread testing. Nucleic-acid-based tests currently offer the most sensitive and earliest detection of COVID-19. However, the "gold standard" test pioneered by the U.S. Centers for Disease Control and Prevention takes several hours to complete and requires extensive human labor, materials such as RNA extraction kits that could become in short supply, and relatively scarce qPCR machines. It is clear that a huge effort needs to be made to scale up current COVID-19 testing by orders of magnitude. There is thus a pressing need to evaluate alternative protocols, reagents, and approaches to allow nucleic-acid testing to continue in the face of these potential shortages. There has been a tremendous explosion in the number of papers written within the first weeks of the pandemic evaluating potential advances, comparable reagents, and alternatives to the "gold-standard" CDC RT-PCR test. Here we present a collection of these recent advances in COVID-19 nucleic acid testing, including both peer-reviewed and preprint articles. Due to the rapid developments during this crisis, we have included as many publications as possible, but many of the cited sources have not yet been peer-reviewed, so we urge researchers to further validate results in their own laboratories. We hope that this review can urgently consolidate and disseminate information to aid researchers in designing and implementing optimized COVID-19 testing protocols to increase the availability, accuracy, and speed of widespread COVID-19 testing.

OVERVIEW

On March 11th, 2020, the World Health Organization deemed COVID-19 a global pandemic (World Health Organization 2020). As of April 26th, SARS-CoV-2 infections have been confirmed in almost 3 million people worldwide, yet even this staggering figure is likely to be an underestimate (Elflein 2020). To have any actionable impact on controlling the propagation of the pandemic, tests should be performed repeatedly on a large fraction of the population in order to detect outbreaks before they spread. Current estimates of the testing capacity needed to end the pandemic are in the range of tens of millions of tests per day in the U.S., far above the ∼145,000 tests currently conducted nationally (Goodnough et al. 2020; Irfan 2020). Massively scaling up testing by orders of magnitude may be aided by an innovative combination of the molecular tools presented here. Current testing approaches fall into two categories: nucleic acid or serological. Nucleic-acid tests directly probe for the RNA of viruses swabbed from a patient's throat or nasal passage (Fig. 1), while serological tests detect antibodies present in the patient's serum. During the first days of infection, patient viral titers are high, and a single patient nasopharyngeal swab may harbor close to 1 million SARS-CoV-2 viral particles (Wölfel et al. 2020). However, patient IgG and IgM antibody production, termed seroconversion, typically occurs 5-10 d after the onset of initial symptoms (Wölfel et al. 2020). Therefore, nucleic acid tests offer the earliest and most sensitive detection of SARS-CoV-2 and will be the subject of this review.
The RT-PCR test pioneered by the CDC has been deemed the "gold standard" for clinical diagnosis but takes hours to perform and requires specialized reagents, equipment, and training (Centers for Disease Control and Prevention 2020). In the first few weeks of the global SARS-CoV-2 pandemic, required reagents were already in short supply, and researchers and testing centers reported issues acquiring almost every necessary reagent from commercial suppliers, from patient nasopharyngeal swabs to lysis buffer to RNA extraction kits (Akst 2020; Baird 2020). Some testing centers have even been running multiple testing protocols side-by-side to increase throughput and allow for decreased reliance on any single reagent (Soucheray 2020). A few commercial test systems exist but are primarily designed to give single-patient results (Abbott 2020; Xpert Xpress SARS-CoV-2, https://www.fda.gov/media/136314/download). A scalable, high-throughput platform will be required to deliver millions of tests per day. Here we investigate recent advances and approaches to nucleic-acid testing for COVID-19. We highlight some findings from research groups who have compared commercial reagents or created homemade solutions in order to decrease cost or reliance on particular commercial reagents. We also outline several alternative nucleic-acid tests involving isothermal amplification or CRISPR-based detection. Finally, we examine some recent applications of specialized techniques such as sequencing, digital PCR, and DNA nanoswitches as tools for COVID-19 detection. We have tried to be as exhaustive as possible throughout this review, but due to the rapid daily developments in testing we may have unwittingly excluded some published works. Another review by Shen et al. (2020) published in late February may be useful to readers. In this review, we greatly expand the scope to evaluate and compare many more recently published articles, address advancements in sample lysis, direct addition, and novel detection methods, and include a quantitative comparison of these methods covering workflow time, cost, and limit of detection. The general workflow for RT-PCR tests, such as that approved by the CDC, includes three main steps: sample collection and transport, lysis and RNA purification, and amplification (Fig. 2). Typically, a clinician collects a nasopharyngeal swab and transfers it to a vial containing a few milliliters of viral transport medium (VTM), which is transported to a laboratory for testing. Chemical lysis buffers or heating may be used to lyse and inactivate viral particles. The viral RNA is then purified from a fraction of the swab sample (typically 1/20th of the swab) using column-based RNA purification kits or magnetic beads. The eluted purified RNA is then amplified using a one-step master mix containing reverse transcriptase and DNA polymerase enzymes with three primers targeting specific regions of the viral genome. Primers targeting a human gene, such as RNaseP, are also included as a positive control for swab collection, RNA extraction, and amplification. A spike-in control RNA, such as MS2 bacteriophage genomic RNA, may alternatively be used.

FIGURE 1. An overview of COVID-19 nucleic acid testing. Samples collected via nasopharyngeal swab are lysed and inactivated, and an amplification reaction is performed using either a crude swab sample or purified RNA. Amplification of specific viral sequences by RT-PCR, LAMP, or RPA is detected using fluorescent or colorimetric dyes, sequence-specific CRISPR-Cas nuclease cleavage of a reporter, or separation of reaction products on a lateral flow dipstick.
Amplified products can be detected using TaqMan probe fluorescence or DNA-intercalating dyes, and a threshold cycle of amplification is set to distinguish positive from negative results. A test result is typically considered positive if amplification is observed for two or more viral targets, while it is considered negative if amplification is observed for the control RNA but for none of the viral targets (Centers for Disease Control and Prevention 2020).
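As a minimal sketch of this decision logic, the following Python snippet calls a result from a dictionary of Ct values. The Ct cutoff of 40 cycles and the target names (N1, N2, RNaseP) are illustrative assumptions rather than prescribed values.

```python
CT_CUTOFF = 40.0   # assumed cutoff; amplification at/after this = not detected

def call_result(ct):
    """ct: dict of target -> Ct value, or None if no amplification,
    e.g. {"N1": 24.1, "N2": 25.0, "RNaseP": 28.3}."""
    detected = {t for t, c in ct.items() if c is not None and c < CT_CUTOFF}
    viral = detected - {"RNaseP"}
    if len(viral) >= 2:
        return "positive"        # two or more viral targets amplified
    if not viral and "RNaseP" in detected:
        return "negative"        # control amplified, no viral target
    return "inconclusive"        # e.g. single viral target, or failed control

print(call_result({"N1": 24.1, "N2": 25.0, "RNaseP": 28.3}))  # -> positive
print(call_result({"N1": None, "N2": None, "RNaseP": 29.5}))  # -> negative
```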
The standard CDC RT-PCR test takes about 3 h to perform and costs ∼$10 per test (Supplemental Table S1). Specialized reagents or equipment can lead to high per-test costs and may limit the number of tests that can be conducted, in some cases resulting in a lag of several days before a patient receives a diagnosis. The variety of approaches presented here span a wide range of costs and processing times, with several published protocols reaching results in less than 1 h (Fig. 3). Some investigators have found homemade solutions that drastically decrease the required reagent cost, allowing tests to be performed for just a few dollars (Supplemental Table S2). Others have proposed completely novel solutions that can cut the testing time to tens of minutes but may still require costly reagents to perform. While widespread testing will necessarily require high-throughput approaches, other tests may offer higher sensitivity for low-titer cases or rapid turnaround for point-of-care diagnosis. Recent ingenuity in COVID-19 nucleic-acid testing offers a wide range of solutions, and further innovation may be required to maximize testing accuracy while providing a low-cost and fast-turnaround solution.

FIGURE 3. An analysis of the total workflow time and calculated cost (in U.S. dollars) of published COVID-19 nucleic acid tests. Calculated costs are estimated from available online pricing for consumables and do not include labor or equipment. Protocols which required key reagents to be synthesized or created in a laboratory are not included but are likely to be even cheaper than commercially priced reagents. All raw data available in Supplemental Tables S1, S2.

SAMPLE LYSIS AND DIRECT ADDITION

Testing for the presence of SARS-CoV-2 viral RNA typically begins with the collection of a patient swab sample, which is stored and transported to a testing facility in viral transport medium (VTM). These samples are lysed, and viral RNA is typically purified using either RNA extraction columns or magnetic beads (Fig. 2; Centers for Disease Control and Prevention 2020). One advantage of RNA purification is that the viral RNA present in the more dilute swab sample can be concentrated and eluted in a buffer compatible with RT-PCR. However, in order to decrease reliance on commercial lysis buffers and viral RNA extraction kits and simplify COVID-19 testing, there has been great interest in finding alternative strategies or eliminating RNA purification altogether by adding patient swab samples directly to the RT-PCR reaction. Additionally, eliminating RNA purification can dramatically speed up the overall workflow time per test and may be an ideal solution for streamlining testing times (Fig. 4).

FIGURE 2. An overview of sample processing. Patient nasopharyngeal swabs are collected and transported for testing. Viral particles are inactivated and lysed by heat and/or lysis buffer addition. Swab sample is then added directly to amplification reactions, or RNA is purified from the sample and then amplified.

Swab samples must be lysed to release viral RNA into solution for purification and to neutralize the virus for safe handling. Many protocols use commercial reagents for lysis, including DNA/RNA Shield (Zymo Research), Buffer RLT (Qiagen), and MagNA Pure External Lysis Buffer (Roche). However, multiple researchers have recently found that, when compared to commercial solutions, homemade solutions containing 4M (Scallan et al. 2020; Sridhar et al. 2020) or 5M (Aitken et al. 2020) guanidinium thiocyanate work equally well to lyse samples and recover viral RNA after purification. However, these solutions contain strong denaturants and are therefore not compatible with addition directly into amplification reactions. Other laboratories have assessed lysis conditions that are compatible with direct addition in order to streamline sample preparation and reduce overall test time. Preliminary studies report that direct-to-test addition of unprocessed swab samples generally allows for SARS-CoV-2 detection but may decrease test sensitivity. Viral RNA stability and compatibility with downstream reactions will be heavily dependent on the buffer used for swab collection and transport. Arumugam and Wong have shown that RNA can be detected from nonreplicative recombinant virus particles (SeraCare AccuPlex) in VTM spiked directly into the RT-PCR master mix without an RNA extraction step (Arumugam and Wong 2020). Merindol et al. (2020) compared a few common swab collection buffers for compatibility with direct PCR addition. Swab samples stored in Hank's medium or saline solution and directly added to RT-PCR reactions amplified poorly using either the RealStar SARS-CoV-2 RT-PCR kit (Altona Diagnostics) or the Allplex 2019-nCoV RT-qPCR kit (Seegene) compared to purified RNA from the same swabs. Interestingly, however, viral RNA added directly from swabs stored in water or UTM (Remel) at 4°C showed equivalent RT-PCR amplification to RNA purified from the same swabs. In the presence of RNase inhibitor, viral RNA could be amplified with similar efficiency from swabs stored in water at 4°C for up to 5 d (Merindol et al. 2020). Many groups are further optimizing direct-to-test addition by heating and/or lysing swab samples prior to testing. In one study, direct addition of swab samples in viral transport media to the Luna Universal Probe One-Step RT-qPCR master mix (New England Biolabs) accurately identified 92% of 155 COVID-19 cases but reached the detection threshold four cycles later (corresponding to a 16-fold loss in detection of starting material) than a test using RNA extracted from a swab sample with the QIAamp Viral RNA Mini kit (Qiagen) (Bruce et al. 2020). This procedure could correctly identify cases across low, medium, or high SARS-CoV-2 RNA copy loads (as defined by cycles to detection from tests of the same samples after RNA purification). Heating the swab sample at 95°C for 10 min before direct-to-test addition improved detection of low copy load samples (Bruce et al. 2020).
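Because PCR approximately doubles the amplicon each cycle, a delay of ΔCt cycles corresponds to roughly a 2^ΔCt-fold loss of detectable starting material, which is how the cycle shifts quoted in this section translate into fold changes. A one-line sketch, assuming ideal doubling per cycle:

```python
def fold_change(delta_ct):
    # assumes ideal 2x amplification per cycle
    return 2.0 ** delta_ct

print(fold_change(4))    # 16.0  -> the 4-cycle delay / 16-fold loss above
print(fold_change(3.5))  # ~11.3 -> the 3.5-cycle delay in the next paragraph
print(fold_change(3))    # 8.0   -> the ~eightfold (3 Ct) loss cited later
```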
Another group reported that directly added samples were detected 3.5 cycles later than RNA isolated using the MagNA Pure kit (Roche), but heating the sample at 95°C for 5 min before direct-to-test addition resulted in detection only one cycle later, with 97.4% accuracy compared to tests using purified RNA (Fomsgaard and Rosenstierne 2020). However, Grant et al. (2020) found the opposite: heating direct-to-test samples in VTM at 95°C for 10 min delayed detection of viral RNA compared to directly added samples not heated prior to amplification. Additionally, they found that direct sample addition in VTM without heating to the TaqMan Fast Virus 1-Step Master Mix (Thermo Fisher) RT-qPCR reaction allowed detection 3.77 cycles earlier than the same test performed with RNA purified using the EZ1 Qiagen kit. Overall, their test using a direct, unheated sample had 98.8% diagnostic accuracy when compared to cartridge-based RNA purification and RT-qPCR using the Panther Fusion system (Grant et al. 2020). Intermediate inactivation temperatures seem to perform worse than high heat or no heating at all. One group reported that swab sample heat treatment at 75°C for 10 min prior to direct-to-test addition delayed detection by 6.1 cycles (Alcoba-Florez et al. 2020). Multiple groups have thus reported contradictory findings on the advantages of heating samples before direct addition into RT-PCR mixes. RNases present in the nasal swab are likely active even at high temperatures, and thus RNA degradation may be particularly sensitive to the temperature and buffer conditions of inactivation. Slightly more complex approaches use lysis buffers to aid RNA recovery and improve RT-qPCR test sensitivity. In one report, positive patient swab samples diluted 1:1 into Quick Extract DNA Extraction Solution (a buffer containing detergents and proteinase K), heat inactivated and directly added to the RT-PCR reaction mix were detected at the same amplification cycle as, or even slightly before, samples processed with the QIAamp Viral RNA Mini kit (Qiagen) (Ladha et al. 2020). Another group reported that swab samples added to Quick Extract DNA Extraction Solution were detected with equal sensitivity to column-purified RNA (Sentmanat et al. 2020). Finally, another study reported that direct addition of swab samples treated for 15 min with proteinase K yielded sensitivity comparable to the use of RNA isolated with the automated ELITe InGenius SP200 system (ELITech Group) (Marzinotto et al. 2020). The discrepancies between the sensitivities of direct-to-test addition procedures may be due to differences in the protocols and kits used for the RT-qPCR test and the isolation of control RNA, the types of lysis buffers, the heating parameters, and the varying viral RNA loads in the swab samples of each study. Despite these discrepancies, it appears that direct-to-test addition of a small volume of swab sample treated with lysis buffer or proteinase K allows for robust SARS-CoV-2 RNA detection. Direct-to-test addition of patient swab samples may prove useful in settings where there is a lack of RNA purification reagents or where time constraints render laborious RNA isolations infeasible. Further work is required to ascertain the optimal swab sample lysis, heating, and storage conditions prior to direct-to-test addition, as well as whether direct-to-test addition could be used in tests other than RT-qPCR.

RNA PURIFICATION

As uncovered by multiple groups, eliminating RNA isolation prior to RT-PCR altogether may be possible.
However, a dedicated RNA isolation step may improve detection sensitivity or be necessary to remove incompatible sample buffers prior to amplification for some protocols. In addition, the column-based kits used to purify RNA from the patient swab sample can occasionally lead to unintentional carryover of ethanol or retention of some RNA, which can be kit-specific. In our laboratory, we have found that the RNeasy Mini Kit (Qiagen) leads to an approximately eightfold (3 Ct) loss of synthesized SARS-CoV-2 viral N gene RNA after column purification. We found similar results with inactivated positive patient swab samples; Ct values were consistently lower when RNA was purified via isopropanol precipitation or using the Direct-zol RNA Miniprep Plus kit (Zymo Research) than with the RNeasy Mini Kit (Qiagen). However, we have not compared directly with the CDC-recommended Qiagen QIAamp Viral RNA kit (C Dugast-Darzacq, T Graham, GM Dailey, et al., unpubl.). Several recent papers have investigated alternative methods for RNA purification, including unique approaches as well as traditional laboratory techniques. Zhao et al. (2020) present a synthesis protocol for magnetic nanoparticles that can combine sample lysis and RNA binding in a single step. The polyamino-ester-with-carboxyl-groups-coated magnetic nanoparticles (pcMNPs) are also directly compatible with the RT-PCR reaction, greatly streamlining the protocol from lysis through RNA purification, and the pcMNPs can be synthesized on-site. Kalikiri et al. (2020) find that AMPure XP beads (Beckman Coulter) yield equal sensitivity to the NucliSENS easyMAG automated extraction platform (bioMérieux). Other commonly used laboratory reagents for RNA purification include TRIzol, which uses guanidinium thiocyanate and phenol-chloroform to extract RNA from cellular samples. Won and coworkers describe a complete workflow for COVID-19 testing which includes TRIzol extraction and isopropanol precipitation of the RNA from swab samples. The authors found no difference between TRIzol and the approved Qiagen QIAamp Viral RNA Mini Kit in RNA extraction efficiency from lentivirus-infected HEK293 cells, but the methods were not directly compared using SARS-CoV-2 RNA (Won et al. 2020). In our laboratory, isopropanol precipitation of synthesized SARS-CoV-2 viral N gene RNA resulted in almost no loss of RNA, with or without the presence of additional human RNA as a carrier (C Dugast-Darzacq, T Graham, GM Dailey, et al., unpubl.). Standard laboratory RNA purification methods offer an attractive alternative to commercial kits, as they generally use inexpensive, abundant materials. For clinical testing, however, these solutions may be difficult to scale to high-throughput pipelines and may require special handling of hazardous materials. It may be useful to assess where RNA extraction can be eliminated while maintaining the necessary sensitivity and accuracy of testing.

FIGURE 4. Examination of the total workflow for published COVID-19 testing methods. Each step of the workflow is shown with colored bars. Four example commercial RT-PCR kits are included for reference (blue) and were directly compared within a single publication. The CDC RT-PCR test is shown in red. (*) Sequencing typically takes 4-12 h but can vary significantly depending on library preparation and the platform used, and was not specifically stated in the cited protocols. Raw data available in Supplemental Table S1.
If eliminating RNA purification is not possible, however, these procedures could be useful as cheap, homemade solutions for small-scale testing operations.

RT-PCR

RT-PCR master mixes use a mixture of reverse transcriptase enzymes, such as the thermostable MMLV RT, and a DNA polymerase, like Taq. Primers that anneal specifically to the SARS-CoV-2 viral genome are included to prime amplification. The U.S. CDC protocol utilizes primers that target the viral N gene, while the China CDC uses primers matching both the N gene and the ORF1ab region, and the Charité (Germany) primers target the RdRp and E genes (for review, see Udugama et al. 2020). For detection of amplification, qPCR can be performed using intercalating dyes like SYBR Green. Because these dyes are nonspecific for DNA products, any amplification (specific or not) will lead to an increased fluorescent readout. Higher sequence specificity in the detection of amplicons can be achieved using TaqMan probes. These short oligonucleotides contain a 5′ fluorophore and 3′ quencher and anneal to sequences within the DNA template. Taq polymerase degrades the annealed probe through its 5′ to 3′ exonuclease activity and cleaves off the fluorophore, thereby releasing it from being quenched. This fluorescence is proportional to the number of amplified product molecules, is sequence-specific for the correct amplified product, and can be measured in real time on a qPCR machine (Fig. 5). RT-PCR has been deemed the "gold standard" for COVID-19 diagnosis because it has been shown to be very sensitive, accurately detecting down to just one molecule of viral RNA (Fig. 6). Multiple commercial master mixes exist that enable sensitive one-step RT-PCR. The original CDC protocol approved four commercial master mixes for the RT-PCR test, from Quantabio, Promega, and Thermo Fisher (Centers for Disease Control and Prevention 2020). However, published RT-PCR protocols have also successfully used one-step RT-PCR master mixes from a variety of companies including NEB, Applied Biosystems, Qiagen, Roche, Takara, and others, and a growing list of approved alternative commercial reagents can be found at the FDA EUA website (Alcoba-Florez et al. 2020; Chandler-Brown et al. 2020; Food and Drug Administration 2020; Kalikiri et al. 2020; Marzinotto et al. 2020; Merindol et al. 2020; Won et al. 2020; Xu et al. 2020; Zhao et al. 2020). Many commercial master mixes seem to function well in the detection of SARS-CoV-2, although a detailed side-by-side comparison of the numerous commercial reagents is lacking. Brown et al. (2020) compared four popular one-step RT-PCR kits (the Takara One Step PrimeScript III kit, the Qiagen QuantiFast Multiplex RT-PCR +R Master Mix, the Thermo Fisher TaqPath 1-Step RT-qPCR Master Mix, and the Thermo Fisher TaqMan Fast Virus 1-Step Master Mix) on 74 patient nose and/or throat swabs. Comparison of the four master mixes showed that three of the four performed optimally with the N2 primers for SARS-CoV-2 detection: the Takara, Qiagen, and TaqPath mixes. The best, however, appeared to be the Takara master mix, which was able to detect just a single viral genome copy using the N1 primers. Consistent with the Takara mix being the most sensitive, none of their patient samples that tested negative with the Takara mix tested positive with the Qiagen kit, whereas the reverse did not hold (Brown et al. 2020).
Additionally, in order to decrease reliance on a particular company to generate master mix reagents for testing, which in the course of the pandemic could experience supply chain disruptions or delays, at least one laboratory has developed a completely homemade, open-source master mix. Bhadra et al. have developed master mixes using the evolved reverse transcriptase/DNA polymerase RTX that are compatible with either dye-based or TaqMan qPCR. The RTX enzyme can be expressed in E. coli and purified using Ni-NTA agarose and heparin columns, and the master mix buffers can be made easily and cheaply in a laboratory. The authors demonstrated detection of as few as 100 molecules of in vitro transcribed SARS-CoV-2 N gene RNA, using either the RTX enzyme alone in a dye-based reaction or a mixture of RTX and Taq in a TaqMan reaction. TaqMan reactions with RTX and Taq showed Cq values comparable to the commercial TaqPath kit (Bhadra et al. 2020). Future studies should assess homemade master mixes using patient samples, to provide inexpensive, open-source options for testing. While a variety of commercial and laboratory options exist for RT-PCR master mixes, active enzymes typically require careful refrigeration for storage and transport. Xu et al. (2020) have demonstrated that the Takara RT-PCR mix maintains its activity after being freeze-dried and stored at room temperature for 28 d. Further innovation in homemade or room-temperature-stable reagents may improve testing capabilities in remote locations or at the point of care.

ISOTHERMAL AMPLIFICATION

A promising alternative to RT-PCR is isothermal amplification, which does not require thermocycling. Two isothermal techniques used for rapid and sensitive diagnostics are loop-mediated isothermal amplification (LAMP) and recombinase polymerase amplification (RPA) (Fig. 7). LAMP uses a strand-displacing DNA polymerase together with four specially designed primers containing regions of complementarity to six target sequences. The 3′ end of the forward inner primer (FIP) primes synthesis of an initial DNA strand, which is subsequently displaced by synthesis primed by the forward outer primer (FOP). A reverse-complementary sequence in the 5′ end of the FIP anneals to a downstream sequence in the displaced ssDNA strand, forming a loop. The same process repeats with the backward inner and outer primers (BIP and BOP) at the opposite end of the amplicon. Repeated rounds of priming and strand extension generate a mixture of stem-loop and "cauliflower" structured products. Because LAMP includes primers that anneal to six unique target regions, it is highly sequence-specific (Notomi et al. 2000). Release of hydrogen ions upon incorporation of dNTPs into the nascent DNA chain can be detected using colorimetric pH indicator dyes (Tanner et al. 2015).

FIGURE 6. The limit of detection for published tests, equivalent to the fewest number of molecules accurately assayed in a single reaction. For some spike-in controls, authors used viral DNA, plasmid DNA, or a pseudovirus instead of viral RNA (shown as open diamonds), which may have a different amplification efficiency than SARS-CoV-2 RNA and thus alter their calculated limit of detection. Raw data available in Supplemental Table S1.

FIGURE 7. Molecular overview of isothermal amplification techniques. LAMP uses specially designed nested primers with complementary regions that form hairpins to permit priming of subsequent rounds of amplification. RPA uses recombinase-catalyzed strand invasion to prime amplification. Colorimetric pH indicators can be used to detect hydrogen ion release during dNTP incorporation.
RT-LAMP has been validated for detection of a multitude of RNA viruses including influenza, Zika, Ebola, and MERS (for review, see Wong et al. 2018). A slightly more recent addition to the isothermal amplification toolkit is RPA. RPA uses a recombinase to catalyze strand invasion of a primer into dsDNA. Single-stranded binding proteins are included to stabilize the open duplex structure, and a strand-displacing DNA polymerase extends the primer (Piepenburg et al. 2006). Some groups have demonstrated very high sensitivity and specificity of target amplification by combining RPA and LAMP into a two-stage amplification protocol, termed RAMP. The outer LAMP primers can be used for RPA amplification and then combined with the additional LAMP primers for further amplification in a single tube or microfluidic device. The combined RAMP approach exhibits the extremely high specificity of LAMP, together with enhanced sensitivity from dual amplification, and a higher tolerance to inhibitors (Song et al. 2017). Song et al. (2017) demonstrated the huge potential of the RAMP approach for diagnostics by multiplexing 16 pathogenic targets including HIV-1, multiple strains of HPV, and ZIKV. These isothermal methods are relatively fast and can be read out colorimetrically, with a lateral-flow stick, or even with nanoparticle-based biosensors (Zhu et al. 2020), making them easy to use at home or at remote points of care. Several groups have now developed novel isothermal protocols for detection of SARS-CoV-2 RNA. Lu et al. (2020b) tested their LAMP-based detection method with spiked-in SARS-CoV-2 RNA and were able to detect a colorimetric change indicating a positive result after just 40 min of amplification, with sensitivity down to 30 viral RNA copies per reaction. They and others have demonstrated that LAMP detection of SARS-CoV-2 is specific by showing no cross-reactivity to other respiratory pathogens, including human coronavirus strains HCoV-OC43 and HCoV-229E (Lu et al. 2020b; Park et al. 2020). Zhang et al. have shown that their LAMP strategy gives results that match the standard RT-PCR test in COVID-19-positive patient samples, reporting 100% sensitivity and specificity. They also find that the LAMP protocol may be compatible with cell lysates, potentially eliminating the need for RNA purification from patient samples (Zhang et al. 2020a). Using 130 samples, Yan et al. were able to directly compare RT-PCR with RT-LAMP. The LAMP assay gave identical clinical diagnoses to the RT-PCR test, with similar sensitivity, and it was faster and easier to read out (Yan et al. 2020). Others have reported similar success, with LAMP amplification yielding 90%-100% sensitivity and 95%-99% specificity in patient samples, with improved accuracy for amplification of multiple gene targets (Mehmood Butt et al. 2020; Yang et al. 2020; Yu et al. 2020). A smaller cohort study found that their RT-LAMP test had a sensitivity of 80%, compared to consecutive RT-PCR swabs, which could be adequate clinically, they suggest, if repeated testing were used (Österdahl et al. 2020).
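Sensitivity and specificity figures like these reduce to simple confusion-matrix arithmetic (the formal definitions appear in the glossary at the end of this review). A small Python helper makes the calculation explicit; the counts in the example call are hypothetical, chosen only to show the arithmetic, not taken from any of the studies above.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy (%), computed from
    2x2 confusion-matrix counts as defined in the glossary."""
    return {
        "sensitivity": 100 * tp / (tp + fn),
        "specificity": 100 * tn / (fp + tn),
        "accuracy": 100 * (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical example: 95 of 100 infected samples detected, and
# 98 of 100 uninfected samples correctly called negative.
print(diagnostic_metrics(tp=95, fp=2, tn=98, fn=5))
```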
By combining two common isothermal techniques, LAMP and RPA, into a single-tube RAMP reaction, El-Tholoth and colleagues were able to improve detection 100-fold over RT-PCR in mimic patient samples, providing an early proof-of-concept for an extremely sensitive method that can detect down to just a few viral RNA copies, but that to date has not been tested on patient samples (El-Tholoth et al. 2020). From these early demonstrations, under optimized conditions isothermal amplification techniques can provide sensitivity and specificity equal to the RT-PCR test for SARS-CoV-2 detection. These methods allow for faster amplification, less specialized equipment, and easy readout. LAMP methods also benefit from the ability to multiplex targets in a single reaction and can be combined with other isothermal methods, like RPA in the RAMP technique, to increase test accuracy even more. These techniques may be particularly useful for rapid, point-of-care diagnoses or for remote clinical testing without the need for laboratory equipment.

CRISPR-BASED DETECTION

A unique group of Cas nucleases, including Cas12 and Cas13, was recently discovered to have promiscuous DNA or RNA cleavage activities (Gootenberg et al. 2017; Chen et al. 2018; Li et al. 2018), which have been exploited for nucleic acid detection. Multiple assays combining isothermal amplification and CRISPR have recently emerged as diagnostic tools for rapid detection of SARS-CoV-2 viral RNA (Fig. 8). Cas13a is a nonspecific RNase that remains inactive until it binds its programmed RNA target. It has been harnessed for sensitive DNA or RNA detection in a method termed SHERLOCK (Gootenberg et al. 2017). In SHERLOCK, the target RNA is first amplified by a combination of RT-RPA and T7 transcription. The amplified product RNA activates Cas13a, which in turn cleaves a reporter RNA, liberating a fluorescent dye from a quencher. This method consistently detects synthetic SARS-CoV-2 viral RNA in the range between 10 and 100 copies per µL of input, requires only a lateral flow dipstick for visual readout of the detection result, and can be completed in 40 min (Hou et al. 2020) or 57 min (Metsky et al. 2020; Zhang et al. 2020b) after the RNA extraction step. SHERLOCK (termed "CRISPR-nCoV" in Hou et al. 2020) also demonstrated its diagnostic potential by detecting SARS-CoV-2 RNA with 100% sensitivity in 52 patient samples (Hou et al. 2020). Cas12 is another member of the CRISPR-Cas effector family. It is an RNA-guided DNase that indiscriminately cleaves ssDNA upon binding its target sequence. In a method termed "DETECTR," Cas12a ssDNase activation is combined with isothermal amplification to achieve sensitive and specific DNA detection (Chen et al. 2018). Multiple groups have recently used DETECTR for SARS-CoV-2 RNA detection. Viral RNA is first converted to DNA and isothermally amplified. Specific target sequences in the amplified DNA activate Cas12a, which in turn cleaves a ssDNA reporter to unquench a fluorophore. Using RT-RPA for amplification in DETECTR, Lucia et al. (2020) detected 10 copies of SARS-CoV-2 RNA per µL of input within 60 min (after RNA sample preparation). Ding et al. improved the protocol by combining RT-RPA and CRISPR-based detection in a one-pot reaction incubated at a single temperature. This "All-In-One Dual CRISPR-Cas12a" (AIOD-CRISPR) assay detected as few as 4.6 copies of SARS-CoV-2 RNA per µL of input in 40 min (Ding et al. 2020). Similarly, Guo et al.
developed another single-tube, constant-temperature protocol ("CDetection"), in which they used recombinase-aided amplification (RAA) instead of RPA for nucleic acid amplification. They showed that Cas12b behaves similarly to Cas12a for ssDNA reporter cleavage and can achieve a detection limit of five copies/µL in 40-60 min (Guo et al. 2020). Moreover, Broughton et al. used LAMP instead of RPA in DETECTR and further reduced the testing time for SARS-CoV-2 RNA to 30-32 min while maintaining a low detection limit (10 copies/µL) (Broughton et al. 2020). In combination with fast isothermal amplification, CRISPR-based techniques can harness highly specific nucleases to achieve fast readouts and sensitivity down to a few viral RNA copies. CRISPR detection can be coupled to lateral flow readouts, which are an attractive option for easy, at-home testing scenarios.

FIGURE 8. Molecular overview of CRISPR detection of amplified products. Binding to specific target sequences in amplified RNA or DNA activates Cas nucleases, which cleave reporter molecules. Reporter cleavage can then be assayed using a lateral flow dipstick.

SEQUENCING FOR DIAGNOSIS

Sequencing-based detection methods provide the benefit of collecting base-pair-level information on patient strains, which allows for viral mutation tracing but comes at the cost of expensive sequencing platforms and lengthy sample processing times. However, several laboratories have investigated high-throughput approaches or portable, fast sequencing to use this technology as a diagnostic tool for COVID-19. Nanopore target sequencing (NTS) is an attractive option for clinical testing because it is fast, highly portable, and sensitive. Wang et al. have developed an NTS approach targeting 11 viral regions that is able to detect as few as 10 viral copies/mL with 1 h of sequencing. By relying on a sequencing-based approach, this group also demonstrated that viral genome mutations can be identified within the target regions, and that an additional panel of targets against common respiratory viruses can be included to detect co-infection. Additionally, commercial sequencing approaches have also been adapted for high-throughput SARS-CoV-2 testing. BillionToOne Inc. seeks to use the extensive national infrastructure for Sanger sequencing, which they propose could "unlock more than 1,000,000 tests per day in the US" (Chandler-Brown et al. 2020). BillionToOne uses a one-step RT-PCR mix to amplify viral RNA directly from swab samples, which are collected in viral transport medium rather than a custom lysis buffer. Sanger sequencing then proceeds with inclusion of a synthetic, shortened SARS-CoV-2 sequence as a spike-in control, allowing for careful quantitation of viral abundance down to ∼10 genomic equivalents (Chandler-Brown et al. 2020). While traditional sequencing approaches typically require substantial cost and specialization, repurposed portable or quantitative sequencing approaches may offer extremely accurate high-throughput diagnostics during the pandemic.

OTHER NATs ("THINKING OUTSIDE THE BOX")

Beyond the approaches described above, ingenious methods are being developed for widespread, at-home, or point-of-care COVID-19 diagnostics. Most isothermal amplification steps require incubation at elevated temperatures around 60°C. To facilitate isothermal amplification at remote testing facilities, González-González et al.
(2020) developed a 3D-printed water circulator that can act as a heat block for LAMP amplification and have demonstrated the ability to detect as few as 62 viral RNA molecules after 1 h of incubation. To make RT-PCR more accessible for remote testing, Wee et al. (2020) have demonstrated a rapid, extraction-free PCR protocol that can detect six SARS-CoV-2 RNA copies using a portable thermocycler. While samples collected from patients with symptoms or who have been hospitalized seem to present relatively high viral titers that are likely to be easier to detect (Wölfel et al. 2020), testing of asymptomatic patients or testing prior to quarantine release may require extremely sensitive tests. While specialized reagents and equipment are required, digital and digital-droplet PCR may allow for even more sensitive testing than RT-PCR. Lu et al. (2020a) report 96.3% accuracy for testing of clinical samples using digital PCR and were able to detect virus in four patient samples that were deemed negative by RT-PCR. Furthermore, digital droplet methods have been shown to be capable of detecting down to 0.4 viral RNA copies/µL in patient samples (Suo et al. 2020). Because digital PCR allows for more careful quantitation of viral RNA copy number over the course of the disease, this highly sensitive test may also be useful to evaluate treatment progress or assess patient release after quarantine. Particularly as testing becomes more widespread, testing of the general population and asymptomatic individuals may lead to a large number of negative samples and a huge increase in the demand for testing supplies. In an innovative effort to further conserve resources using existing testing methodologies, some groups have investigated pooling many patient samples to decrease the number of tests required for larger populations. Proposed pooling approaches can be adaptive, where samples are first pooled and tested, and positive pools are retested individually. This is a relatively simple solution that decreases overall testing resources used but may introduce several disadvantages, including longer wait times for results, since positive samples must be iteratively tested, and a slight loss in sensitivity from diluting positive patient samples with negative ones. Multiple groups have modeled patient pooling and proposed algorithms that optimize positive sample detection and testing efficiency (Noriega and Samore 2020; Sinnott-Armstrong et al. 2020). Some simple approaches, like pooling the rows and columns of a 96-well plate during testing, can increase efficiency four- to eightfold for low-prevalence populations (Sinnott-Armstrong et al. 2020); a simplified simulation of this row-and-column scheme is sketched below. Random pooling has also been shown to be useful for estimating disease prevalence and transmission within a local area (Hogan et al. 2020). More complicated pooling assignments and nonadaptive approaches have been proposed and may significantly increase efficiency and allow for single-iteration pooled testing, but may be more difficult to implement with common clinical workflows and robotic pipetting (Täufer 2020). The solution to widespread testing is likely to require an adaptive, multipronged approach. While pooling of samples may not be the most appropriate solution for very sensitive testing, pooling may drastically improve our ability to screen large populations while conserving limited testing resources.
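As a rough illustration of the row-and-column idea, the Monte Carlo sketch below simulates a simple version of 96-well plate pooling: test all 8 row pools and all 12 column pools, then individually retest every well sitting at a positive-row/positive-column intersection. This is a deliberately simplified decoding rule assumed for illustration, not Sinnott-Armstrong et al.'s exact procedure, and it ignores dilution effects on sensitivity.

```python
import random

def pooled_tests_per_plate(prevalence, rows=8, cols=12, trials=20000):
    """Monte Carlo estimate of tests consumed by row/column pooling:
    rows + cols pool tests, plus one confirmatory test for every
    well at a positive-row/positive-column intersection."""
    total = 0
    for _ in range(trials):
        plate = [[random.random() < prevalence for _ in range(cols)]
                 for _ in range(rows)]
        pos_rows = [r for r in range(rows) if any(plate[r])]
        pos_cols = [c for c in range(cols)
                    if any(plate[r][c] for r in range(rows))]
        total += rows + cols + len(pos_rows) * len(pos_cols)
    return total / trials

for p in (0.005, 0.01, 0.02, 0.05):
    t = pooled_tests_per_plate(p)
    print(f"prevalence {p:.1%}: ~{t:.1f} tests/plate, "
          f"{96 / t:.1f}x fewer than testing all 96 wells")
```

Under this naive rule the savings approach the 20-tests-per-plate floor (about 4.8-fold) at low prevalence and shrink as prevalence rises, which is consistent with pooling being most attractive for screening low-prevalence populations.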
Lastly, it is important to consider that enzyme-based tests are not always feasible in resource-limited scenarios. One potential alternative lies in the use of DNA "nanoswitch"-based tests that have been developed for Zika virus detection. Based on a DNA origami design, these nanoswitch DNA oligomers bind viral RNA and undergo a conformational change that can be visualized on an agarose gel. DNA nanoswitches targeting different species of RNA viruses can also be combined in one test, allowing for the detection of co-infections. Unfortunately, the Zika nanoswitch test requires ∼5.2 × 10⁵ Zika RNA genomes per test for reliable detection and is thus far less sensitive than RT-PCR. Because it is a gel-based method, the throughput of DNA nanoswitch tests is also severely limited. To avoid low-throughput gel detection, the Godin group developed a nanopore sensor capable of detecting these conformational changes that could detect as few as ∼500 target molecules (Beamish et al. 2019). Further innovation in DNA nanoswitch detection of SARS-CoV-2, together with developments in generating fluorescent or colorimetric outputs, could significantly improve the test's throughput and facilitate its use for COVID-19 detection in low-resource areas.

OUTLOOK

In response to the tremendous global toll of COVID-19, researchers have rapidly mobilized to investigate solutions for testing, diagnosis, and treatment. Preprint and published articles from the past several months describe a variety of options for rapid, affordable, sensitive, and high-throughput nucleic acid testing, which is currently the most reliable approach for early detection of SARS-CoV-2. To address the dire need for increased testing, researchers across disciplines have quickly compared widely available commercial products, proposed repurposing existing reagents and infrastructure, and created novel laboratory solutions to optimize the COVID-19 testing pipeline. Academic researchers at a few institutions around the world have published detailed blueprints for establishing local pop-up testing centers, and others are likely to follow (Aitken et al. 2020; Innovative Genomics Institute SARS-CoV-2 Testing Consortium and Doudna 2020; Sridhar et al. 2020). A combination of testing approaches may be the most efficient way to fill the current gaps in testing. We are hopeful that the explosion of creative and multifaceted approaches to COVID-19 nucleic acid testing will continue to seed solutions as society addresses the COVID-19 pandemic.

GLOSSARY

RT-PCR: reverse transcription polymerase chain reaction; amplification of RNA in a one-step reaction containing a reverse transcriptase enzyme, a DNA polymerase, and a specific primer complementary to a target region.

LAMP: loop-mediated isothermal amplification, which uses four primers that recognize six complementary regions within the target. Amplification occurs via a strand-displacing polymerase and elongation of primers with self-complementary regions that form hairpins, which prime further rounds of amplification and form large "cauliflower" structures of amplified products.

RPA: recombinase polymerase amplification, which uses a recombinase-primer complex that finds matches in the DNA or RNA template and enables strand exchange to form an open complex. Single-stranded binding proteins stabilize the open duplex and a strand-displacing DNA polymerase amplifies the template.

RAMP: a two-stage isothermal amplification technique combining a primary RPA reaction using the outer LAMP primers with a secondary LAMP reaction including the self-complementary internal primers.
Ct or Cq value: in qPCR, the amplification cycle at which the fluorescence curve exhibits the greatest curvature and exceeds the background fluorescence threshold.

Sensitivity (Baratloo et al. 2015): the ability of a test to detect a true positive = [true positives / (true positives + false negatives)] × 100.

Specificity (Baratloo et al. 2015): the ability of a test to detect a true negative = [true negatives / (false positives + true negatives)] × 100.

Accuracy (Baratloo et al. 2015): the ability of a test to differentiate true positive and negative results correctly from the total tests = [(true positives + true negatives) / (true positives + false positives + true negatives + false negatives)] × 100.

SUPPLEMENTAL MATERIAL

Supplemental material is available for this article.
Implementation of buprenorphine services in NYC syringe services programs: a qualitative process evaluation

Background: Syringe services programs (SSPs) hold promise for providing buprenorphine treatment access to people with opioid use disorder (OUD) who are reluctant to seek care elsewhere. In 2017, the New York City Department of Health and Mental Hygiene (DOHMH) provided funding and technical assistance to nine SSPs to develop "low-threshold" buprenorphine services as part of a multipronged initiative to lower opioid-related overdose rates. The aim of this study was to identify barriers to and facilitators of implementing SSP-based buprenorphine services.

Methods: We conducted 26 semi-structured qualitative interviews from April 2019 to November 2019 at eight SSPs in NYC that received funding and technical assistance from DOHMH. Interviews were conducted with three categories of staff: leadership (i.e., buprenorphine program management or leadership, eight interviews), staff (i.e., buprenorphine coordinators or other staff, eleven interviews), and buprenorphine providers (six interviews). We identified themes related to barriers and facilitators to program implementation using thematic analysis. We make recommendations for implementation based on our findings.

Results: Programs differed in their stage of development, location of services provided, and provider type, availability, and practices. Barriers to providing buprenorphine services at SSPs included gaps in staff knowledge and comfort communicating with participants about buprenorphine, difficulty hiring buprenorphine providers, managing tension between harm reduction and traditional OUD treatment philosophies, and financial constraints. Challenges also arose from serving a population with unmet psychosocial needs. Implementation facilitators included technical assistance from DOHMH, designated buprenorphine coordinators, offering other supportive services to participants, and telehealth to bridge gaps in provider availability. Key recommendations include: (1) health departments should provide support for SSPs in training staff, building health service infrastructure, and developing policies and procedures; (2) SSPs should designate a buprenorphine coordinator and ensure regular training on buprenorphine for frontline staff; and (3) buprenorphine providers should be selected or supported to use a harm reduction approach to buprenorphine treatment.

Conclusions: Despite encountering challenges, SSPs implemented buprenorphine services outside of conventional OUD treatment settings. Our findings have implications for health departments, SSPs, and other community organizations implementing buprenorphine services. Expansion of low-threshold buprenorphine services is a promising strategy to address the opioid overdose epidemic.

Background

Opioid overdose deaths in New York City (NYC) more than doubled between 2010 and 2019, despite the availability of evidence-based opioid use disorder (OUD) treatment [1]. Buprenorphine is a safe and effective medication treatment for OUD that reduces non-prescribed opioid use [2], HIV risk behaviors [3], and opioid overdose mortality [4]. However, buprenorphine treatment is underutilized, in part because of barriers to treatment, such as limited provider availability or program practices that are burdensome for patients [5]. Low-threshold buprenorphine services seek to increase access to and acceptability of buprenorphine treatment for people with OUD.
Low-threshold buprenorphine services are characterized by: (1) same-day treatment entry, (2) a harm reduction orientation, (3) flexibility, and (4) availability in non-traditional settings [6]. A harm reduction orientation refers to non-judgmental provision of services and respecting patients' goals, even if they do not intend to stop all drug use [7]. Increasing access to low-threshold buprenorphine services may help reduce opioid-related harms. While low-threshold buprenorphine services can be provided in traditional medical settings [8], syringe services programs (SSPs) are ideally positioned to reach people with OUD who are at risk for overdose and marginalized from other sources of care [9,10]. Historically, SSPs have increased access to other health services, such as STI testing, HIV and hepatitis C virus testing, and naloxone for opioid overdose reversal. In recent years, some SSPs in the USA have offered onsite buprenorphine services. Programs described in the literature have varied by location of services offered (e.g., mobile site, drop-in center) and treatment philosophy (e.g., requiring abstinence from illicit opioids, not conducting urine toxicology testing) [11-15]. Promising treatment outcomes have been reported, but to our knowledge, implementation of buprenorphine services in SSPs has not been studied systematically. In NYC, 15 SSPs are currently funded by the NYC Department of Health and Mental Hygiene (DOHMH) to provide harm reduction services to approximately 16,000 participants per year. In 2017, DOHMH launched an initiative to support SSPs in developing low-threshold buprenorphine services as part of a multipronged initiative to reduce opioid-related overdoses. Our objectives were to provide the first qualitative evaluation of buprenorphine service implementation at SSPs. First, we categorized program characteristics. Then, we sought to identify barriers to and facilitators of implementing SSP-based buprenorphine services. We use these data to make recommendations for implementation of SSP-based buprenorphine services.

Program overview

In 2017, DOHMH funded nine SSPs to develop buprenorphine services. Funding generally covered staff time, including consultants and subcontractors, and program supplies and equipment. Awards to programs ranged from $87,000 to $218,000 yearly (USD). DOHMH additionally provided technical assistance to SSPs, which consisted of informational sessions on regulatory compliance (a two-part training) and quarterly learning communities (five sessions) over 15 months (December 2018-February 2020). Program managers and at least one other staff member involved in buprenorphine services were required to attend compliance and learning community sessions. Topics covered are listed in Box 1. DOHMH also funded a harm reduction organization to conduct annual staff trainings to prepare staff to counsel SSP participants on treatment options. Programs were required to develop policies and procedures for buprenorphine services, for which DOHMH offered funding to hire consultants. Programs could request funding for an electronic health record (EHR) system (software, installation fees, staff training, and a certain number of user licenses) in their budget proposals. Programs were encouraged to request individualized DOHMH technical assistance for any challenges they encountered in program development. Clinical mentoring was available from a harm reduction-experienced buprenorphine provider.
All SSPs reported on the buprenorphine services they provided using an existing online reporting system developed by DOHMH for SSPs to report on other harm reduction services.

Evaluation of implementation

This evaluation was deemed to not be human subjects research by the DOHMH Institutional Review Board. Two academic physicians (AJ and AF) unaffiliated with DOHMH evaluated SSPs' experiences with implementing low-threshold buprenorphine services, with support from DOHMH. AJ completed a total of 26 semi-structured qualitative interviews from April 2019 to November 2019 at eight of the nine SSPs in NYC that received funding. One SSP was excluded because it developed buprenorphine services in collaboration with an academic medical center, so its experiences were not generalizable to the eight other SSPs. Interviews were conducted with three categories of staff: leadership (i.e., buprenorphine program management or leadership, eight interviews), staff (i.e., buprenorphine coordinators or other staff, eleven interviews), and buprenorphine providers (six interviews). In the staff category, all but one SSP staff member were employed by the SSP prior to development of buprenorphine services. SSP leadership selected staff and provider interviewees after researchers contacted them and explained the objectives of the study. Semi-structured interviews followed a standard script that was developed in collaboration with DOHMH. Interview guides were tailored to the program role of the interviewee (Table 1). Twenty-two of the 26 interviews were conducted in person, and four were conducted by phone. All interviews were audio-recorded, transcribed, and uploaded to Dedoose, a web-based tool for qualitative analysis. An analytic team comprising two academic physicians (AJ and AF) and one qualitative researcher at DOHMH (AH) developed and iteratively refined a codebook consistent with study objectives. We used thematic analysis first to categorize program characteristics and second to identify overarching themes related to barriers and facilitators to program implementation. Transcripts were then coded by one researcher (AJ). Findings from interviews are summarized by describing prominent themes. We describe program characteristics without quotations for brevity. Data on barriers and facilitators of program implementation are supplemented with direct quotations from interviewees that provide context or highlight critical points. We provide recommendations for program implementation based on the findings from this study. Throughout the manuscript, we refer to people who use SSP services as "SSP participants".

Program characteristics

Stage of buprenorphine development: Of the eight SSPs included in the evaluation, five developed new buprenorphine programs that were active at the time of interview, one SSP had a buprenorphine program established prior to DOHMH funding, and two SSPs initiated the development of new buprenorphine programs but had to stop due to setbacks (one lost its office space; the other lost its provider). Characteristics of the six active buprenorphine programs are highlighted in Table 2 and additional details are provided below.

Buprenorphine providers

Buprenorphine providers were nurse practitioners, physician assistants, and physicians (family practice, psychiatry, and general internal medicine), employed part-time by the SSP or full-time by the organization's medical clinic. One SSP partnered with an addiction medicine fellowship program to host a rotating addiction medicine provider-in-training.
SSPs employed between one and three buprenorphine providers, contracted for a set number of hours per week.

Electronic documentation and prescribing

Programs varied in their use of electronic documentation. Organizations that had medical clinics used existing EHRs. At one program, the provider used their own cloud-based EHR, which they also used in their private practice. Two programs used paper charts. One program used the SSP's existing data management software. All programs used electronic prescribing, in compliance with state and federal regulations.

Buprenorphine coordinators

The six active programs all employed buprenorphine coordinators, who were nurses, medical assistants, social workers, or other SSP staff with informal training in buprenorphine treatment. Buprenorphine coordinator roles included providing education and orientation to buprenorphine services; conducting eligibility screening; monitoring participant engagement; providing navigation services; coordinating with buprenorphine providers; supervising buprenorphine peer specialists and navigators; and SSP duties unrelated to buprenorphine treatment. Examples of navigation services included: making appointment reminder calls; contacting participants who were due for refills; helping with pharmacies and insurance authorizations; and providing psychosocial support (text messaging and phone calls to support participants in taking their medication and abstaining from non-prescribed opioids).

Participant engagement strategy

SSP staff promoted buprenorphine services using fliers, brochures, and conversations with existing participants at office and mobile sites. Three programs formally involved peers (i.e., SSP participants with lived experience of OUD) as buprenorphine champions or specialists to engage SSP participants who expressed interest in buprenorphine. These peers conducted community outreach at mobile sites and served as point-persons for other SSP staff members who identified SSP participants interested in buprenorphine. Interested participants were then connected with buprenorphine coordinators for more in-depth counseling and an introduction to buprenorphine services.

Buprenorphine treatment policies and procedures

Buprenorphine can displace other opioids from opioid receptors and cause severe withdrawal symptoms if taken too soon; thus, participants must wait until they are in moderate opioid withdrawal to take the first buprenorphine dose. Most programs used a "home induction" approach, in which the provider instructed participants when and how to take the first dose of buprenorphine at home [16]. Some programs offered the option of "office-based induction," and one program required it; in this approach, participants would take the first buprenorphine dose at the SSP office so that a provider could monitor their level of withdrawal before and after starting buprenorphine. Three of the six active programs reported being able to consistently offer same-day treatment. The other programs did not, either because of lack of provider availability (one program), lengthy intakes during the first visit (one program), or requiring participants to be in withdrawal to receive a buprenorphine prescription (one program). Generally, participants were required to follow up with the provider weekly or every two weeks at the beginning of treatment and then were seen monthly after stabilizing. None of the programs required participants to participate in additional counseling beyond that which was routinely provided by providers.
Urine toxicology tests

Programs performed urine toxicology testing at different frequencies, ranging from every buprenorphine visit to random intervals. Use of urine toxicology testing varied depending on the provider. No provider reported routinely stopping treatment for opioid-positive urine toxicology tests. Some providers increased the frequency of visits or spoke with participants about alternative treatments if they had multiple opioid-positive tests. All buprenorphine providers required that participants have buprenorphine-positive urine toxicology tests to continue treatment.

Barriers to implementation

There were numerous barriers to providing buprenorphine services at SSPs. These included staff knowledge and skills gaps, difficulty hiring and retaining buprenorphine providers, managing tension between harm reduction and traditional OUD treatment philosophies, and financial constraints. Challenges also arose from serving a population with unmet psychosocial needs.

SSP leadership lacked experience implementing medical services

Program staff members reported that their leadership needed additional guidance at the beginning of implementation, particularly at sites that did not have existing clinical infrastructure. Although leaders were experienced in managing nonprofits, many lacked experience building health service programs. Specifically, leaders lacked requisite knowledge regarding provider recruitment and contracting, malpractice insurance requirements, creating clinical policies and procedures, regulatory requirements, and electronic health records.

Medical provider challenges

Provider-related challenges fell into two main categories: (1) hiring buprenorphine providers and (2) comfort with harm reduction or "low-threshold" treatment principles.

(1) Hiring buprenorphine providers: Programs found it challenging to identify buprenorphine providers who were experienced with buprenorphine treatment and willing to work part-time and in a harm reduction context. Covering malpractice insurance was prohibitively expensive for SSPs, and finding buprenorphine providers who had their own malpractice insurance was difficult, limiting the pool of potential candidates. Programs posted job listings online and asked personal or professional connections to advertise positions. Programs affiliated with medical clinics benefitted from established clinician recruiting teams. SSPs not affiliated with medical clinics hired buprenorphine providers for one to twelve hours per week, due to financial constraints, which was another challenge. However, finding the right provider was difficult: "We really don't have a provider that really understands the population." -Program Coordinator (Program 2).

(2) Comfort with harm reduction or "low-threshold" treatment principles: Programs had difficulty finding harm reduction-oriented buprenorphine providers, and few buprenorphine providers had previous experience working in harm reduction settings. Some buprenorphine providers expressed concerns about their participants' continued opioid use. More than one provider was reluctant to provide buprenorphine prescriptions to participants who were also taking benzodiazepines. Individual buprenorphine providers had different practices around continuing to prescribe buprenorphine to participants who missed appointments.
Buprenorphine providers also expressed concerns about their legal liability and risks to participants: "I told you I'm a little bit of a control freak… And I'm like that with the buprenorphine because it is a controlled substance… And number one, I don't want to get myself in trouble… and number two, I also don't want to be so lackadaisical that someone else could hurt themselves… I'm responsible… I'm not giving it to you for you to hurt yourself." -Buprenorphine provider (Program 4).

Harm reduction staff did not always agree with provider practices that conflicted with harm reduction principles or deviated from their understanding of clinical guidelines, but they were uncomfortable communicating this to buprenorphine providers: "It's a little difficult as to how we manage because we don't want to disrespect the doctor." -Program Manager (Program 2).

Some staff members suggested that DOHMH should train buprenorphine providers in harm reduction principles, as they felt they had limited authority to give buprenorphine providers feedback on their practice. The state department of health offered a learning community for SSP buprenorphine providers, but attendance was voluntary, and buprenorphine providers often were unable to attend due to conflicting clinical schedules. When programs were able to find a harm reduction-oriented provider, this was a major facilitator of program implementation. Harm reduction-oriented buprenorphine providers were able to engage with SSP participants and work effectively in non-traditional settings: "And then we had [redacted], who's a wonderful fit. She was with us I think for six months, she was really great. She was the one that was out in the mobile unit, was able to engage a lot of people into the program… She's a harm reductionist, like she understood opioid use disorder in a way… that most prescribers that I've talked to have not understood it." -Program Manager (Program 3).

Differences between harm reduction and traditional OUD treatment philosophy

SSPs historically have not offered OUD treatment, and some programs noticed philosophical differences between traditional, abstinence-based treatment and harm reduction approaches. Program leadership discussed concerns that offering buprenorphine (bupe) services would imply that the organization expected participants to stop non-prescribed opioid use. "What has been the biggest challenge with the program to date?" (Interviewer) "Well, it's about moving from not offering bupe into making it widely available without sending the message that you are being abstinence based… But, when we talk to our clients (we) say this is an option… it all depends on how this relates to your life and to the things that you want to do." -Leadership (Program 5).

Staff members also found it difficult to navigate their roles as harm reductionists while helping people engage in buprenorphine services. One staff member spoke about how offering buprenorphine treatment changed their expectations for participants, leading to disappointment if participants resumed using non-prescribed opioids, which typically would be understood differently from a harm reduction perspective: "I think it's sometimes, it's sometimes knowing that someone is going in the path that they want and all of a sudden, (they have) a big relapse.
So that, emotionally for the harm reduction team, as much as they want to keep the philosophy, it just really bothers the team… So that's a challenge, that it's hard to see, but because you're getting to the same level of the clients and you're not being pushy about it, but ask(ing) them what they want --right, it gets a little bit more frustrating." -Program Manager (Program 6).

Staff knowledge and comfort communicating with participants about buprenorphine

Interviewees reported challenges with staff knowledge about buprenorphine at the beginning of program implementation. The annual staff training provided was perceived to be geared toward a medical audience, which was too technical for frontline staff. Even staff members closely involved in the program primarily learned about buprenorphine informally. For example, some staff members had personal experience with buprenorphine treatment, and others learned on the job from working closely with a provider or another buprenorphine program staff member. Programs also identified a need for refresher trainings for staff. Frontline staff desired training to help them communicate quickly and effectively with SSP participants about buprenorphine.

SSP leadership and staff perceived participant challenges in the following categories: (1) unique characteristics; (2) unmet service needs; and (3) participants' prior negative experiences with buprenorphine.

(1) Unique characteristics: Some programs observed that unique characteristics of their participants tempered interest in buprenorphine services. One program served a young population, whom they perceived as lacking interest in OUD treatment. Another program was located near a methadone program, and most of its SSP participants were already enrolled in methadone treatment. Programs serving populations without stable housing noted the unique challenges of buprenorphine in this population: "… they knew that buprenorphine was something that was mostly prescribed to specific populations meaning, you know, white America that were fully housed… and it was really, they didn't find it to really be like for them… so if I start this… I get a prescription where do I keep it, where do I store it, where do I put it. For clients who are chronically homeless that… certainly becomes a challenge…" -Leadership (Program 5).

To help address challenges with storage, some participants received small quantities of medication and returned multiple times per week for renewal of prescriptions. Other participants found pill-boxing buprenorphine helpful to improve their adherence.

(2) Unmet service needs: A common theme was that buprenorphine alone did not meet all of participants' needs. Staff perceived that participants required supportive services related to basic needs and buprenorphine to be successful in treatment. Services identified included peer navigation services, vocational training, and housing: "We need other resources, viable resources that we can present to the clients for them to be adherent and stable in their life. I mean… some form of housing vouchers and even like some clothing, meals… employment, they just need realistic options -I think like right now, they don't really see a way forward… Okay, I'm getting this medication and stuff, I'm taking Suboxone, but these other things in my life ain't getting right… I think they need something to see in the future and I don't think they're really seeing it." -Buprenorphine Coordinator (Program 4).
(3) History of negative experiences with buprenorphine: Many SSP participants reported having had past experiences of precipitated withdrawal when taking buprenorphine and were reluctant to try it again. In response, staff attempted to dispel misinformation about buprenorphine and counseled participants on how to avoid precipitated withdrawal when starting buprenorphine.

Financial constraints

The primary financial constraint for programs was hiring buprenorphine providers. Most programs only had funding to hire a medical provider for a limited number of hours per week and could not afford buprenorphine providers without their own malpractice insurance coverage. Lack of funding also made it difficult to retain buprenorphine providers and mental health professionals in some organizations. SSPs not associated with medical clinics reported that electronic health records were unaffordable. SSPs not affiliated with a medical clinic were also unable to bill insurance for medical services and thus relied exclusively on grant funding to sustain the program. Sustainability was particularly difficult for programs operating out of mobile vehicles: the upkeep and cost of repairing mobile vehicles was a barrier to sustainability, and finding buprenorphine providers who were interested and skilled in working in a mobile setting was also challenging.

Provider model

SSPs that were part of organizations with medical clinics had the greatest capacity to provide regular services and same-day buprenorphine treatment. Other programs were able to successfully contract with part-time buprenorphine providers when these providers had their own malpractice insurance (either independently or through another organization) and were willing to extend their availability via telehealth. Provision of remote services via telehealth helped bridge gaps when in-person hours were unavailable. Two programs arranged for telephonic follow-up if participants came to the SSP when the provider was not available in person. One program compensated their provider (using grant funding) for an additional 1.5 hours per week for telehealth visits to attend to participants who had been unable to attend in-person appointments and to facilitate prescription refills. Buprenorphine coordinators were crucial to maintaining continuity of care in programs with limited provider hours.

Technical assistance from DOHMH

Technical assistance from DOHMH was key in several areas, particularly in developing policies and procedures. At the beginning of the initiative, as part of the funding requirements, programs were asked to create policies and procedures, which included protocols for starting buprenorphine, follow-up intervals for participants, and laboratory testing tailored to their organization and participants. However, many programs struggled, having little experience creating clinical protocols. Two SSPs hired consultants using funds provided by DOHMH, but other programs could not identify consultants with the necessary expertise. DOHMH later provided templates and individualized assistance to SSPs to develop their own policies and procedures, a key facilitator for programs that did not hire consultants. DOHMH also assigned a single staff member as the point person to answer questions that arose during the implementation process. This point person assisted programs with a range of challenges, including finding buprenorphine providers and addressing medicolegal concerns (legal liability associated with providing clinical services).
"And [DOHMH staff member] was very helpful and responsive… it was helpful to have conversations because we would identify and then look at issues that hadn't been thought of in advance… I was focused to some extent on… risk management for the organization, right; making sure that we were not going to have the state health department… breathing down our necks because we were providing services in some way, that, you know, was considered too broad… And then there were questions around, you know, insurance and whose insurance covered what, if it was under the individual, their malpractice. Policy and procedures, questions, data ques-tions…" -Leadership (Program 1). Dedicated buprenorphine coordinators DOHMH encouraged SSPs to designate a dedicated buprenorphine coordinator. Programs who followed this advice reported that it was a facilitator of program success. Buprenorphine coordinators gained participants' trust, perhaps more easily than buprenorphine providers: "So for the most part, our clients are kind of honest in telling us things… And I told them like if you're using, you know, I ain't going to stop you from getting prescribed… I mean, that's not what I'm here for.… How could I help you maintain your adherence to Suboxone and stop you from using-and some like just need to talk. " -Buprenorphine Coordinator (Program 3). At one program, coordinators collaborated closely with buprenorphine providers to identify ways to support participants: "…Me and the providers got together and we started identifying clients that were high risk of failure or at risk of failure, for whatever variety of reasons, and then we'll come in, collaborate with the doctor and the client at the same time -and work out a plan as to okay, this is how we can work this client through this part of his life to become stable on Suboxone. " -Buprenorphine Coordinator (Program 3). Robust participant support Several SSPs offered more support services than typically can be provided in a doctor's office. Buprenorphine coordinators and peers provided a variety of navigation and support services. At one SSP, peer navigators (supervised by the buprenorphine coordinator) accompanied participants to healthcare appointments, conducted home visits, and, when needed, delivered MetroCards the day before appointments. The following quote details some of the auxiliary supports offered at another SSP: "…And we do everything possible, calling insurances, walking to the pharmacy -so like every step of the way -we make sure you get [buprenorphine] and nothing happens in between from the van to the pharmacy… We have that urgency like you're here now, we're getting this for you now. " -Buprenorphine Coordinator (Program 2). Relationship with pharmacy Establishing a relationship with a local pharmacy able to stock and dispense buprenorphine was a key facilitator for four programs. Staff could be confident that buprenorphine would be in stock (including a variety of strengths and formulations, depending on what participants' insurance covered), participants would be treated respectfully, and pharmacies would help troubleshoot insurance problems. Pharmacies affiliated with federally qualified health centers were able to provide discounted medication through the 340B Drug Pricing program [17]. One pharmacy delivered prescriptions directly to the SSP for onsite buprenorphine initiation. 
"Well, I've been fortunate that the pharmacy we deal with is actually pretty good and -with the population that we serve, you know, there's always that that, that stigma… and they have been looked at dif-ferently, not him (pharmacist)… He greets them, he speaks to them like folks. " -Buprenorphine Coordinator (Program 6). Recommendations for implementation of SSP buprenorphine services Taken in total, these interviews provide key lessons learned for implementing low-threshold buprenorphine services at SSPs. Below we summarize our recommendations for key stakeholders (health departments, SSPs, and buprenorphine providers) based on the barriers and facilitators we identified in this report (Table 3). Discussion This study aimed to identify barriers to and facilitators of implementation of SSP-based low-threshold buprenorphine services and make recommendations for implementation. We found that most programs successfully implemented at least some buprenorphine services despite experiencing challenges related to the novelty of providing buprenorphine services onsite and finding buprenorphine providers. Programs with pre-existing clinical infrastructure had many advantages in implementing and sustaining buprenorphine services. Many SSPs throughout the USA do not have this advantage Table 3 Recommendations for stakeholders in SSP buprenorphine services implementation Health departments Provide robust support for: 1) Building clinical infrastructure (e.g., health record, billing systems) 2) Developing policies and procedures 3) Addressing medicolegal concerns (e.g., malpractice insurance, legal liability associated with providing clinical services) 4) Selecting and training buprenorphine providers in harm reduction principles 5) Training frontline SSP staff to counsel participants about buprenorphine Designate a point-person who can provide individualized technical assistance to SSPs SSPs Train buprenorphine providers in harm reduction principles and facilitate a system for staff to safely provide feedback on practices Buprenorphine providers Past experience or dedicated time for training in: 1) Low-threshold treatment principles and practices 2) Harm reduction principles and practices Work collaboratively with harm reduction staff, particularly: 1) Soliciting and incorporating feedback from team members 2) Identifying and addressing client goals and basic needs and would benefit from support from public health agencies for developing clinical infrastructure, selecting and training providers, and training staff. Overall, SSPs are promising sites to expand access to low-threshold buprenorphine services. At SSPs, buprenorphine providers are generally not onsite full-time, therefore, having dedicated staff who can provide continuity is crucial. As such, we recommend having a dedicated buprenorphine coordinator to facilitate program implementation and ongoing management. This has been demonstrated in HIV treatment settings and is a key component of the "Massachusetts Model" of office-based buprenorphine treatment [18,19]. Similarly, other programs in low-threshold settings have used nurse care managers, in which nurses play central roles in completing initial assessments, counseling participants about initiation procedures, conducting follow-up visits, obtaining and discussing urine toxicology results, and discussing dose changes [20,21]. Maximizing collaboration between buprenorphine providers and other SSP staff members is particularly important for low-threshold settings. 
Other program characteristics that differed between sites may also facilitate implementation. SSPs were able to develop successful programs within drop-in centers, mobile units, or in partnership with established community health centers. However, not all programs were able to hire a harm reduction-oriented provider, which was an essential component of successful programs. Programs also differed in their involvement of peers. Few programs formally involved peers in buprenorphine services. When they were formally involved, peers served as participant navigators, provided other supportive services, and played critical roles in engaging participants. Training peers in buprenorphine and involving them in implementation of buprenorphine services could be an important strategy to improve the reach of buprenorphine services. Establishing successful SSP-based buprenorphine services will also require confronting philosophical differences between OUD treatment and harm reduction. Heller and colleagues described these differences in reference to implementing HIV care at SSPs, highlighting that traditional medical models are hierarchical, center around physician expertise, and expect patients to be compliant with prescribed treatment plans [22]. Our finding that harm reduction staff expressed discomfort in providing feedback to buprenorphine providers may reflect this hierarchy. Harm reduction models emphasize inclusivity, collaborative decision-making, and valuing small changes. Medical practice has begun to embrace more patient-centered approaches [23], but as exemplified by the provider who commented, "I'm not giving it to you for you to hurt yourself," some clinicians may view their role in making prescribing decisions less collaboratively. Specific to buprenorphine treatment, accepting patient-centered treatment goals, including managing and reducing opioid use as opposed to stopping non-prescribed opioid use completely, could lead to better collaboration. Accordingly, buprenorphine providers can and should be trained in harm reduction principles [8]. Giving buprenorphine providers clear guidance about what prescribing practices are allowable could assuage concerns about legal liability. For example, buprenorphine providers expressed concerns about prescribing buprenorphine to SSP participants who took benzodiazepines; however, in 2017, the US Food and Drug Administration provided guidance that withholding buprenorphine from patients who use benzodiazepines or other sedatives could increase risk due to untreated OUD [24]. Changing medical culture to embrace harm reduction will require training and feedback, both of which could be provided by provider champions who are trusted messengers [25]. Infusing traditional OUD treatment with harm reduction principles could both boost program engagement and protect participant safety. Implementing buprenorphine services at SSPs also requires additional attention to financial sustainability. Programs were funded by a large city health department as part of a major multi-sector strategy to reduce overdose deaths. Significant financial support is needed to hire buprenorphine providers and pay for malpractice insurance. Innovations in the malpractice market are necessary to make contracting with individual buprenorphine providers more feasible for SSPs.
Until then, in places where there are multiple SSPs or community organizations that wish to implement buprenorphine services, organizations may be able to partner with a medical clinic and provide funding for it to lend a part-time buprenorphine provider. Alternatively, health departments could employ buprenorphine providers to work in SSPs. In some states, SSPs may be able to bill for medical services, including buprenorphine services. Health departments and SSPs should explore whether SSP billing for buprenorphine services would create a viable revenue stream and increase program sustainability, or whether the start-up and staffing costs for billing would be too high. In some states, policy changes may be required to allow SSPs to bill for health services. Buprenorphine treatment is highly cost-effective to society due to reductions in patients' use of emergency health services and criminal-legal involvement [26]. Thus, adequately funding programs could be a wise investment for communities with high levels of opioid-related harms.

Strengths of this study

Interviewing individuals from multiple programs at different stages of development provided a diversity of models and perspectives on barriers and facilitators throughout the implementation process. Interviews were conducted with individuals in varying roles at the SSPs, including buprenorphine providers, leadership, buprenorphine coordinators, and other SSP staff. Finally, members of the study team were from outside DOHMH, reducing some potential biases in the study.

Limitations

We used strictly qualitative methods, so data were not collected systematically on process measures such as the number of staff trainings SSPs held or the number of SSP participants approached about buprenorphine treatment. The study was conducted up to two years after implementation, introducing recall bias and reducing the opportunity to act on program feedback in a timely manner. The study interviewed SSP staff but not participants, so it may have missed important perspectives of those most impacted. Finally, we only interviewed SSP stakeholders, not DOHMH staff, so the perspective of the funders was not formally examined.

Future directions

Our finding that offering buprenorphine services may change SSP participants' and staff members' perceptions of the SSP's harm reduction mission deserves additional investigation. While we did not examine participant perspectives in this study, some SSPs reported concern that participants would question the organization's commitment to harm reduction after they started offering buprenorphine services. Staff members also reported shifts in their expectations for participants who engaged in treatment, expressing hopes for consistent adherence to buprenorphine, abstinence from non-prescribed opioids, and greater stability in participants' lives. Some staff were concerned that such a change in expectations would compromise their non-judgmental stance toward a participant's substance use. It is important to support staff and organizations in exploring their understanding and practice of harm reduction, and an evolving understanding of harm reduction principles and OUD treatment applied in new contexts. Lastly, this study examined implementation of low-threshold buprenorphine services, but understanding SSP participants' experiences with such services will be an important area of future study.
Conclusions Despite encountering challenges, eight SSPs in NYC have implemented buprenorphine services with DOHMH support, serving a population at risk for opioid-related harms that may be reluctant to seek treatment elsewhere. Lessons learned from this study can be used to support SSPs and other community organizations in developing and improving buprenorphine services. Over time, SSPs have adapted to community needs in providing sterile syringes, distributing naloxone, and now improving access to lifesaving OUD treatment. SSPs are valuable community resources that improve the health of people who use drugs.
2021-10-23T15:17:44.872Z
2021-10-21T00:00:00.000
{ "year": 2022, "sha1": "b92ce3e436f4e12ea0edd6ac90527e0088fad89c", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "428456db2fabfde278c83147be99c385d94a9b81", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
252916478
pes2o/s2orc
v3-fos-license
Immunosensitivity and specificity of insulinoma-associated protein 1 (INSM1) for neuroendocrine neoplasms of the uterine cervix Objective Previously, we reported that insulinoma-associated protein 1 (INSM1) immunohistochemistry (IHC) showed high sensitivity for neuroendocrine carcinoma of the uterine cervix and was an effective method for histopathological diagnosis, but that its specificity remained to be verified. Therefore, the aim was to verify the specificity of INSM1 IHC in a large number of cases of non-neuroendocrine neoplasia (non-NEN) of the cervix. Methods RNA sequencing was performed for cell lines of small cell carcinoma (TCYIK), squamous cell carcinoma (SiHa), and adenocarcinoma (HeLa). A total of 104 formalin-fixed and paraffin-embedded specimens, 16 cases of cervical NEN and 88 cases of cervical non-NEN, were evaluated immunohistochemically for conventional neuroendocrine markers and INSM1. All processes without antigen retrieval were performed by an automated IHC system. Results The transcripts per million levels of INSM1 in the RNA sequences were 1505 in TCYIK, 0 in SiHa, and 0 in HeLa. INSM1 immunoreactivity was shown only in TCYIK. The immunohistochemical results showed that 15 of the 16 cases of cervical NEN were positive for INSM1; the positivity score of the tumor cell population and the stain strength for INSM1 were high. Two of the 88 cases of cervical non-NEN were positive for INSM1: one case each of typical adenocarcinoma and squamous cell carcinoma. The sensitivity of INSM1 for cervical NEN was 94%; specificity, 98%; positive predictive value, 88%; and negative predictive value, 99%. Conclusion INSM1 is an adjunctive diagnostic method with excellent specificity and sensitivity for diagnosing cervical NEN. Higher specificity can be obtained if morphological evaluation is also performed.
INTRODUCTION Neuroendocrine carcinoma (NEC) of the uterine cervix is a rare disease that represents 1% to 5% of cervical cancers [1,2]. Compared with other histologic types of cervical cancer, cervical NEC is highly malignant; it gives rise to hematogenous metastases from an early stage and has a poor prognosis [3][4][5]. The 5-year survival rate is approximately 90% for International Federation of Gynecology and Obstetrics (FIGO) stage IB1 ordinary cervical cancers, but it has been estimated to be only 55% to 63% for FIGO stage IB1 cervical NEC [3][4][5][6][7]. Since the biological characteristics of cervical NEN differ from those of other histological types of cervical cancer, the therapeutic strategy must also differ. Systemic chemotherapy even in the early stages has been suggested as necessary to achieve complete control of a local cervical NEN lesion [3,6]. The National Comprehensive Cancer Network Clinical Practice Guidelines in Oncology (NCCN guidelines) give specific therapeutic strategies for cervical small cell neuroendocrine carcinoma (SCNEC) [8,9]. Insulinoma-associated protein 1 (INSM1), a zinc-finger transcription factor related to neuroendocrine differentiation, is frequently expressed in neuroendocrine tumors. In a previous study, we showed that IHC is more sensitive for INSM1 than for synaptophysin (Syn), chromogranin A (CGA), and neural cell adhesion molecule (NCAM), and that INSM1 may therefore be useful as a neuroendocrine marker [10]. In cervical NEC, we showed that sensitivity was 95% and that positivity for INSM1 matched histological findings [10].
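The headline diagnostic metrics follow directly from the reported case counts (16 NEN, of which 15 were INSM1-positive; 88 non-NEN, of which 2 were positive). As a quick arithmetic cross-check, a minimal Python sketch follows; the 2x2 counts are inferred from the abstract, and the script is illustrative rather than part of the study:

```python
# Cross-check of the diagnostic metrics quoted in the abstract.
# Counts inferred from the reported results: 15 of 16 cervical NEN were
# INSM1-positive; 2 of 88 cervical non-NEN were INSM1-positive.
tp, fn = 15, 1   # NEN cases: INSM1-positive / INSM1-negative
fp, tn = 2, 86   # non-NEN cases: INSM1-positive / INSM1-negative

sensitivity = tp / (tp + fn)   # 15/16 = 0.938 -> 94%
specificity = tn / (tn + fp)   # 86/88 = 0.977 -> 98%
ppv = tp / (tp + fp)           # 15/17 = 0.882 -> 88%
npv = tn / (tn + fn)           # 86/87 = 0.989 -> 99%

for name, value in [("sensitivity", sensitivity), ("specificity", specificity),
                    ("PPV", ppv), ("NPV", npv)]:
    print(f"{name}: {value:.1%}")
```

All four values agree, to rounding, with the figures quoted above.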
Occasionally cervical NEC, in particular SCNEC, shows a superficial resemblance to poorly differentiated or basaloid squamous cell carcinoma, necessitating immunohistochemical study. Although INSM1 has been considered a promising marker for neuroendocrine differentiation, a subset of squamous cell carcinomas in some organs can be focally positive for INSM1, and thus its specificity and cut-off point remain a matter of debate. It was difficult to evaluate specificity in the previous study, because we performed INSM1 IHC by a manual procedure in accordance with the approach used in that work. Therefore, in this study, the specificity and sensitivity of INSM1 for uterine cervical neuroendocrine neoplasia (NEN) were examined under stable staining conditions with an automated IHC system. Case selection The study included patients diagnosed with FIGO (2018) [11] stage IB1 to IVB cervical NEN from 2004 to 2020 at St. Marianna University Hospital, Kanagawa, Japan, and Kumamoto University Hospital, Kumamoto, Japan. Two cases at Kumamoto University were reported previously as a case report [12]. In addition, we included cases evaluated as FIGO stage IB1 to IVB cervical non-NEC at St. Marianna University Hospital. All specimens for histological analysis were obtained before any chemotherapy or radiotherapy was performed. The study was approved first by the ethics committee of St. Marianna University Hospital (Kanagawa, Japan) (approval No. 5138) and then by the ethics committee of Kumamoto University Hospital (approval No. 2032). Cell culture HeLa and SiHa cell lines were obtained from the American Type Culture Collection (Manassas, VA, USA). The TCYIK cell line was obtained from RIKEN BRC through the National BioResource Project of MEXT/AMED, Japan. SiHa and HeLa cell lines were cultured in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% fetal bovine serum (FBS), 100 units/mL penicillin, and 100 μg/mL streptomycin. TCYIK cells were maintained in RPMI-1640 medium supplemented with 10% FBS, 100 units/mL penicillin, and 100 μg/mL streptomycin. All cell lines were grown at 37°C in a humidified atmosphere containing 5% CO2. RNA extraction and RNA-Seq RNA was extracted from HeLa, SiHa, and TCYIK cells using the RNeasy Mini kit (Qiagen) according to the manufacturer's instructions. The Agilent Bioanalyzer 2100 system (Agilent Technology) was used to check RNA integrity (RIN). Total RNA (1 µg) was processed using the NEBNext® Poly(A) mRNA Magnetic Isolation Module (NEB E7490), and 3 libraries were prepared with the NEBNext® Ultra™ RNA Library Prep Kit for Illumina® (NEB E7530) following the manufacturer's instructions. DNA libraries were sequenced using an Illumina NovaSeq 6000 to generate paired-end reads. Preparation of hematoxylin and eosin (H&E)-stained sections and immunohistochemically stained sections Four-μm-thick, formalin-fixed, paraffin-embedded (FFPE) sections were deparaffinized in xylene and rehydrated in a descending ethanol series. For the histologic re-evaluation of all cases, slides stained with H&E were used. Syn, CGA, and NCAM (HISTOSTAINER 48A; Nichirei Biosciences) have been traditionally and routinely used as neuroendocrine markers at our institution. Cell lines were centrifuged and formalin fixed, FFPE specimens were prepared, and IHC was performed by the same method as for the NEN sections. Histological analysis NEN and other histological diagnoses of the cervix were defined according to the 2020 World Health Organization (WHO) criteria [13]. NEN includes neuroendocrine tumors and NEC.
Neuroendocrine tumors can be categorized as neuroendocrine tumor, grade 1 (NET G1) and grade 2 (NET G2). NEC can be categorized as SCNEC and large cell neuroendocrine carcinoma (LCNEC), respectively. Tumors admixed with NEC were defined as either SCNEC or LCNEC with components of adenocarcinoma or squamous cell carcinoma. Evaluation of histological and immunohistochemical findings Histological and immunohistochemical findings were independently evaluated by pathologists (A.E., F.K., and J.K.). Immunohistochemical scoring was based on the proportion and strength of staining of positively stained cells. The amount of positive staining of the tumor cell population was scored depending on the proportion of positive cells in the tumor area, as follows: score 0 (0%), 1 (≤5%), 2 (6%-50%), and 3 (≥51%). Admixed tumors were evaluated in the area of the histological neuroendocrine tumor, whereby the non-neuroendocrine area was also evaluated. The cells with the highest stain strength were evaluated with a ×20 microscope objective and classified as grade 0 to grade 3+, as follows: grade 0, negative; 1+, weak (nuclear staining can barely be determined with a ×20 objective); 2+, medium (slightly weak staining in the nucleus at objective ×20, but enough to be easily determined); and 3+, strong (Table S1, Fig. S1). Cell line The principal component analysis (PCA) scatter plot and heatmap of the TCYIK, SiHa, and HeLa cell lines showed that they were completely different in their RNA sequences (Fig. S2). The transcripts per million (TPM) levels of INSM1 in the RNA sequences were 1505 in TCYIK, 0 in SiHa, and 0 in HeLa (Fig. S3). INSM1 immunoreactivity was shown only in the TCYIK cell line; none was shown in the SiHa and HeLa cell lines (Fig. 1). Two cases had score 1 and grade 3+ for INSM1, and the positive tumor cells were still easily recognizable (Fig. 3). In one of these cases, even though H&E staining identified a typical SCNEC, no staining was seen for Syn, CGA, and NCAM. This case was stage IIB (T2b, N0, M0), and the patient was alive at the time of the last examination 3 years after the initial diagnosis (Fig. 3A-F). The other case was negative for Syn and CGA, but partially positive for NCAM. This case was stage IIIC1 (T1b2, N1, M0). The disease recurred immediately after the initial treatment, radical hysterectomy and adjuvant chemotherapy. The patient was alive with disease at the time of the last examination 2 years after the initial diagnosis (Fig. 3G-L). Only one case with grade 1+ stain strength was a small cell carcinoma, admixed type. The tumor cell population positivity score was 2 for INSM1. This case was diagnosed by being positive for Syn and by its morphological features. Two of the 88 cases (2%) of cervical non-NEC were positive for INSM1 (Fig. 4). One case was adenocarcinoma with score 1 and grade 3+ for INSM1 (Fig. 4A and B). The positive cells with stain strength of grade 3+ were clearly located on the basal side of the glandular ducts in the tumor. Stainings for Syn, CGA, and NCAM were negative. This case was adenocarcinoma, HPV-associated (HPV type 16), and stage IB1 (T1b1, N0, M0). Follow-up data were not available after the initial diagnosis. The other case was squamous cell carcinoma with score 1 and grade 3+ for INSM1; stainings for Syn, CGA, and NCAM were negative (Fig. 4C and D). INSM1-positive cells with stain strength of grade 3+ were randomly scattered throughout the typical morphology of squamous cell carcinoma.
This case was HPV-negative and stage IIA1 (T2a1, N0, M0); no follow-up data after the initial treatment were available. DISCUSSION INSM1 is abundantly expressed in fetal developing neuronal and neuroendocrine tissue, but it is significantly reduced or restricted in adult tissues [14,15]. In normal adult tissues, expression has been confirmed in endocrine cells of tissues such as the adrenal medulla, pancreatic islets, and gastrointestinal enterochromaffin cells, and in cells thought to be endocrine cells in the normal bronchial epithelium of the lungs and in non-neoplastic prostate glands [16]. Fujino et al. [15] reported that NE differentiation was balanced between differentiation-suppressing transcription factors, such as Hes1, and differentiation-promoting transcription factors, such as INSM1 and human achaete-scute homolog 1 (hASH1), as seen in the normal lung epithelial system. In the present study, INSM1 expression was confirmed in the TCYIK cell line but not in the SiHa and HeLa cell lines (Fig. 1). These results showed that only TCYIK cells expressed INSM1 protein. At the same time, the antibody's accuracy was assured. Our previous report showed that the sensitivity of INSM1 IHC was 95% in 37 cases of NEC of the uterine cervix [10]. In these cases, the sensitivities of Syn, CGA, and NCAM were 86%, 86%, and 68%, respectively. In a subsequent study, we evaluated the diagnostic efficacy of INSM1 for cervical NEC [10]. However, in that study, we performed IHC by a manual procedure in accordance with the approach used in our previous work, which made it difficult to evaluate stain strength [10]. In the present study, all processes without antigen retrieval were performed by an automated IHC system, which enabled us to compare stain strengths and therefore evaluate the specificity accurately. In the present study, IHC was performed in 88 cases of non-NEN of the uterine cervix and showed a sensitivity of 94% and a specificity of 98%. These results demonstrate the high accuracy of INSM1 as a neuroendocrine marker for uterine cervical cancer. In recent studies, the sensitivity and specificity of INSM1 for cervical NEC were similar, with a sensitivity of 92% and a specificity of 98% [17,18]. Previously reported data showed that the positive rates for the traditional NE markers, i.e., Syn, CGA and NCAM, in cervical NEC were 64%-96%, 32%-85%, and 68%-88%, respectively [10,[19][20][21]. Regarding the accuracy of NEN diagnosis, among INSM1 alone, INSM1 combined with other neuroendocrine markers, and combinations of neuroendocrine markers other than INSM1, INSM1 alone showed the highest sensitivity for the diagnosis of cervical NEN in this study. INSM1 plus morphology seemed to be sufficient for the diagnosis of cervical NEN. In the present study, only one case, an SCNEC admixed with squamous cell carcinoma in which a biopsy specimen was obtained, showed an INSM1 stain strength of grade 1+. This case was diagnosed by being positive for Syn (and negative for CGA and NCAM) and by its morphological features. Although staining for INSM1 was confirmed in this case, in clinical practice it would be difficult to make a definitive diagnosis with this weak staining (grade 1+). Of 3 biopsy cases in the current study, one was the above case, and another case showed an INSM1 stain strength of grade 2+. Of 13 cases that underwent hysterectomy or conization, 2 cases showed an INSM1 stain strength of grade 2+; one was a hysterectomy and the other a conization. We consider that formalin fixation and the handling of specimens immediately after resection or biopsy may affect INSM1 stain strength.
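The two-part scoring rubric used above is easy to mis-apply at its 5% and 50% boundaries, so it may help to state it programmatically. A small illustrative helper follows (hypothetical; not part of the study's workflow):

```python
def population_score(percent_positive: float) -> int:
    """Map % of INSM1-positive tumor cells to the 0-3 score from the Methods:
    score 0 (0%), 1 (<=5%), 2 (6%-50%), 3 (>=51%)."""
    if percent_positive == 0:
        return 0
    if percent_positive <= 5:
        return 1
    if percent_positive <= 50:
        return 2
    return 3

# Stain strength is graded separately (0, 1+, 2+, 3+) from the strongest
# nuclear staining seen with a x20 objective, per Table S1.
assert population_score(4) == 1 and population_score(30) == 2 and population_score(60) == 3
```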
Recently, in small cell lung cancer, it was suggested that there are INSM1-low and INSM1-high subgroups. INSM1-low suggests low chemosensitivity or a poor prognosis [22]. The current data did not show a relationship between INSM1 expression status and prognosis (Table S2). However, in 2 cases of stage IVB, one case showed grade 1+ and score 2 INSM1 staining, and the other showed grade 2+ and score 2. It may be noteworthy that, in these IVB cases, staining of both was weak. A larger number of cases would be worth investigating. In the present study, there were 2 cases of INSM1-positive cervical non-NEN. One case was adenocarcinoma, HPV-associated; this tumor contained a small number of cells with stain strength grade 3+. Therefore, it may have been an ordinary-type adenocarcinoma with a component that showed intestinal-type differentiation; the positive cells were located on the basal side of the glandular ducts in the tumor, suggesting the possibility of symbiotic neuroendocrine cells. However, these cells were negative for Syn, CGA, and NCAM. The other INSM1-positive case was SCC, HPV-independent. This tumor showed the typical characteristics of SCC on H&E staining and was negative for Syn, CGA, and NCAM. Staining results for endometrial carcinomas are given in Tables S3 and S4; the results suggest that there are many false-positive cases of INSM1 in endometrial non-NEN. All authors discussed these problems and concluded that endometrial carcinoma should be excluded from this study at this time. The INSM1 staining strength in a small number of cases of endometrial NEC was often reported to be weak and less sensitive [17]. Zou et al. [18] reported staining by neuroendocrine markers in 138 cases of endometrial non-NEN; the tumor was positive in 6 cases (4%) for INSM1, 44 cases (32%) for CGA, 25 cases (18%) for Syn, and 23 cases (17%) for NCAM. In addition, 4 of 10 cases (40%) of undifferentiated endometrial carcinoma were positive for INSM1 [18]. In another study, Moritz et al. [24] reported that 17 of 26 cases (65.4%) of endometrioid adenocarcinoma and 10 of 10 cases (100.0%) of carcinosarcoma were stained by one or more of the neuroendocrine markers Syn, CGA, and NCAM. These results suggest that INSM1 is not as accurate as the other neuroendocrine markers for diagnosing endometrial NEN. In this study, the specificity of INSM1 for cervical cancer was clearly demonstrated by using an automated immunostaining system. INSM1 showed very high specificity for cervical NEN, suggesting that INSM1 is very useful for the diagnosis and differential diagnosis of NEN. However, a limitation is that the non-NENs included in this study were those diagnosed as adenocarcinoma or squamous cell carcinoma, and grade and subtypes were not considered. Since staining for these subtypes may be a pitfall, the next study is to examine cases that show such histological features. In conclusion, INSM1 IHC is an adjunctive diagnostic method with excellent specificity and sensitivity in the diagnosis of cervical NEN. The present study showed that higher specificity can be obtained if morphological evaluation is also performed. Table S1: Definition of positivity of the tumor cell population and stain strength
2022-10-18T06:17:34.601Z
2022-10-04T00:00:00.000
{ "year": 2022, "sha1": "e810ebf718cb37d3c32f8898d7c8b154cebc0f08", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.3802/jgo.2023.34.e1", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b7c45b9d0b0a7fd43feeb93da4d41a755c8c6e7a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119201369
pes2o/s2orc
v3-fos-license
Measurement of the Branching Fraction for $D^+_s\to\tau^{+}\nu_{\tau}$ and Extraction of the Decay Constant $f_{D_s}$ The branching fraction for the decay $D^+_s\to\tau^{+}\nu_{\tau}$, with $\tau^{+}\to e^{+}\nu_{e}\bar{\nu}_{\tau}$, is measured using a data sample corresponding to an integrated luminosity of 427 fb$^{-1}$ collected at center of mass energies near 10.58 GeV with the BABAR detector at the PEP-II asymmetric-energy $e^{+}e^{-}$ collider at SLAC. In the process $e^{+}e^{-}\to c\bar c\to D^{*+}_s \bar{D}_{\mathrm{TAG}} \bar{K} X$, the $D^{*+}_s$ meson is reconstructed as a missing particle, and the subsequent decay $D^{*+}_s\to D^+_s\gamma$ yields an inclusive $D^+_s$ data sample. Here $\bar{D}_{\mathrm{TAG}}$ refers to a fully reconstructed hadronic $\bar{D}$ decay, $\bar{K}$ is a $K^-$ or $\bar{K}^0$, and $X$ stands for any number of charged or neutral pions. The decay $D^+_s\to K^0_S K^{+}$ is isolated also, and from the ratio of event yields and known branching fractions, ${\cal B}(D^+_s\to\tau^{+}\nu_{\tau}) = (4.5\pm0.5\pm0.4\pm0.3)\%$ is determined. The pseudoscalar decay constant is extracted to be $f_{D_s} = (233\pm13\pm10\pm7)$ MeV, where the first uncertainty is statistical, the second is systematic, and the third results from the uncertainties on the external measurements used as input to the calculation.
The $D^+_s$ meson can decay purely leptonically via annihilation of the $c$ and $\bar s$ quarks to a virtual $W^+$ boson which decays to a lepton pair. These decays provide a clean probe of the pseudoscalar meson decay constant $f_{D_s}$, which describes the amplitude for the $c$ and $\bar s$ quarks to have zero spatial separation within the meson, a necessary condition for the annihilation to take place. In the Standard Model (SM), ignoring radiative processes, the total width is $$\Gamma(D^+_s\to\ell^{+}\nu_{\ell}) = \frac{G_F^2}{8\pi}\, f_{D_s}^2\, m_\ell^2\, M_{D^+_s} \left(1-\frac{m_\ell^2}{M_{D^+_s}^2}\right)^{\!2} |V_{cs}|^2, \quad (1)$$ where $M_{D^+_s}$ and $m_\ell$ are the $D^+_s$ and lepton masses, respectively, $G_F$ is the Fermi coupling constant, and $|V_{cs}|$ is the magnitude of the Cabibbo-Kobayashi-Maskawa (CKM) matrix element that characterizes the coupling of the weak charged current to the $c$ and $s$ quarks [1]. The leptonic decay of the $D^+_s$ meson is helicity-suppressed because it has zero spin, so that the final state neutrino and lepton must combine to form a spin-0 state.
Consequently, the left-handed neutrino forces the anti-lepton to be left-handed, thus suppressing the decay rate by the factor $m_\ell^2/M_{D^+_s}^2$. The net effect of helicity and phase space factors results in large differences in the leptonic branching fractions of the $D^+_s$ meson. The branching fractions for $D^+_s$ decays to $\ell^+\nu_\ell$, where $\ell^+ = e^+, \mu^+, \tau^+$, are roughly $2\times10^{-5} : 1 : 10$ in proportion. The large branching fraction for the $\tau^+$ decay mode motivates the use of the decay sequence $D^+_s\to\tau^+\nu_\tau$, $\tau^+\to e^+\nu_e\bar\nu_\tau$ in this analysis. The signal branching fraction ${\cal B}(D^+_s\to\tau^+\nu_\tau)$ relative to the well measured branching fraction ${\cal B}(D^+_s\to K^0_S K^+) = (1.49\pm0.09)\%$ [2] is determined and used to extract the decay constant $f_{D_s}$. In the context of the SM, predictions for meson decay constants can be obtained from lattice QCD calculations [3][4][5][6][7][8]. The most precise theoretical prediction for $f_{D_s}$, which uses unquenched lattice QCD, is $(241\pm3)$ MeV [5]. The most precise measurement of the branching fraction for $D^+_s\to\tau^+\nu_\tau$ ($\tau^+\to e^+\nu_e\bar\nu_\tau$) yields ${\cal B}(D^+_s\to\tau^+\nu_\tau) = (5.30\pm0.47\pm0.22)\%$ [10] and the value $f_{D_s} = (252.5\pm11.1\pm5.2)$ MeV. Decay constants of $D$ and $B$ mesons enter into calculations of hadronic matrix elements for several key processes and their theoretical predictions. For instance, the calculation of $B\bar B$ mixing requires knowledge of $f_B$. While leptonic decay of the $B$ meson is heavily CKM suppressed, leptonic decay of the $D^+_s$ meson is CKM favored, and thus the resulting more precise measurements of $f_{D_s}$ can be used to validate the lattice QCD calculations that are applicable to $B$-meson decay. Several models involving physics beyond the SM can induce a difference between the theoretical prediction and the measured value. These include a two-Higgs-doublet model [12] and a model incorporating two leptoquarks [13]. It is important to have high precision determinations of $f_{D_s}$, both from experiment and theory, in order to discover or constrain effects of physics beyond the SM. The Particle Data Group gives a world average of $f_{D_s} = (273\pm10)$ MeV [9], but this does not include the most recent results [10,11]. Measuring the branching fraction for $D^+_s\to\tau^+\nu_\tau$ requires knowledge of the total number of $D^+_s$ mesons in the parent analysis sample. Alternatively, the branching fraction can be measured relative to that for a $D^+_s$ decay mode with a well-known branching fraction, with the latter then used to obtain ${\cal B}(D^+_s\to\tau^+\nu_\tau)$; this is the procedure followed in the present analysis. The decay mode $D^+_s\to\phi\pi^+$ has been used often as a normalization mode. However, this is somewhat problematic, since determination of the branching fraction for this decay requires a Dalitz plot analysis of the $D^+_s\to K^+K^-\pi^+$ process. A description of the Dalitz plot intensity distribution incorporates contributions from other quasi-two-body amplitudes such as $K^*(892)^0K^+$, $K^*(1430)^0K^+$ and $f_0(980)\pi^+$. The contributions of these other decay modes to the specific mass range used to define the $\phi\to K^+K^-$ rate have to be taken into account. These depend on the mass and width values of the resonances and their interference effects, as well as the mass resolution of the experiment [2,14]. For these reasons, the decay mode $D^+_s\to K^0_S K^+$ is chosen instead as the reference mode in the present analysis.
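The $e : \mu : \tau$ proportions quoted above can be checked directly from the lepton-mass dependence of Eq. (1). A minimal sketch, assuming standard lepton and $D_s^+$ mass values (illustrative inputs, not taken from this paper):

```python
M_DS = 1.96849  # D_s+ mass, GeV/c^2
lepton_masses = {"e": 0.000511, "mu": 0.105658, "tau": 1.77684}  # GeV/c^2

def helicity_factor(m_l: float) -> float:
    """m_l^2 (1 - m_l^2 / M^2)^2, the lepton-mass dependence of Eq. (1);
    all other factors cancel in ratios of leptonic widths."""
    return m_l**2 * (1.0 - m_l**2 / M_DS**2) ** 2

ref = helicity_factor(lepton_masses["mu"])
for lep, m in lepton_masses.items():
    print(f"{lep}: {helicity_factor(m) / ref:.3g}")  # ~2.4e-05 : 1 : 9.8
```

The output reproduces the quoted pattern of roughly $2\times10^{-5} : 1 : 10$.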
The branching fraction of this reference mode is quite well known, and the branching fraction for $D^+_s\to\tau^+\nu_\tau$ can then be expressed as $${\cal B}(D^+_s\to\tau^+\nu_\tau) = {\cal B}(D^+_s\to K^0_S K^+)\,\frac{N_S^{\tau\nu_\tau}}{N_S^{K^0_S K^+}}\,\frac{\epsilon_{K^0_S K^+}}{\epsilon_{\tau\nu_\tau}}\,\frac{{\cal B}(K^0_S\to\pi^+\pi^-)}{{\cal B}(\tau^+\to e^+\nu_e\bar\nu_\tau)}, \quad (2)$$ where $N_S$ and $\epsilon$ refer to the number of events and total efficiency for the signal and the normalizing decay modes. The values of the branching fractions used for $K^0_S\to\pi^+\pi^-$ and $\tau^+\to e^+\nu_e\bar\nu_\tau$ are $(69.20\pm0.05)\%$ and $(17.85\pm0.05)\%$, respectively [9]. This analysis uses an integrated luminosity of 427 fb$^{-1}$ for $e^+e^-$ collisions at center of mass (CM) energies near 10.58 GeV, corresponding to the production of approximately 554 million $c\bar c$ events. The data were collected with the BABAR detector at the SLAC PEP-II asymmetric-energy collider. The BABAR detector is described in detail in Refs. [15,16]. Charged-particle momenta are measured with a 5-layer, double-sided silicon vertex tracker (SVT) and a 40-layer drift chamber (DCH) embedded in the 1.5-T magnetic field. A calorimeter consisting of 6580 CsI(Tl) crystals is used to measure electromagnetic energy. Charged pions and kaons are identified by a ring-imaging Cherenkov detector (DIRC) and by their specific ionization loss in the SVT and DCH. Muons penetrating the solenoid are detected in the instrumented magnet flux return. The $D^+_s\to\tau^+\nu_\tau$ branching fraction measurement is carried out via the $D^{*+}_s$ production process $e^+e^-\to c\bar c\to D^{*+}_s D_{\mathrm{TAG}} K^{0,-} X$, and the subsequent decay $D^{*+}_s\to D^+_s\gamma$. Here, $D_{\mathrm{TAG}}$ is a fully reconstructed hadronic $D$ meson decay, required to suppress the large background from non-charm continuum $q\bar q$ pair production; $X$ represents a set of any number of pions ($\pi^0$ and $\pi^\pm$) produced in the $c\bar c$ fragmentation process, and $K^{0,-}$ represents a single $K^0$ or $K^-$ from $c\bar c$ fragmentation required to assure overall balance of strangeness in the event. The photon from the decay $D^{*+}_s\to D^+_s\gamma$ is referred to as the signal photon. Event selection begins with $D_{\mathrm{TAG}}$ construction. Candidates are reconstructed in a set of hadronic $D$ decay modes. The $\chi^2$ probability for the geometric vertex fit of the TAG decay products must exceed 0.1%. The minimum required CM momentum of the $D_{\mathrm{TAG}}$ candidate is 2.35 GeV/$c$. It is chosen near the kinematic limit for charm mesons arising from $B$ decays in order to eliminate the associated large combinatoric background. The mass of the $D_{\mathrm{TAG}}$ candidate must lie in the range 1.7-2.1 GeV/$c^2$. A single $K^-$ or $K^0_S$ from $c\bar c$ fragmentation that does not have tracks in common with the $D_{\mathrm{TAG}}$ combination is found. Kaons are identified using information from the DCH and DIRC. A $K^0_S$ candidate is reconstructed through its decay to two charged pions, which must originate from a common vertex. The dipion invariant mass must be within 25 MeV/$c^2$ of the nominal $K^0_S$ mass value [9], and the flight distance must be greater than three times its resolution. Neutral pions are reconstructed through their decay to two photons; the invariant mass of the photon pair must be within 10 MeV/$c^2$ of the nominal $\pi^0$ mass value [9]. Any charged or neutral pion not associated with the $D_{\mathrm{TAG}}$ or the fragmentation kaon is assigned to the fragmentation $X$ candidate. A $D^{*+}_s$ candidate is reconstructed as the missing particle, with its four-momentum defined as $P_{D^{*+}_s} = P_{e^+e^-} - (P_{D_{\mathrm{TAG}}} + P_{K^0_S/K^-} + P_X)$, where the four-momenta ($P$) are those of the initial state, the $D_{\mathrm{TAG}}$, the fragmentation kaon and the fragmentation $X$, respectively. The mass of the $D^{*+}_s$ candidate must be within 200 MeV/$c^2$ of the nominal $D^{*+}_s$ mass value [9].
The production vertex of surviving candidates is then fitted using mass, energy and collision point constraints. In order to be consistent with the decay sequence $D^+_s\to\tau^+\nu_\tau$, $\tau^+\to e^+\nu_e\bar\nu_\tau$, a $D^+_s$ candidate is selected by requiring that there be a single $e^+$ in the event. The $e^+$ must have a minimum number of coordinates in the SVT and the DCH to ensure a good quality track. Similarly, the decay $D^+_s\to K^0_S K^+$ is identified by requiring a single $K^0_S$ and $K^+$ pair. In addition, these candidates must not have tracks in common with the $D_{\mathrm{TAG}}$ or the fragmentation kaon. The four-momentum of the $D^+_s$ candidate, for both the signal and the normalization mode, is defined as the recoil in the two-body decay $D^{*+}_s\to D^+_s\gamma$, $P_{D^+_s} = P_{D^{*+}_s} - P_\gamma$, where $P_\gamma$ is the four-momentum of the signal photon candidate, which must have CM energy greater than 100 MeV. The resulting $D^+_s$ candidate must have mass within 200 MeV/$c^2$ of the nominal $D^+_s$ mass value [9]. Surviving $D^+_s$ candidates are further separated from background by requiring that the $\chi^2$ probability for the $D^{*+}_s$ kinematic vertex fit exceed 0.1%, and that the CM momentum of the $D^+_s$ candidate exceed 3.0 GeV/$c$. The whole reconstruction procedure was evaluated using GEANT4-based [18] Monte Carlo (MC) events generated with EvtGen [19]. The generated MC samples for $D^+_s\to\tau^+\nu_\tau$ ($\tau^+\to e^+\nu_e\bar\nu_\tau$), $D^+_s\to K^0_S K^+$ and $c\bar c$ correspond to 14, 26 and 2 times the acquired data samples, respectively. After the final selection, the only background decay modes which contribute to the peak at the recoil $D^+_s$ mass (with the expected yields and shape determined from MC events weighted to 427 fb$^{-1}$) are $D^+_s\to\eta e^+\nu_e$ (226 events), $D^+_s\to\eta' e^+\nu_e$ (24 events), $D^+_s\to\phi e^+\nu_e$ (75 events) and $D^+_s\to K^0_L e^+\nu_e$ (59 events). The yields for the signal and normalization mode are determined from unbinned maximum-likelihood fits to the respective recoil $D^+_s$ mass and extra energy ($E_{\mathrm{extra}}$) distributions. As described earlier, the recoil $D^+_s$ four-momentum is defined as $P_{D^+_s} = P_{D^{*+}_s} - P_\gamma$; $E_{\mathrm{extra}}$ is reconstructed as the sum of the CM energy of all photons in the event with laboratory energy greater than 30 MeV which are not associated with any of the reconstructed charged-particle tracks or reconstructed neutral pions of the event. The signal photon is also excluded. The value of $E_{\mathrm{extra}}$ has been found to discriminate most effectively between signal and background events when it is required to be in the range 0-0.5 GeV. The distributions of $E_{\mathrm{extra}}$ for signal MC, background MC (events passing selection that do not include the signal) and data are shown in Fig. 1. The difference between data and MC at $E_{\mathrm{extra}} = 0$ is due to MC under-estimation of beam-related backgrounds and of noise in the calorimeter. It has been verified that the MC gives a good description of the $E_{\mathrm{extra}}$ distribution in data for values of $E_{\mathrm{extra}}$ above 20 MeV. Correlations between the recoil $D^+_s$ mass and $E_{\mathrm{extra}}$ were found to be negligible. Due to the discontinuity in the $E_{\mathrm{extra}}$ distribution, the data are divided into two samples from which the results are determined using a simultaneous unbinned maximum likelihood fit. For $E_{\mathrm{extra}} = 0$, only the recoil $D^+_s$ mass is used as a discriminating variable. For $E_{\mathrm{extra}} > 0$, $E_{\mathrm{extra}}$ and the recoil $D^+_s$ mass are used. The fit components are signal, peaking background and non-peaking background.
The $D^+_s\to K^0_S K^+$ mode is found to have no peaking background in MC, and thus no peaking background component is included in the fit. The signal recoil $D^+_s$ mass probability density function (PDF) consists of a bifurcated Gaussian function with a tail component (BFG) [20], plus a Novosibirsk function [21]. The shape of the $E_{\mathrm{extra}}$ distribution for background is taken from the data sidebands of the recoil $D^+_s$ mass distribution. Second order polynomial functions are used for the signal and peaking background PDFs for $E_{\mathrm{extra}}$. A Novosibirsk function is used for the background $E_{\mathrm{extra}}$ PDF. The parameters describing the shape of each PDF are obtained from fits to MC distributions. The systematic uncertainties introduced by this procedure are discussed below. Only the parameters specifying the number of signal and background events are allowed to vary in the fit. The number of peaking background events is determined from the MC normalized to the data sample. Using MC pseudoexperiments, the fitting procedure is found to yield unbiased estimates of the signal yield. The fits to data are shown in Figs. 2-5, where the solid curves result from the fits and the dashed curves represent the signal contribution; they yield $N_S^{\tau\nu_\tau} = 448\pm36$ events and $N_S^{K^0_S K^+} = 333\pm28$ events. Using Eq. (2) and the total efficiencies ($\epsilon_{\tau\nu} = 0.075\%$, $\epsilon_{K^0_S K^+} = 0.044\%$), the branching fraction for $D^+_s\to\tau^+\nu_\tau$ is measured to be $(4.5\pm0.5)\%$, where the uncertainty is statistical only. The systematic uncertainties associated with the selection criteria are evaluated by comparing the selection efficiencies for MC and data separately for each selection criterion. The selection efficiency is defined as $N_{N-1}/N_{\mathrm{All}}$, where $N_{N-1}$ is the number of events passing all selection criteria except the one being evaluated, and $N_{\mathrm{All}}$ is the number of events passing all selection criteria. The uncertainty is then defined as $|1 - R_{D^+_s\to\tau^+\nu_\tau}/R_{D^+_s\to K^0_S K^+}|$, where $R$ is the ratio of data and MC efficiencies for each decay mode. The uncertainty associated with the $\chi^2$ probability for the kinematic vertex fit is 1.1%, and that with the CM momentum of the $D^+_s$ meson is 2.7%. The uncertainties in the PDF distributions are evaluated by individually varying each PDF parameter within its uncertainty; the individual contributions are added in quadrature. The impact of the uncertainty in the number of peaking background events for $D^+_s\to\tau^+\nu_\tau$ on the signal yield is assessed by varying the individual branching fractions within their uncertainties and refitting for the number of signal events. The peaking background modes and their branching fractions are: $D^+_s\to\eta e^+\nu_e$ $(2.9\pm0.6)\%$, $D^+_s\to\eta' e^+\nu_e$ $(1.02\pm0.33)\%$, $D^+_s\to\phi e^+\nu_e$ $(2.36\pm0.26)\%$ and $D^+_s\to K^0_L e^+\nu_e$ $(0.19\pm0.05)\%$ [9]. Other sources of systematic uncertainty include tracking efficiency (0.34% per track) and $e^+$ identification efficiency (0.82%). The uncertainties from tagging and fragmentation particles cancel in the ratio of the signal and reference modes. Table I summarizes the systematic uncertainty estimates on the branching fraction. In conclusion, using an integrated luminosity of 427 fb$^{-1}$ collected with the BABAR detector, the branching fraction for the decay $D^+_s\to\tau^+\nu_\tau$ is measured to be $(4.5\pm0.5\pm0.4\pm0.3)\%$, where the first uncertainty is statistical, the second systematic and the third from the uncertainties on the branching fractions for $D^+_s\to K^0_S K^+$, $K^0_S\to\pi^+\pi^-$ and $\tau^+\to e^+\nu_e\bar\nu_\tau$ [9].
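Inserting the fitted yields, total efficiencies, and external branching fractions into Eq. (2) reproduces the central value. A minimal sketch with central values only (uncertainties ignored; an illustration, not the collaboration's analysis code):

```python
# Inputs quoted in the text (central values only).
N_tau, N_ksk = 448.0, 333.0          # fitted signal yields
eff_tau, eff_ksk = 0.00075, 0.00044  # total efficiencies (0.075%, 0.044%)
B_ksk = 0.0149                       # B(Ds+ -> K0S K+)
B_ks_pipi = 0.6920                   # B(K0S -> pi+ pi-)
B_tau_e = 0.1785                     # B(tau+ -> e+ nu nubar)

# Eq. (2): B(Ds -> tau nu) = B(Ds -> K0S K+) * (N_tau / N_ksk)
#          * (eff_ksk / eff_tau) * B(K0S -> pi pi) / B(tau -> e nu nu)
B_taunu = B_ksk * (N_tau / N_ksk) * (eff_ksk / eff_tau) * B_ks_pipi / B_tau_e
print(f"B(Ds+ -> tau+ nu) = {B_taunu:.3%}")  # ~4.6%, consistent with (4.5 +- 0.5)%
```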
The decay constant is extracted using Eq. (1) and the values of $m_\tau$ ($1776.84\pm0.17$ MeV/$c^2$), $M_{D_s}$ ($1968.49\pm0.34$ MeV/$c^2$), $\tau_{D_s}$ ($0.500\pm0.007$ ps), and by assuming $|V_{cs}| = |V_{ud}| = 0.97425\pm0.00022$ [22]. The value obtained is $f_{D_s} = (233\pm13\pm10\pm7)$ MeV, where the first uncertainty is statistical, the second is systematic, and the third is from the uncertainties on the external measured quantities used in the calculation [9]. The results presented here agree to within one standard deviation with the most recent CLEO-c result for $D^+_s\to\tau^+\nu_\tau$ [10,11] and recent unquenched lattice QCD calculations of $f_{D_s}$ [5,7,8]. We are grateful for the excellent luminosity and machine conditions provided by our PEP-II colleagues, and for the substantial dedicated effort from the computing organizations that support BABAR. The collaborating institutions wish to thank SLAC for its support and kind hospitality.
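As a closing numerical illustration, Eq. (1) can be inverted for $f_{D_s}$ using the central values quoted above, with the leptonic width taken as ${\cal B}\cdot\hbar/\tau_{D_s}$. A minimal sketch (central values only; illustrative, not the paper's error-propagated extraction):

```python
import math

# Central values quoted in the text, in GeV-based natural units.
GF = 1.16637e-5      # Fermi constant, GeV^-2
m_tau = 1.77684      # tau mass, GeV
M_Ds = 1.96849       # D_s+ mass, GeV
tau_Ds = 0.500e-12   # D_s+ lifetime, s
Vcs = 0.97425
B_taunu = 0.045      # measured branching fraction
hbar = 6.58212e-25   # GeV*s

# Invert Eq. (1): Gamma(tau nu) = B * (hbar / tau_Ds)
gamma = B_taunu * hbar / tau_Ds
phase = m_tau**2 * M_Ds * (1.0 - m_tau**2 / M_Ds**2) ** 2
f_Ds = math.sqrt(gamma / (GF**2 / (8.0 * math.pi) * phase * Vcs**2))
print(f"f_Ds = {f_Ds * 1e3:.0f} MeV")  # ~233 MeV, matching the quoted value
```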
2010-03-17T19:23:55.000Z
2010-03-16T00:00:00.000
{ "year": 2010, "sha1": "e01e72c9c75ab120a8fddf60d18806331b7b55d8", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "e01e72c9c75ab120a8fddf60d18806331b7b55d8", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
27562028
pes2o/s2orc
v3-fos-license
Asian Option Pricing with Monotonous Transaction Costs under Fractional Brownian Motion A geometric-average Asian option pricing model with a monotonous transaction cost rate under fractional Brownian motion was established. The method of partial differential equations was used to solve this model and analytical expressions for the Asian option value were obtained. The numerical experiments show that the Hurst exponent of the fractional Brownian motion and the transaction cost rate have a significant impact on the option value.
Introduction In 1989, Peters [1] first proposed that fractional Brownian motion could be used to describe the changes of asset prices. In 2000, the theory of stochastic integration with respect to fractional Brownian motion was studied by Duncan et al. [2], and the fractional Itô formula and Girsanov theorem under fractional Brownian motion were derived. The fractional Itô integral was further developed by Biagini et al. [3] for H ≥ 1/2. Equivalent definitions of the fractional Itô integral were introduced by Alos et al. [4] and Bender [5]. Necula [6] utilized the knowledge of fractal geometry and deduced the Black-Scholes option pricing formula under fractional Brownian motion, which was of great significance to the development of option pricing with fractional Brownian motion. The transaction cost is an important factor affecting option pricing. Many scholars have studied the pricing of contingent claims with transaction costs. Leland [7] groundbreakingly proposed that a modified volatility should be applied to solve the problem of hedging error brought by transaction costs in the Black-Scholes model. Barles and Soner [8] assumed that the investor's preference satisfied an exponential utility function and provided a more complex model. The Black-Scholes option pricing model with transaction costs was given by Amster et al. [9]. Liu and Chang [10] and Wang et al. [11] studied European option pricing with transaction costs under fractional Brownian motion. Pricing studies for Asian options are mostly based on standard Brownian motion, and Asian option pricing under fractional Brownian motion had not been studied systematically. Based on the previous references, a geometric Asian option pricing model with a monotonous transaction cost rate under fractional Brownian motion is presented, and the analytic expression of the Asian option value is derived. The influence of the Hurst exponent and transaction costs on the Asian option value is discussed through numerical calculations. This paper's outline is as follows. In Section 2, we study the geometric-average Asian option pricing model under fractional Brownian motion. The closed-form solution of the pricing model is presented in Section 3. In Section 4, numerical examples are given. Section 5 serves as the conclusion of the whole paper.
(iii) Suppose V shares of the underlying stock are bought (V > 0) or sold (V < 0) at the price ; then the transaction cost is given by ℎ(V )|V | in either buying or selling; here ℎ(V ) = −V (, > 0) is a monotone transaction cost rate. (iv) The expected return rate of the portfolio equals the risk-free interest rate. (v) The portfolio is revised every , where is a finite and fixed small time step. Let = (, , ) be the value of the geometric-average Asian call option on the time , where = (1/) ∫ 0 ln is the geometric average of the underlying asset price in the time period of [0, ]. Construct a portfolio Π : long position of a Geometricaverage Asian call option and short position of Δ shares of underlying asset.Then the value of the portfolio at current time is In the period [, +], the change in the value of the portfolio is where Owing to we have hence we have By the assumption (iv) we have And then Owing to then Choosing Δ = / and substituting it into (10), the following formula can be obtained: Substituting these into (13), we get the following conclusions. Option Pricing Formulas Theorem 3. Supposing the underlying asset prices satisfy (2), then at time the value (, , ) of Geometric-average Asian call option with transaction costs with expiration date and exercise price is as follows: where which satisfies the conditions () = () = () = 0; then we have Substituting ( 23) into (21), we have Let and combined with terminal conditions () = () = () = 0, we can get Thus (21) becomes According to the theory of classical heat conduction equation solution, we have Let By variable restored, we obtain that where and the remaining symbols accord with Theorem 3. Numerical Experiments In this section, the influence of monotone transaction rate parameters and the Hurst exponent on Asian option value will be discussed through applying MATLAB software.The values of the parameters of geometric-average Asian options are assumed as follows: With the option pricing formulas (33) presented, the value of the option can be calculated.Figures 1 and 2 give the relationships between the price of the underlying assets and the value of Asian call option and put option with different Hurst exponent.From the figures, Hurst exponent is inversely proportional to the value of Asian option.Figures 3 and 4 demonstrate the changes of Asian call option with the stock price under the different parameter and parameter .We can draw such a conclusion: the option value increases with the parameter increasing and decreases with the parameter increasing.This is mainly because transaction cost rate is a decreasing function of and an increasing function of . Discussions and Conclusions In this paper, the problem of Asian option pricing with monotonous transaction cost rate under fractional Brownian motion was studied by using the portfolio technology and no arbitrage principle, and the pricing model was established.This model was solved by the method of partial differential equations, and the analytical expressions of the Asian option value were obtained.The numerical experiments showed that Hurst exponent of the fractional Brownian motion and transaction cost rates have a significant impact on the option value. Figure 3 : Figure 3: Asian put option pricing corresponding to the different . Figure 4 : Figure 4: Asian put option pricing corresponding to the different .
2018-04-03T01:00:32.465Z
2013-12-09T00:00:00.000
{ "year": 2013, "sha1": "b5c7a7d47ca1fb715ab02fdad711fafcfb463035", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/jam/2013/352021.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "b5c7a7d47ca1fb715ab02fdad711fafcfb463035", "s2fieldsofstudy": [ "Business", "Mathematics" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
237963383
pes2o/s2orc
v3-fos-license
A prospective study of functional outcome of comminuted metaphyseal distal femur fracture treated with lateral locking compression plate and medial augmentation with TENS Background: Distal femur fractures are complex injuries producing long term disability and present considerable challenges in management. These fractures pose challenges to the treating surgeon because of the thin cortex of the femoral condyles, wide medullary canal, relative osteopenia, short condylar fragment and comminution involving the articular surface. A distal femur fracture disrupts normal knee joint functioning and hence needs anatomical reduction and stable internal fixation to prevent crippling disabilities and hardware failure. Objective: To evaluate the functional and radiological outcome of comminuted metaphyseal fractures of the distal femur treated by lateral locking compression plate and medial TENS nail, using Neer's score. Methods: In this study, 20 cases of comminuted metaphyseal fracture of the distal femur were operated on between November 2018 and April 2020 with a distal femur lateral locking compression plate and medial augmentation with a TENS nail. Patients were selected based on inclusion and exclusion criteria and were followed up for 12 months. The results were analysed with Neer's score. Results: 20 patients with comminuted metaphyseal distal femur fractures, AO-Muller type A3 (15 patients) and C2 (5 patients), were studied. The mean age of the patients was 45.5 years, with age ranging from 20 years to 80 years. Right sided fractures were predominant. In 65% of cases the mode of injury was road traffic accident and the rest were self-falls. 2 cases were operated on with the MIPPO technique and the rest were operated on with a standard open lateral approach. The average surgical procedure time was 119.5 minutes in our study. The average duration to radiological union was 18.6 weeks and the average duration to full weight bearing was 20.5 weeks. Complications such as superficial wound infection, knee pain and stiffness were observed in 9 patients. The Neer's score was excellent in 45%, good to fair in 50% and poor in 5%. Conclusion: Comminuted distal femur fractures need dual column fixation to achieve bone healing and restore function of the affected limb in the shortest time without compromising stability. The advantage of medial augmentation with TENS is that active range of motion can be started earlier, with stable internal fixation that does not allow varus collapse, mal-union and further implant failure.
Introduction In the last few decades, rapid industrialization and the fast pace of life have brought both comforts and catastrophes like road traffic accidents, crippling many young lives. Distal femur fractures are complex injuries producing long term disability and present considerable challenges in management. They constitute 4 to 7% of all femur fractures. Distal femoral fractures occur predominantly in two patient populations: young persons, especially men, after high energy trauma, and elderly persons, especially women, after low energy injuries [1]. These fractures pose challenges to the treating surgeon because of the thin cortex of the femoral condyles, wide medullary canal, relative osteopenia, short condylar fragment and comminution involving the articular surface. Most supracondylar fractures were treated non-operatively till 1970; however, loss of knee motion, angulatory deformities, knee joint incongruity, as well as the complications of recumbency, led to better methods of treatment [2,3].
Anatomic reduction of the articular surface, restoration of limb alignment, and early mobilization have been shown to be effective ways of managing most distal femoral fractures. The majority of distal femoral fractures require surgical stabilization, as the results of conservative treatment are frequently unsatisfactory [4]. Supracondylar femur fractures were historically treated with condylar buttress plates [5]. Fixed-angle implants, angled blade plates, intramedullary retrograde nails and dynamic supracondylar screws were found to have a superior biomechanical design for decreasing varus collapse events compared to condylar buttress plates [6,7]. The locking plate evolved along the line of the Dynamic Compression Plate (DCP). Locking plates have increased biomechanical resistance with the possibility of greater numbers of fixation screws in the distal femur metaphysis. The regular Distal Femur Locking Compression Plate is unable to hold on to every fragment because of its prefixed screw trajectory. The Variable Angle Locking Compression Plate (VA-LCP) has overcome this angular deficit of the prefixed trajectory of screws [8]. Fracture type, muscle forces acting on the distal part of the femur, the weight of the lower extremity and the natural gravity of the entire limb may increase the lever arm, affect fracture stabilization and warrant load neutralization. Numerous bio-mechanical studies have been performed in the past and many others are currently ongoing with the aim of achieving an optimally stable construct [9]. Complications of distal femur fracture are mal-union, non-union, implant failure, varus angulation, limb length discrepancy, infections and secondary osteoarthritis. For comminuted metaphyseal fractures of the distal femur, an important factor for instability of the fixation mechanics is the medial cortical defect and varus collapse, which leads to early failure of internal fixation. A single LCP often lacks adequate support for medial cortical defects, so the vertical load may cause a bending tendency and unstable fixation, delayed healing, or nonunion. An option to improve medial stability is medial augmentation [10,11]. The purpose of this study is to evaluate the outcome of treatment of distal femur fractures with a lateral LCP and medial augmentation with a TENS nail. Materials and Methods The study was conducted in the Department of Orthopaedics, Sanjay Gandhi Institute of Trauma and Orthopaedics, Bangalore. This study consisted of 20 patients visiting the outpatient and emergency departments of the hospital. Patients diagnosed with distal femur fractures (AO Muller subtypes A3 and C2) who were operated on during the period from November 2018 to April 2020 were included in the study. The follow up duration ranged from 6 months to 12 months. Patients admitted with distal femur fracture meeting the inclusion and exclusion criteria were selected for the study. Prior informed consent was obtained and pre-operative anesthetic evaluation was done. Pre-op planning of fixation was done before proceeding with surgery. Surgical procedure After the patient is anaesthetised, the patient is placed in a supine position on a radiolucent table. A sand bag is placed under the ipsilateral hip and a pillow is placed under the knee joint. The limb is painted and draped up to the level of the iliac crest. The fracture is approached by a longitudinal incision over the lateral aspect of the distal thigh, centred over the lateral epicondyle, extending distally to the level of Gerdy's tubercle, with proximal extension as required.
The fascia lata is incised longitudinally along the skin incision up to the iliotibial band. The incision extends distally through the lateral joint capsule, and meniscal injury is avoided. The vastus lateralis is reflected off the intermuscular septum along the linea aspera in the anterior direction. Perforating vessels traversing the muscular plane are ligated. Care is taken to avoid unnecessary stripping of soft tissue at the fracture site. Once the fracture site is exposed, the haematoma is drained. The fracture fragments are freshened and cleaned. In case of fracture extending into the articular area, the first step is reduction of the femoral condyles with maintenance of articular congruity. The medial and lateral femoral condyles are reduced and held together with the help of a patella clamp. The condyles are secured with K-wires; once the condyles are reduced, the next step is alignment of the condyles with the femoral shaft. In case of difficult reduction of the condyles with the shaft, reduction is aided with a femoral distractor or knee flexion with a pillow under the knee joint. Once the fracture site is reduced, the fracture ends are held with bone clamps and secured with temporary K-wires. K-wires are placed in such a way that they are not in the way of the plate. The distal femoral LCP is applied along the lateral aspect of the distal femur and the plate is reduced to the bone with a cortical screw. A partial number of screws is applied proximally and distally as planned (two screws distally and only one screw proximally). Under C-arm guidance, a 2 cm incision is made over the medial aspect of the distal femur, centered over the medial epicondyle. The entry portal is made with the help of a curved awl about 2 cm from the articular surface. Initially the entry of the bone awl is perpendicular to the bone; it is then advanced at an angle of 45 degrees to the femur. An appropriately sized TENS nail is pre-curved and then passed from the medial side into the medullary canal of the femur up to the lesser trochanter. The final position is then checked under C-arm guidance and the remaining screws are placed to fix the lateral plate. The wound is then closed in layers, and a drain and antiseptic dressing are applied. Post-operative wound care and physiotherapy were done according to hospital protocol. Mobilization with non-weight bearing was started from the first post-operative week until 6-8 weeks, depending on the fracture pattern, followed by partial weight bearing after confirmation of the beginning of the healing process until fracture union. Further, full weight bearing was allowed depending on the progress of the fracture healing pattern clinically and radiologically. Patients were followed up every 4 weeks up to 3 months post-operatively, then monthly until 6 months, and at the 9th month and 1 year. Based on these data the final outcome was assessed according to Neer's scoring system. Patients in the age group of 20-80 years were considered for the study, with an average age of 45.5 years. Males were the majority in the study group, with 70% of the total cases. Right sided fractures were more common than left sided. The most common mode of injury in the study group was road traffic accident (65%), followed by self-fall (35%). The majority of patients in the study group were Muller subtype A3 (75%) and the rest were C2. The average time to radiological union was 18.6 weeks in the study population. The average time to full weight bearing was 20.5 weeks. In our study 45% of the patients had an excellent Neer's score, 40% had good, 10% had fair and 5% had a poor Neer's score.
In our study 3 patients had knee pain (on the medial aspect of the knee over the cut ends of the TENS nail), 3 had infection at the TENS insertion site, 2 had stiffness, and 1 had shortening and stiffness of the knee. Discussion Out of 20 cases there were 14 males and 6 females. The mean age of the patients was 45.5 years, with age ranging from 20 years to 80 years. In our study the mean age of the patients was 45.5 years, which was comparable to previous studies done by Lucas et al. [12] (mean age of 39 years) and Yeap et al. [13] (mean age of 44 years). The right distal femur was involved in 15 cases and the left distal femur in 5 cases. The study included only closed femur fractures. The mode of injury was RTA in 13 cases, and 7 cases had a history of self-fall. In our study the most common mode of injury was RTA (65%) followed by self-fall (35%). This result was comparable with the results of studies done by Lucas et al. [12] (RTA 79%) and Christian et al. [14] (RTA 69%). Out of 20 cases, 4 cases had associated injuries (not in the ipsilateral limb). In patients with associated injuries early rehabilitation was delayed, but the results were comparable with the other cases of the study. In our study we studied only comminuted metaphyseal distal femur fractures, Muller type A3 (15 patients) and C2 (5 patients). 2 cases were operated on with the MIPPO technique and the rest were operated on with a standard open lateral approach. All patients were approached on the medial side with a mini incision of 2 cm for medial TENS insertion. The average surgical procedure time was 119.5 minutes in our study. This was longer than in a similar study by Manish Singh et al. [15] (92 minutes), but shorter than in dual plating studies such as Radwan et al. [16] (148 minutes), Zhibiao Bai et al. [17] (180 minutes) and Mohammed A. Imam et al. [18] (213 minutes). Most of our patients had excellent postoperative knee flexion, with an average knee flexion of 110.2 degrees. 2 cases had an extensor lag of 5 degrees while 1 patient had an extensor lag of 10 degrees. Our results were also compared with those of Mohammed A. Imam et al. [18]. Conclusion High energy trauma in young individuals and low energy osteoporotic fractures in elderly patients present a challenging task to treating surgeons; the nature and complexity of the fracture make single column fixation unsatisfactory. So comminuted distal femur fractures need dual column fixation to achieve bone healing and restore function of the affected limb in the shortest time without compromising stability. Dual plating results in extensive soft tissue stripping on both sides of the femur, resulting in reduced blood supply, delayed union, non-union and failure of the implant. So the medial column was fixed with the least possible damage to the periosteum using a TENS nail. Medial augmentation with TENS has considerably reduced the operating time and complications. The advantage of lateral LCP and medial augmentation with TENS is that active range of motion can be started earlier, with stable internal fixation that does not allow varus collapse, mal-union and further implant failure. Maintenance of articular congruity is better. The end functional results depend on the age of the patient, the nature of the bone, the complexity of the fracture and optimum post-operative rehabilitation. Though out of the purview of our present study, the results with lateral LCP and medial augmentation with TENS have been significantly better. The long term results using a medial TENS nail need further study over a longer period in a larger sample.
LEAKAGE ANALYSIS OF THE RSOR ALGORITHM

Abstract: The RSOR algorithm is a recursive algorithm that has been proposed as an alternative to the RLS algorithm for updating adaptive filter parameters. As with other algorithms, the forgetting factor, filter length and relaxation parameter significantly affect the performance of the RSOR algorithm. In this study, using an adaptive FIR filter in system identification mode, the effect of the forgetting factor, filter length and relaxation parameter on the leakage phenomenon of the RSOR algorithm was analyzed. For this purpose, the effect of measurement noise on the adaptive filter output, namely the leakage phenomenon, was first explained analytically, and then the influence of the forgetting factor and the other filter parameters on this leakage phenomenon was examined. The results obtained from the simulation studies are compared with similar algorithms.

INTRODUCTION

The RLS (Recursive Least Squares) algorithm is preferred in many applications due to its high convergence speed, despite its high computational load (Haykin, 2002; Diniz, 2013). On the other hand, some alternative algorithms have been proposed: the RSOR (Recursive SOR) algorithm, based on the single-step SOR (Successive Over-Relaxation) iteration (Hatun and Koçal, 2012), and the RI (Recursive Inverse) algorithm, based on the one-step gradient iteration (Ahmad et al., 2011a). In these types of algorithms, the forgetting factor, a positive number taken close to 1, is used for tracking parameter changes. If the forgetting factor is chosen less than 1, the parameter tracking capability of the algorithm increases, but the stability and misadjustment of the algorithm are affected negatively. If the forgetting factor approaches 1, the stability and misadjustment improve, but the parameter tracking capability decreases.

Adaptive filters are used in system identification mode in many applications. In system identification mode, the estimated values of the adaptive filter parameters converge to their correct values, and the measurement noise can be recovered from the signal estimation error. This indicates that the system identification process is working correctly. However, in some cases the measurement noise corrupts the output signal of the filter. In this case, called "the leakage phenomenon", the system identification process does not work properly (Paleologu et al., 2008). The mathematical expression of the leakage signal can be derived simply, but the results obtained for different algorithms may not be the same. The leakage term depends on the forgetting factor, the filter length, and other filter parameters, if any.

The leakage phenomenon is a fundamental problem that negatively affects adaptive filtering applications by increasing the misadjustment. For example, the leakage signal causes a residual error and inefficient cancellation of the noise signal in adaptive noise cancellation applications, or imperfect rejection of the echo signal in echo cancellation applications (Haykin, 2002; Paleologu et al., 2008; Ciochină et al., 2009). The effects of the forgetting factor and filter length on the leakage phenomenon of the RLS and RI algorithms have been studied in detail (Ciochină et al., 2009; Ahmad et al., 2011b). In this article, it is aimed to perform a similar leakage analysis for the RSOR algorithm.
In this paper, the leakage signal for the RSOR algorithm is formulated and analyzed with respect to the forgetting factor and the other filter parameters in section 2. The simulation results obtained for the RSOR algorithm are compared with the RLS and RI algorithms in section 3, and some conclusions are given in section 4.

System Identification Using The RSOR Algorithm

In the system identification process given in Figure 1, an algorithm adjusts the parameters of the adaptive filter. The system output and the adaptive filter output are given as

y(n) = w^T x(n)    (1)
ŷ(n) = ŵ^T(n) x(n)    (2)

The parameter vectors of the system and the adaptive filter, and the data input vector, are defined as follows, respectively:

w = [w_1, w_2, ..., w_N]^T    (3)
ŵ(n) = [ŵ_1(n), ŵ_2(n), ..., ŵ_N(n)]^T    (4)
x(n) = [x(n), x(n−1), ..., x(n−N+1)]^T    (5)

where N is the filter length and x(n) is the input signal. The desired signal d(n) is tracked by the output of the filter, and it is written as

d(n) = w^T x(n) + v(n)    (6)

where v(n) is the measurement noise with zero mean and variance σ_v², independent of the input signal x(n). The error signal e(n) is defined as

e(n) = d(n) − ŷ(n)    (7)

The estimated parameters satisfy the normal equation

R(n) ŵ(n) = p(n)    (9)

where the exponentially weighted correlation matrix R(n) and cross-correlation vector p(n) can be computed as

R(n) = Σ_{i=1}^{n} λ^{n−i} x(i) x^T(i),  p(n) = Σ_{i=1}^{n} λ^{n−i} x(i) d(i)    (10)

but in practice are updated using the following recursive equations:

R(n) = λ R(n−1) + x(n) x^T(n)    (11)
p(n) = λ p(n−1) + x(n) d(n)    (12)

The RSOR algorithm solves the normal equation (9) using one SOR iteration during a sampling interval, as follows (Hatun and Koçal, 2012, 2017):

ŵ_k(n) = (1 − ω) ŵ_k(n−1) + (ω / R_kk(n)) [ p_k(n) − Σ_{j<k} R_kj(n) ŵ_j(n) − Σ_{j>k} R_kj(n) ŵ_j(n−1) ],  k = 1, ..., N    (13)

Equations (11), (12) and (13) are used for the implementation of the RSOR algorithm. The parameter ω is known as the relaxation parameter and should be taken between 0 < ω < 2 for the stability of the SOR iteration. If ω > 1 is chosen, the SOR iteration converges faster than the Gauss-Seidel iteration (Golub and Van Loan, 1996).

Quantitative Expression of The Leakage Phenomenon for The RSOR Algorithm

The RSOR algorithm in the scalar updating form (13) can be represented in the following vector updating form (Hatun and Koçal, 2012):

ŵ(n) = M^{−1}(n) [ N(n) ŵ(n−1) + p(n) ]    (14)

which is based on the splitting R(n) = M(n) − N(n) used in the classical SOR method (Golub and Van Loan, 1996), with M(n) = D(n)/ω + L(n) and N(n) = (1/ω − 1) D(n) − U(n). A convergence analysis of the RSOR algorithm can be performed in a more tractable manner by decomposing the symmetric correlation matrix R(n) into its strictly lower triangular, diagonal and strictly upper triangular parts:

R(n) = L(n) + D(n) + U(n)

Defining the parameter estimation errors as a vector

Δw(n) = ŵ(n) − w    (17)

and subtracting the correct parameter vector w from both sides of (14), the following iteration is obtained for the parameter error vector after some rearrangement (Hatun and Koçal, 2012):

Δw(n) = M^{−1}(n) N(n) Δw(n−1) + M^{−1}(n) [ p(n) − R(n) w ]    (18)

By rewriting the vector quantity in the second term as

p(n) − R(n) w = Σ_{i=1}^{n} λ^{n−i} x(i) v(i)    (19)

and substituting in (18), the following result is obtained for the parameter error vector:

Δw(n) = M^{−1}(n) N(n) Δw(n−1) + M^{−1}(n) Σ_{i=1}^{n} λ^{n−i} x(i) v(i)    (20)

A similar analysis gives the following iteration for the RI algorithm (Ahmad et al., 2011b; Salman et al., 2017):

Δw(n) = [ I − μ(n) R(n) ] Δw(n−1) + μ(n) Σ_{i=1}^{n} λ^{n−i} x(i) v(i)

The following expression is also given for the RLS algorithm (Ciochină et al., 2009):

Δw(n) = R^{−1}(n) Σ_{i=1}^{n} λ^{n−i} x(i) v(i)    (22)

Considering (18), the error is represented as

e(n) = v(n) − x^T(n) Δw(n)    (24)

The usage of (20) in (24) gives an estimate of the leakage term for the RSOR algorithm based on the data used and the filter parameters; this term l(n) = −x^T(n) Δw(n), which is caused by v(n), leaks to the output signal of the filter:

ŷ(n) = y(n) − l(n)    (25)

On the other hand, using (6) and (10), the vector p(n) is represented as

p(n) = R(n) w + Σ_{i=1}^{n} λ^{n−i} x(i) v(i)    (26)

and thus the normal equation (9) is rewritten as

R(n) ŵ(n) = R(n) w + Σ_{i=1}^{n} λ^{n−i} x(i) v(i)    (27)

This equation also gives the parameter error vector in (22) for the RLS algorithm. Assuming λ is close to 1 and n is high enough, the following assumption can be written:

Σ_{i=1}^{n} λ^{n−i} x(i) v(i) ≅ 0    (28)

Using this assumption, as the time index goes to infinity or a large value, n → ∞, the normal equation (27) becomes

R(n) ŵ(n) ≅ R(n) w    (29)

and consequently the parameter estimation vector ŵ(n) is close to the correct parameters w, namely ŵ(n) ≅ w. Thus, the filter output reproduces the system output, ŷ(n) ≅ y(n), and the measurement noise can be close to the error signal, e(n) ≅ v(n).
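To make the update concrete, the following is a minimal, illustrative Python/NumPy sketch of one RSOR time step, following the reconstructed equations (11)-(13) above. The function name and the default values of λ and ω are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def rsor_update(w_hat, R, p, x, d, lam=0.98, omega=1.2):
    """One time step of the RSOR adaptive filter.

    Updates the exponentially weighted estimates R(n) and p(n), then
    performs a single SOR sweep on the normal equation R(n) w = p(n).
    """
    # Recursive updates (11) and (12), with forgetting factor lam
    R = lam * R + np.outer(x, x)
    p = lam * p + x * d

    # One SOR sweep (13): components already updated in this sweep
    # (indices j < k) are used immediately, the rest are from n-1
    N = len(w_hat)
    w_new = w_hat.copy()
    for k in range(N):
        s = R[k, :k] @ w_new[:k] + R[k, k + 1:] @ w_hat[k + 1:]
        w_new[k] = (1 - omega) * w_hat[k] + omega * (p[k] - s) / R[k, k]

    e = d - w_new @ x  # error signal e(n)
    return w_new, R, p, e
```

In practice R would be initialized as δI with a small δ > 0 so that the diagonal entries used in the division are nonzero during start-up.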
Thus, according to the leakage definition in (24), the measurement noise does not produce a leakage. Under the assumption (28), the parameter error iteration (20) is reduced to

Δw(n) = M^{−1}(n) N(n) Δw(n−1)    (30)

If λ < 1 or n is not high enough, the second term in (27) is not equal to the zero vector, and therefore the time-averaged normal equation (27) cannot be reduced to (29) and the parameter error iteration cannot be reduced to (30). According to (17), the parameter estimates then fluctuate around the correct parameter values. Thus, according to the leakage definition in (24), the measurement noise v(n) causes a leakage term l(n). If there is a leakage term l(n) ≠ 0, which leaks to the adaptive filter output as given in (25), the measurement noise cannot be close to the error signal, i.e., e(n) ≠ v(n), and an accurate result cannot be obtained by solving the normal equation. The leakage signal is increased by low values of the forgetting factor and by higher values of the filter length (Ciochină et al., 2009; Ahmad et al., 2011b). If no leakage occurs, then the system identification process works properly, with l(n) = 0.

Analysis of The Leakage Signal for The RSOR Algorithm

Taking the statistical expectation of the normal equation (27), the following is written:

E{R(n) ŵ(n)} = E{R(n)} w + E{ Σ_{i=1}^{n} λ^{n−i} x(i) v(i) }    (31)

and using the assumption in (28), this equation can be reduced to

E{R(n) ŵ(n)} ≅ E{R(n)} w    (32)

Based on the independence assumption, which considers the adaptive parameters to be statistically independent of the input signal (Haykin, 2002), the following approximation can be written:

E{R(n) ŵ(n)} ≈ E{R(n)} E{ŵ(n)}    (33)

By performing a statistical expectation analysis, it can also be shown that the RSOR algorithm produces E{ŵ(n)} = w as n → ∞, asymptotically. Taking the statistical expectation of (20), the parameter error iteration for the RSOR algorithm can be written as

E{Δw(n)} ≈ E{M^{−1}(n) N(n)} E{Δw(n−1)} + E{ M^{−1}(n) Σ_{i=1}^{n} λ^{n−i} x(i) v(i) }    (34)

Assuming the expected value of the correlation matrix estimate converges to its deterministic value at steady state,

lim_{n→∞} E{R(n)} = R / (1 − λ)    (35)

the following approximations can be written:

E{M(n)} ≅ M̄ = D̄/ω + L̄    (36)
E{N(n)} ≅ N̄ = (1/ω − 1) D̄ − Ū    (37)

Using these approximations in (34), the following result is obtained:

E{Δw(n)} ≈ M̄^{−1} N̄ E{Δw(n−1)} + M̄^{−1} E{ Σ_{i=1}^{n} λ^{n−i} x(i) v(i) }    (38)

Under the assumption in (28), and considering that the input signal x(n) is uncorrelated with the measurement noise v(n), the iteration (38) is reduced to

E{Δw(n)} ≈ M̄^{−1} N̄ E{Δw(n−1)}    (39)

If all eigenvalues of the iteration matrix M̄^{−1} N̄ of the RSOR algorithm are smaller than 1 in magnitude, this iteration converges to the zero vector asymptotically as n → ∞ (Hatun and Koçal, 2012, 2017). Thus, the expected value of the parameter estimates for the RSOR algorithm converges to the correct parameter vector (3), namely E{ŵ(n)} = w as n → ∞, asymptotically.

A stochastic convergence analysis of the leakage signal can be performed by taking the statistical expectation of the leakage term in (24). Considering the independence assumption, i.e., that the adaptive parameters are statistically independent of the input signal, the following approximation can be written:

E{l(n)} = −E{x^T(n) Δw(n)} ≈ −E{x(n)}^T E{Δw(n)}    (40)

Because the input signal is zero mean, the expected leakage term converges to zero whether or not the parameter error vector converges to the zero vector. A more complicated noise sequence can also be considered as the measurement noise v(n), for example a contaminated Gaussian noise of the form

v(n) = v_g(n) + b(n) v_i(n)

where b(n) v_i(n) is known as a Bernoulli-Gaussian impulsive noise sequence, v_g(n) and v_i(n) are independent Gaussian noise sequences with zero means and variances σ_g² and σ_i², respectively, and b(n) is a switching sequence of zeros and ones, modelled by a Bernoulli random process with probability P_r.
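The contaminated-Gaussian model above is straightforward to synthesize; a short sketch follows (function name and parameter values are illustrative placeholders, not from the paper):

```python
import numpy as np

def contaminated_gaussian(n, sigma_g=0.5, sigma_i=5.0, pr=0.01, rng=None):
    """Bernoulli-Gaussian impulsive measurement noise:
    v(n) = v_g(n) + b(n) * v_i(n), with b(n) ~ Bernoulli(pr)."""
    rng = np.random.default_rng() if rng is None else rng
    v_g = rng.normal(0.0, sigma_g, n)   # background Gaussian component
    v_i = rng.normal(0.0, sigma_i, n)   # impulsive Gaussian component
    b = rng.random(n) < pr              # Bernoulli switching sequence
    return v_g + b * v_i
```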
SIMULATION RESULTS

In this section, adaptive filtering algorithms were used to identify the impulse response of an unknown system. A low-frequency sine wave, v(n) = 0.5 sin(0.01n), with a sampling interval T = 1 s, was added to the output signal measurements in the first three simulations in order to see whether the measurement noise can be recovered from the estimation error and to observe the leakage signal. Thus, it can be observed whether the error is close to the noise. In the simulations, the RSOR algorithm was compared with the RLS and RI algorithms, with appropriate initial values used for the initialization phase of each algorithm. (A self-contained sketch reproducing the flavour of this setup is given after the conclusions below.)

In the first simulation, the effect of the leakage phenomenon was analyzed and compared for the RLS, RSOR and RI algorithms with different forgetting factors and a constant filter length (N = 8). The obtained results were given in Figure 2 for the RLS algorithm, in Figure 3 for the RSOR algorithm, and in Figure 4 for the RI algorithm. The leakage values of the algorithms for different λ values (with N = 8) were compared in Table 1.

In the second simulation, the effect of the leakage phenomenon was analyzed and compared for the RLS, RSOR and RI algorithms with different filter lengths and a constant forgetting factor (λ = 0.96). The obtained results were given in Figure 5 for the RLS algorithm, in Figure 6 for the RSOR algorithm, and in Figure 7 for the RI algorithm. The leakage values of the algorithms for different N values (with λ = 0.96) were given in Table 2.

In the third simulation, the effect of the leakage phenomenon was analyzed for the RSOR algorithm with different values of the relaxation parameter, with a constant filter length (N = 8) and a constant forgetting factor (λ = 0.9). The obtained results were given in Figure 8. The leakage values for different ω values (with λ = 0.9 and N = 8) were presented in Table 3.

In the fourth simulation, a zero-mean Gaussian noise sequence was used as the measurement noise. The error and leakage signals were given in Figure 9 for the RLS algorithm, in Figure 10 for the RSOR algorithm, and in Figure 11 for the RI algorithm. The leakage values of the algorithms for different forgetting factor values were compared in Table 4.

In the fifth simulation, a Gaussian noise v(n) = m + ṽ(n) with mean m = 1 and variance σ² = 0.25 was added to the output signal measurements. The same initial values were used as in the previous simulation. The error signal e(n) and the leakage signal l(n) = e(n) − v(n) were given in Figure 12 for the RLS algorithm, in Figure 13 for the RSOR algorithm, and in Figure 14 for the RI algorithm. The obtained leakage values of the algorithms for different forgetting factor values were compared in Table 5.

In the sixth simulation, a zero-mean impulsive (Bernoulli-Gaussian) noise sequence was used as the measurement noise. The error and leakage signals were given in Figure 15 for the RLS algorithm, in Figure 16 for the RSOR algorithm, and in Figure 17 for the RI algorithm. The leakage values of the algorithms for different forgetting factor values were given in Table 6.

The following findings were obtained from the simulation results for the RSOR algorithm:

- A sine wave was used as the measurement noise in the first three simulations, and it was observed that the measurement noise can be recovered from the estimation error when the variance of the leakage signal is low.
- The leakage value increases as the forgetting factor and the filter length increase.
- The leakage value decreases as the relaxation parameter increases.
- The leakage value of the RSOR algorithm is lower than that of the RLS algorithm and higher than that of the RI algorithm. The RI algorithm has a decreasing step-size parameter and therefore produces a lower leakage value in steady state.
- A zero-mean Gaussian noise sequence was used as the measurement noise in the fourth simulation, and similar results were obtained.
- In the fifth simulation, a non-zero-mean Gaussian noise was used as the measurement noise, and similar results were obtained; however, higher leakage values were obtained for all algorithms compared to the fourth simulation.
- A zero-mean impulsive noise sequence was used as the measurement noise in the sixth simulation, and similar results were obtained. However, although a measurement noise with a higher variance was used, lower leakage values were obtained for all algorithms compared to the fifth simulation.

CONCLUSIONS

In this paper, a quantitative representation for the estimation of the leakage phenomenon in the RSOR algorithm was presented. The leakage analysis was performed in the system identification setup. It was shown that the quantity of the leakage depends on the forgetting factor, the filter length, and the relaxation parameter. The theoretical results were verified by computer simulations. The obtained results have shown that the forgetting factor and the filter length increase the leakage, while the relaxation parameter decreases it.
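As flagged in the simulation section, the following self-contained Python/NumPy sketch reproduces the flavour of the first simulation (RSOR in system identification mode with a sine-wave measurement noise), using the equations reconstructed above. The unknown system, random seed and parameter values are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N, lam, omega, steps = 8, 0.9, 1.2, 5000
w_true = rng.normal(size=N)            # unknown system (illustrative)
R, p = 0.01 * np.eye(N), np.zeros(N)   # small regularization at start-up
w_hat, x_buf = np.zeros(N), np.zeros(N)
leak = []

for n in range(steps):
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = rng.normal()            # zero-mean white input signal
    v = 0.5 * np.sin(0.01 * n)         # sine-wave "measurement noise"
    d = w_true @ x_buf + v             # desired signal d(n)

    R = lam * R + np.outer(x_buf, x_buf)
    p = lam * p + x_buf * d
    w_new = w_hat.copy()
    for k in range(N):                 # one SOR sweep on R(n) w = p(n)
        s = R[k, :k] @ w_new[:k] + R[k, k + 1:] @ w_hat[k + 1:]
        w_new[k] = (1 - omega) * w_hat[k] + omega * (p[k] - s) / R[k, k]
    w_hat = w_new

    e = d - w_hat @ x_buf              # error signal e(n)
    leak.append(e - v)                 # leakage l(n) = e(n) - v(n)

print("steady-state leakage variance:", np.var(leak[steps // 2:]))
```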
A single session of a beach volleyball exergame did not improve state anxiety level in healthy adult women

This study evaluated the acute effect of the exergame Kinect Sports® beach volleyball on state anxiety level in adult women. Thirty healthy adult women (age: 21 [4] years, body mass: 54.70 [19.50] kg, height: 1.61 ± 0.05 m, and body mass index: 21.87 [5.76] kg/m²; data are expressed as median [interquartile range] and as mean ± standard deviation) were assigned to play a beach volleyball exergame session in single-player mode (intervention session) for ~30 min using the Xbox 360 Kinect®, or to remain seated (control session). State anxiety was evaluated before and after the intervention and control sessions through the State-Trait Anxiety Inventory. State anxiety in both sessions (exergame and control) was classified as intermediate before (median: 36.00 [IQR: 4.75] and mean: 38.73 ± 7.23, respectively) and after (mean: 34.86 ± 6.81 and mean: 37.66 ± 8.44, respectively). The Friedman test found no significant time effect on state anxiety across the sessions (χ²[3] = 6.45, p-value = 0.092, Kendall's W = 0.07, "trivial"). In conclusion, the present study showed that there were no significant differences in state anxiety level after an acute session of the exergame beach volleyball.

Introduction

Anxiety disorders are among the most common psychiatric conditions, with an estimated 264 million people with anxiety disorders worldwide in 2015, corresponding to 3.6% of the population [1]. Brazil, in 2015, presented the highest prevalence of anxiety disorders in the world, corresponding to approximately 19 million individuals (9.3% of the Brazilian population) [1]. Anxiety is defined as a state of excessive concern, anticipation of the future, and panic [2,3] and has two classifications: state anxiety (transitory feelings, acute) and trait anxiety (usual feelings, chronic) [4,5].

A mild and infrequent level of anxiety is a normal emotion during life, because any individual worries about aspects such as personal or family problems, health, and money [6] that do not meet the duration or intensity criteria for a clinical diagnosis [7]. In the clinical case, symptoms of anxiety can impair quality of life and daily activities, such as work performance, relationships, and schoolwork [8]. For an individual with an anxiety disorder, the symptoms do not go away and can worsen over time [6]. Therefore, the study of therapeutic tools that might help to prevent and treat anxiety disorders is necessary.

In this context, physical exercise can be an important non-pharmacological tool to prevent and treat various outcomes, such as anxiety and sleep problems [9-12]. Conversely, physical inactivity is associated with the development of mental illness [9-11]. Indeed, previous studies investigating the acute and chronic effects of physical exercise showed improvements in anxiety symptoms [9,11,13-17]. However, most of the population remains physically inactive [18-21]. Guthold et al. evaluated data from nearly two million participants and showed that, globally in 2016, more than a quarter of all adults were not getting enough physical activity [20]. The main reasons reported by the population are lack of time, lack of company, lack of adequate climate, fear of injury, fear of leaving home unaccompanied, lack of motivation, physical tiredness, and lack of enjoyment [22-26].
To overcome these barriers, a new modality of physical exercise has emerged, named exergames. An exergame is a modality of active videogame combining exercise and gaming [27-29] that can be used at home. The literature has shown that exergames may be more enjoyable than traditional programs of physical exercise [16,30]. Exergames can simulate various types of physical exercise, for example walking/running, sports (e.g., beach volleyball, basketball, bowling, yoga, tennis, table tennis, boxing), calisthenics exercises, and dance [16,17,31-33]. In addition, many studies have shown that exergames [16,17,33] elicit physiological responses classifying the physical exercise as moderate intensity according to the guidelines of the American College of Sports Medicine (ACSM) [34]. Therefore, it is reasonable to assume that exergames can be a useful tool to manage symptoms of anxiety.

Previous studies have investigated the acute effects of exergames on state anxiety, and the results were not conclusive. Viana et al. evaluated the acute effect on state anxiety and enjoyment of a moderate-intensity session of the exergame Zumba® Fitness on the Xbox 360 Kinect® in single-player mode in young women and found a decrease in state anxiety after the session and a high enjoyment level [17]. Morais et al. also evaluated the effect of the exergame Zumba® Fitness on the Xbox 360 Kinect® on state anxiety and enjoyment by comparing the state anxiety level after the dance exergame session with that after traditional aerobic exercise (walking/running on a treadmill); the authors found a reduction in state anxiety only after the dance exergame session [35]. Conversely, da Silva et al. compared the effect of the exergame Hollywood Workout on the Xbox 360 Kinect® with a traditional calisthenics session on state anxiety level in healthy adult men and found no decrease in state anxiety after either session [33].

Notably, the main limitation of these studies was the lack of a non-exercise control group. To the best of our knowledge, no previous study has evaluated the acute effect of an exergame session on state anxiety while including a non-exercise control session in adult women. Recently, Viana et al. conducted a systematic review and meta-analysis to investigate the effects of exergames on anxiety level and found that, although exergame interventions resulted in improvements in anxiety levels, they were not superior to the effects of non-exercise control interventions. According to the authors, the existing studies have small sample sizes, different research designs, and different populations [36]. Therefore, further studies are warranted to investigate the acute effects of exergames on state anxiety level.

Thus, the present study aimed to evaluate the acute effect of the exergame Kinect Sports® beach volleyball on state anxiety in healthy adult women. The secondary aims were to assess the affectivity, enjoyment, and exercise intensity elicited by the exergame beach volleyball session. Considering that previous studies showed a decrease in state anxiety in adult women after one session of a dance exergame, we hypothesized that one session of the exergame Kinect Sports® beach volleyball in single-player mode would also provide a decrease in state anxiety in healthy adult women [16,36].
Participants

An a posteriori power analysis was performed using G*Power (version 3.1.9.7; Franz Faul, University of Kiel, Germany) [37]. Based on a moderate correlation between repeated measurements (r = 0.5) and a partial eta squared (η²p) of 0.29 (converted to an effect size f of 0.639), the sample size of 30 participants provided a statistical power of ≥ 95%.

The participants were recruited through social media (Instagram® and WhatsApp®), direct contact, and announcements on institutional websites. Thirty-four healthy adult women participated in the study (a convenience sample) (Table 1). The inclusion criteria were (i) female sex and (ii) age between 18 and 40 years. The exclusion criteria were (i) contraindications to physical activity (assessed using the Physical Activity Readiness Questionnaire); (ii) diagnosis of mood and/or anxiety disorders; (iii) use of stimulants (e.g., psychotropic drugs); and (iv) being in the menstrual period.

Four participants were excluded from the study because they had an illness with repercussions on mood. The participants were not clinically diagnosed with anxiety, depression, and/or other mental disorders, according to a self-report made by each participant. The literature shows that anxiety symptoms do not appear only in people with anxiety disorders and can also affect non-clinical populations, impairing quality of life and daily activities, such as work performance, relationships, and schoolwork. In this sense, strategies for managing these symptoms are desirable. Furthermore, although the effect of exercise on anxiety is well established, a large part of the population is physically inactive. Therefore, exercise programs considered more fun, such as exergames, should be investigated.

In this study, we evaluated only young adult women because the literature shows that anxiety is more prevalent in this population than in men [1]. Informed consent was obtained from all participants included in this study. All experimental procedures were approved by the Research Ethics Committee of the Federal University of Goiás (approval number: 01970818.8.0000.5083) and followed the principles outlined in the Declaration of Helsinki. The flow diagram of the study is presented in Fig. 1.

Study design

This was an experimental within-participants study composed of three visits. At the first visit, the participants underwent anamnesis, trait anxiety assessment, anthropometric assessment, cardiorespiratory fitness assessment, and randomization of the subsequent visits. In addition, all participants were familiarized with a match of the exergame beach volleyball for approximately six minutes. At the second and third visits, the participants played a session of the exergame beach volleyball in single-player mode (intervention) or remained seated (control session) for 30 min, with the order depending on randomization. During and at the end of each match, heart rate (HR) and rating of perceived exertion (RPE) were measured to characterize the exercise intensity of the session. State anxiety and affectivity level were assessed before and after the exergame and control sessions. Furthermore, enjoyment and future engagement possibility were evaluated only after the exergame session. In order to avoid bias, the scales and questionnaires were applied by a researcher experienced with these tools. During data collection, the researcher did not express facial or behavioral reactions to the participants' responses.
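The effect size conversion mentioned above follows the standard relation between partial eta squared and Cohen's f, f = √(η²p / (1 − η²p)); a quick check in Python reproduces the reported value:

```python
import math

eta_sq_p = 0.29
f = math.sqrt(eta_sq_p / (1 - eta_sq_p))  # Cohen's f from partial eta squared
print(round(f, 3))  # 0.639, as reported for the G*Power input
```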
Experimental procedures

Anamnesis and anthropometric assessment

The anamnesis was performed through the Physical Activity Readiness Questionnaire (PAR-Q). The PAR-Q contains seven questions to evaluate the general health condition of the participants and whether they were fit to perform exercise. If a participant answered "yes" to one or more questions, the participant was excluded from the study [38]. Body mass was measured using a digital balance (Omron, HN-289, USA) to the nearest 0.1 kg, and body height was measured using a wall stadiometer (Caumaq, Brazil) to the nearest 0.1 cm. Thereafter, body mass index was calculated by dividing body mass by body height squared (kg/m²).

Cardiorespiratory fitness assessment

Cardiorespiratory fitness of the participants was assessed through the Ebbeling test, performed on a motorized treadmill (ATL, Inbramed, Brazil). This test was chosen because of its easy application and low cost, and because it provides a valid and time-efficient method for estimating maximal oxygen uptake (r² = 0.92) [39]. The Ebbeling protocol is composed of two four-minute stages [39]. During the first four minutes of the protocol, participants walked or ran at a 0% slope at a speed corresponding to an HR range from 50 to 70% of the maximal HR (HRmax) predicted for age. For the last four minutes of the protocol, the treadmill slope increased to 5% while the speed remained the same [39]. HR was measured at the end of the protocol. After the protocol, the predicted maximal oxygen uptake (V̇O₂max) was estimated through the equation below:

V̇O₂max = 15.1 + 21.8 × speed − 0.327 × HR − 0.263 × speed × age + 0.00504 × HR × age + 5.98 × sex

where speed is expressed in miles per hour, HR in beats per minute (bpm), age in years, and sex is 0 for women or 1 for men.

Exergame and control session

The console used in this study was the Xbox 360 (Microsoft®, USA). The Xbox has a movement sensor, the Kinect® (Microsoft, USA). This sensor allows players to interact with videogames without the need for a remote control. The exergame Kinect Sports® includes six sport modalities (i.e., beach volleyball, soccer, track and field, bowling, boxing, and table tennis). In the current study, participants played beach volleyball, because several muscles are recruited to maintain a player's performance during a match [40]. This exergame is inherently competitive, with a possible win or lose outcome. The visits were separated by an interval of 24-72 h (wash-out period). The participants were instructed to visit the laboratory wearing appropriate clothes to perform physical exercise, to refrain from eating for two hours before exercising, and to abstain from caffeine, alcohol, and strenuous physical activity on the day of the experiment. The temperature in the laboratory ranged from 21 to 23 °C. Each participant was supervised by an experienced researcher. Furthermore, conversation was minimized during all data collection periods, and the presence of people was restricted to the researchers and the participant involved in the study. During the control session, the participants remained seated for 30 min. The participants were not allowed to read, study, listen to music, or use their smartphones during the sessions. The exergame session lasted approximately 30 min, and the number of sets varied across participants so as to match the duration of the control session.
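A small Python helper implementing the Ebbeling equation above may be useful; the function name and example inputs are illustrative assumptions:

```python
def ebbeling_vo2max(speed_mph: float, hr_bpm: float, age_years: float, sex: int) -> float:
    """Estimated VO2max (ml/kg/min) from the Ebbeling single-stage
    treadmill test; sex is 0 for women and 1 for men."""
    return (15.1 + 21.8 * speed_mph - 0.327 * hr_bpm
            - 0.263 * speed_mph * age_years
            + 0.00504 * hr_bpm * age_years + 5.98 * sex)

# Example: a 25-year-old woman walking at 3.5 mph with HR 150 bpm
print(round(ebbeling_vo2max(3.5, 150, 25, 0), 1))  # ~38.2 ml/kg/min
```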
State-trait anxiety assessment

Anxiety of the participants was assessed through the state component of the State-Trait Anxiety Inventory (STAI), an instrument already translated and validated for Brazilian Portuguese [5,41]. Briefly, the STAI is a 40-item self-reported assessment scale made up of two 20-item anxiety subscales (state and trait). The state subscale describes an individual's feelings at a particular time, whereas the trait subscale describes an individual's usual feelings. Each item of the STAI is given a score of 1 to 4, and overall scores can range from a minimum of 20 to a maximum of 80. A score equal to or lower than 30 indicates a low level of state anxiety, a score ranging from 31 to 49 indicates an intermediate level, and a score of 50 or higher indicates a high level of state anxiety [41]. The STAI was answered by the participants inside a sound-attenuated room. The STAI was chosen because of its easy application and low cost.

Physical exercise intensity assessment

Physical exercise intensity was monitored by measuring the participants' HR and RPE. HR was monitored using an HR monitor (H10, Polar, Finland). HRmax was estimated using the following age-predicted equation: HRmax = 208 − 0.7 × age [42]. RPE was monitored using the Borg Scale (6-20) [43]. The classification of exercise intensity followed the criteria adopted by the ACSM [34].

Enjoyment assessment

Enjoyment was assessed through the Physical Activity Enjoyment Scale, a scale translated and validated for Brazilian Portuguese [44]. This scale has 18 items, and each item has two opposite poles separated by a 7-point Likert scale (1 = "I like"; 7 = "I hate"; 4 = "neutral"). Scores range from a minimum of 18 (no enjoyment at all) to a maximum of 126 (highest enjoyment level). The scale was applied only after the exergame session [44].

Statistical analysis

The Shapiro-Wilk test was used to test data normality. As the state anxiety and affectivity data did not present a normal distribution, the Friedman test was used to evaluate state anxiety level and affectivity level across times (pre-exergame session, post-exergame session, pre-control session, and post-control session). When necessary, Conover's post hoc test was used to identify the differences between times. Kendall's W was used as the effect size for the Friedman test. Kendall's W values were classified, following Cohen's benchmarks, as "trivial" (< 0.10), "small" (0.10 ≤ to < 0.30), "medium" (0.30 ≤ to < 0.50), and "large" (≥ 0.50) [50]. The variables age, body mass, body mass index, future engagement possibility, RPE, number of matches, and game time presented non-normal distributions, whereas body height, trait anxiety, V̇O₂max, enjoyment level, mean HR, percentage of HRmax, number of wins, state anxiety after the exergame session, and state anxiety before and after the control session presented normal distributions. Parametric data are presented as mean ± standard deviation, and non-parametric data as median and interquartile range (IQR). All data were analyzed with Jeffrey's Amazing Statistics Program (JASP, version 0.16.4, University of Amsterdam, Netherlands). The level of significance assumed was α = 0.05.

Results

There were no medical intercurrences during the study. The median number of matches played by the participants was 6 [IQR: 1], while the mean number of wins was 4.73 ± 2.10. The median game time reached by the participants was 29.47 [IQR: 2.30] minutes.
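The Friedman test and its Kendall's W effect size are easy to reproduce; the sketch below uses SciPy, with random placeholder data standing in for the real scores. As a sanity check of the relation W = χ²/(n(k − 1)): with the reported χ²(3) = 6.45, n = 30 and k = 4 time points, W = 6.45/(30 × 3) ≈ 0.07, matching the value reported in the abstract.

```python
import numpy as np
from scipy.stats import friedmanchisquare

# State anxiety scores (rows: participants; columns: the four time points
# pre/post exergame, pre/post control) -- illustrative random data
rng = np.random.default_rng(1)
scores = rng.integers(20, 60, size=(30, 4))

stat, p = friedmanchisquare(*scores.T)
n, k = scores.shape
kendalls_w = stat / (n * (k - 1))   # effect size for the Friedman test
print(f"chi2({k - 1}) = {stat:.2f}, p = {p:.3f}, W = {kendalls_w:.2f}")
```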
Enjoyment and future engagement possibility

The participants reported a high enjoyment level after the exergame session (mean: 109.60 ± 11.72), corresponding to 87% of the maximal value of the enjoyment scale used. In relation to future engagement in physical exercise, the participants reported a strong to moderate intention for the exergame session (median: 75 [IQR: 30]).

Exercise intensity

The mean HR presented by the participants during the exergame session was 133 ± 18 bpm, corresponding to 69 ± 9% of their HRmax. The median RPE reported by the participants during the exergame session was 12 [IQR: 5].

[Fig. 2: State anxiety before and after a session of the exergame beach volleyball in single-player mode and the control session. There was no significant difference between times (p-value = 0.092).]

[Fig. 3: Affectivity level before and after a session of the exergame beach volleyball in single-player mode and a non-exercise control session. There was a significant difference between times (p-value < 0.001); significant difference compared to the pre-exergame time.]

Discussion

The present study evaluated the acute effect of a single session of the exergame Kinect Sports® beach volleyball versus a non-exercise control session on state anxiety and affectivity in healthy adult women. Additionally, the enjoyment, future engagement, and exercise intensity of the exergame Kinect Sports® beach volleyball were evaluated. Our initial hypothesis was that the exergame Kinect Sports® beach volleyball would provide a decrease in state anxiety. However, contrary to our initial hypothesis, a single session of the exergame Kinect Sports® beach volleyball did not improve state anxiety level in healthy adult women. Therefore, our results did not confirm our initial hypothesis.

Viana et al. and Morais et al. evaluated the acute effect on state anxiety and enjoyment of a session of the exergame Zumba® Fitness on the Xbox 360 Kinect® in healthy young women. Both studies found reductions in state anxiety after the exergame session [17,35]. This pattern has been common in the literature on exergames and anxiety; in other words, previous studies found significant improvements in anxiety level that were, however, not superior to the effect of non-exercise control sessions [36]. In the present study, we found no differences in state anxiety between the exergame and non-exercise control sessions. Our results may be influenced by the type of exergame investigated, since we used a beach volleyball exergame, while Viana et al. and Morais et al. used a dance exergame [17,35]. We expanded the findings of Viana et al. and Morais et al., since we included a non-exercise control session and compared it with an exergame session [17,35].

Recently, de Oliveira et al. evaluated the acute effect of the exergame Kinect Sports® beach volleyball on the Xbox 360 Kinect® in single-player mode on the state anxiety of adult men, and the authors found no significant decrease in state anxiety after the exergame session [32]. da Silva et al.
compared the acute effects of an exergame-based calisthenics session versus a traditional calisthenics session on anxiety level in healthy adult men [33]. The authors did not find significant differences between the sessions. In the present study, we also found no significant reduction in state anxiety level after the exergame beach volleyball session. Thus, our findings were similar to those of previous studies [32,33]. The literature has frequently shown that moderate-intensity physical exercise and a high enjoyment level after physical exercise can be important factors in improving anxiety level [16,17,21,51-59]. In the present study, exercise intensity was classified as moderate according to the HR and RPE obtained during the exergame session, and the participants reported a high enjoyment level after the exergame session [34]. Our findings add to the literature that moderate-intensity physical exercise (performed through the exergame Kinect Sports® beach volleyball) and a high enjoyment level seem to be insufficient to improve state anxiety in healthy adult women. Therefore, the decrease in anxiety seems to be related to the type of exergame.

A bias that might influence the results of studies about anxiety is the recruitment of persons with average or lower levels of state anxiety, a situation called the floor effect [9]. In this case, it would not be possible to decrease anxiety level further. Indeed, Knapen et al. showed that individuals with anxiety disorders show a larger decrease in state anxiety after physical exercise [60]. In the present study, participants were healthy women and presented an intermediate, i.e. normal, level of state anxiety. This fact may have contributed to the results that were found.

A secondary aim of our study was to evaluate the affectivity level of the exergame session. We found no significant differences in affectivity level between the exergame and non-exercise control sessions. Nevertheless, there was a significant improvement in affectivity level after the exergame session; however, it was no longer present when compared to the non-exercise control session. da Silva et al. compared the acute effects of an exergame-based calisthenics session versus a traditional calisthenics session on affectivity level in healthy adult men [33]. The results showed that affectivity was classified as "well" after both sessions, with no significant differences between the sessions. Our results are in line with those reported by da Silva et al., since the affectivity level was similar between the exergame and non-exercise control sessions [33]. The participants' characteristics, self-efficacy, and tolerance of the type of exercise may explain the results found regarding the affectivity level [33,61]. Our findings add that the exergame beach volleyball is a modality that acutely did not improve affectivity level.

The present study also evaluated the enjoyment level and future engagement of the exergame session. Participants reported a high level of enjoyment (corresponding to 87% of the maximal possible score) and rated future engagement after the exergame session as a strong to moderate intention. These results may be related to the fact that younger adult individuals are more receptive to innovative technology [18]. Physical exercises that provide a high level of enjoyment and future engagement can increase constancy and adherence to physical exercise programs [22-26].
Indeed, a high enjoyment level has been common among various exergames [16,17,32,33,62]. Our findings add that the exergame beach volleyball is a modality that presents high enjoyment and a strong to moderate future engagement possibility.

Finally, as mentioned previously, the exergame beach volleyball evoked mean HR and RPE responses corresponding to moderate-intensity physical exercise based on the criteria established by the ACSM. A previous study found a different exergame intensity (light intensity) using the same exergame protocol adopted in the present study [32]. These contradictory results might be related to the fact that those authors recruited healthy and physically active men (V̇O₂max ≅ 52 ml/kg/min), while we recruited only healthy women (estimated V̇O₂max ≅ 33 ml/kg/min) [32]. Thus, it is possible that the exergame beach volleyball from Kinect Sports® did not challenge the cardiorespiratory fitness level of the participants enrolled in the previous study mentioned above [32]. Moreover, moderate intensity has been found in a dance-based exergame in adult women [16,17]. Thus, according to our results, the exergame Kinect Sports® beach volleyball can be prescribed as an alternative for healthy adult women who seek to reach the amount of physical exercise recommended by the ACSM [34].

Our study is not without limitations. First, as we used questionnaires and scales, the results rely on the honesty of the participants. Second, we did not investigate a clinical population with a diagnosis of anxiety disorders. Third, this study investigated only healthy adult women and tested only one modality of the exergame Kinect Sports®; therefore, caution should be taken when extrapolating our results to other exergames. Fourth, we did not examine any endocrine responses (e.g., increased norepinephrine, serotonin, and beta-endorphins and increased parasympathetic activity) and/or psychological mechanisms (e.g., increased self-efficacy, distraction, and a sense of mastery) that are responsible for reducing state anxiety. Nevertheless, we believe that these limitations do not prevent the conclusions of the present study from being drawn. Future studies should investigate the acute and chronic effects of the exergame Kinect Sports® beach volleyball on state anxiety in individuals with anxiety disorders, as well as include participants of both sexes and different ages, to better elucidate this matter.

Conclusions

The present study showed that there were no significant differences in state anxiety level and affectivity level after an acute session of the exergame beach volleyball. However, the exergame beach volleyball elicited high enjoyment and presented a strong to moderate future engagement possibility.
Impact of sentinel lymph-node biopsy and FDG-PET in staging and radiation treatment of anal cancer patients

To assess the role of sentinel lymph-node biopsy (SLNB) and FDG-PET in staging and radiation treatment (RT) of anal cancer patients. This retrospective study was performed on 80 patients (male: 32, female: 48) with a median age of 60 years (39-89 years) with anal squamous cell carcinoma who were treated from March 2008 to March 2018 at the IRCCS San Raffaele Hospital. Patients without clinical evidence of inguinal LN metastases and/or with discordance between clinical evidence and imaging features were considered for SLNB. FDG-PET was performed in 69/80 patients. Patients with negative imaging in the inguinal region and a negative SLNB could avoid RT on the groin to spare inguinal toxicity. The CTV included the GTV (primary tumour and positive LNs) and pelvic ± inguinal LNs. PTV1 and PTV2 corresponded to the GTV and CTV, respectively, with a 0.5 cm margin. The RT dose was 50.4 Gy/28 fractions to PTV2 and 64.8 Gy/36 fractions to PTV1, delivered with 3DCRT (n = 24) or IMRT (n = 56), concomitant with Mitomycin-C and 5-FU chemotherapy. FDG-PET showed inguinal uptake in 21/69 patients (30%) and was negative in 48/69 patients (70%). Lymphoscintigraphy was performed in 11/21 positive patients (4 patients: SLNB confirmed inguinal metastases; 6 patients: false positive; 1 patient: SLN not found) and in 29/48 negative patients (5/29 showed metastases, 23/29 true negative, 1 SLN not found). Sensitivity, specificity, positive and negative predictive values of FDG-PET were 62%, 79%, 40% and 82%, respectively. The median follow-up time from diagnosis was 40.3 months (range: 4.6-136.4 months): 69 patients (86%) showed a complete response, 10 patients (13%) a partial response, and 1 patient (1%) stable disease. Patients treated on the groin (n = 54) versus not treated (n = 26) showed more inguinal dermatitis (G1-G2: 50% vs. 12%; G3-G4: 17% vs. 0%, p < 0.05). For patients treated on the groin, G3-G4 inguinal dermatitis, stomatitis and neutropenia were reduced with the IMRT compared with the 3DCRT technique (13% vs. 36%, p = 0.10; 3% vs. 36%, p = 0.003; 8% vs. 29%, p = 0.02, respectively). SLNB improves FDG-PET inguinal LN staging in guiding the decision to treat the inguinal nodes. The IMRT technique significantly reduced G3-G4 toxicities when patients were treated on the groin.

Accurate staging is crucial for radiotherapy (RT) treatment planning, aiming to select patients with localized disease. Fifteen to 25% of patients with anal cancer show inguinal LN metastases, while 44% of all positive metastatic LNs were reported to be smaller than 5 mm 2 , and this is the main reason explaining the low sensitivity of clinical examination in the detection of positive inguinal LNs. In a retrospective analysis of 270 patients who did not receive prophylactic inguinal irradiation, after a median follow-up of 72 months, the incidence of inguinal metastases was 8% 3 . Two retrospective studies reported the inguinal recurrence rate in patients staged with conventional examinations who did not receive prophylactic inguinal irradiation: Blinde et al. 4 showed an inguinal recurrence rate of 0% in T1, 10% in T2 and 20% in T3-T4 patients with a median follow-up of 65 months, and Ortholan et al. 5 reported an inguinal recurrence rate of 12% for T1-T2 and 30% for T3-T4 patients with a median follow-up of 61 months.
As a consequence, the prescription of prophylactic inguinal irradiation based only on the T stage would cause unnecessary irradiation of about 70-80% of T3-T4 patients, submitting them to a risk of severe acute toxicity, especially when using conformal RT techniques 6 . Conversely, about 10% of T1-T2 patients would be undertreated if inguinal irradiation were not prescribed. Thus, better detection of initial inguinal LN involvement would be important for selecting those patients who really deserve inguinal irradiation. Recently, FDG-PET was shown to detect more abnormal inguinal LNs than clinical 7 , inguinal ultrasound 8 or CT 9-11 examination, while sentinel lymph-node biopsy (SLNB) has been shown to be superior to both CT and FDG-PET in detecting metastatic LNs 12 .

Prophylactic RT including inguinal LN irradiation was shown to decrease inguinal metastases in several series 3,5 but is unavoidably associated with increased toxicity. Better patient selection and the use of advanced RT techniques, such as intensity-modulated RT (IMRT) and image-guided RT (IGRT), may reduce toxicity 13 . In the present study we report our experience on the role of SLNB compared to FDG-PET in terms of disease staging, and the clinical impact of inguinal irradiation with the use of advanced RT techniques.

Methods and materials

Characteristics of patients. Eighty patients with histologically proven anal SCC treated between March 2008 and March 2018 were enrolled in a retrospective study. Patients' ages ranged from 39 to 89 years, with a median of 60 years. Of these, 32 (40%) were male and 48 (60%) were female. Only 27/80 (34%) were HIV positive, and 21/27 (78%) of them were male. The main characteristics of the patients are summarized in Table 1. Seven patients with limited metastatic disease were enrolled after collegial discussion and approval, despite the protocol violation (3 patients: external iliac LNs; 2 patients: liver, 1 of them liver + bone; 2 patients: retroperitoneal LNs). The protocol was approved by our Institutional Ethical Committee (IRCCS San Raffaele Hospital Clinical Research Office), and all patients signed an informed consent. A retrospective comparison between the FDG-PET inguinal LN evaluation and the SLNB results was made.

SLNB protocol and treatment. Patients without clinical evidence of inguinal LN involvement or with discordance between diagnostic imaging and clinical examination were considered for the SLNB protocol. SLNB is a minimally invasive procedure consisting of submucosal injection of Technetium-99-labelled radiocolloid around the primary tumour; patients in whom inguinal tracer uptake is detected undergo radio-guided surgical removal of the inguinal sentinel LNs. The methods relative to lymphoscintigraphy, surgical removal and histopathological examination of the sentinel LNs have been described previously 14,15 .

After positioning supine on a Comby-Fix®, patients underwent contrast-enhanced CT and FDG-PET simulation. FDG-PET was used both for staging and for target definition. The CTV included the GTV (primary tumour and any positive LNs), the ischiorectal fossa, the mesorectum, and the internal and common iliac LNs up to the L5-S1 space. As inguinal LN irradiation is the standard treatment in anal cancer, all patients enrolled in this study with a positive SLNB were treated on the groin.
Patients who had a negative SLNB and negative imaging in the inguinal region were considered for RT without groin irradiation, so as to spare inguinal toxicity; patients were also involved in the final clinical decision. PTV1 and PTV2 were defined as the GTV and CTV, respectively, expanded with a margin of 0.5 cm. The median prescribed dose was 50.4 Gy in 28 fractions (1.8 Gy/fraction) to the PTV2, and 64.8 Gy in 36 fractions, delivered as a sequential or concomitant boost, to the PTV1. The RT techniques consisted of 3DCRT or IMRT (Volumetric Modulated Arc Therapy (VMAT) RapidArc or Helical Tomotherapy). The planned concomitant chemotherapy schedule was continuous-infusion 5-FU 1,000 mg/m² delivered from day 1 to 4 and from day 29 to 32, combined with Mitomycin-C 10 mg/m² delivered on days 1 and 29.

Analyses. Sensitivity, specificity, area under the curve (AUC), and positive and negative predictive values of FDG-PET against SLNB were assessed in the subgroup of patients who underwent both examinations. Acute toxicities were scored according to the NCI-CTC for Adverse Events (Version 3); treatment interruptions were also reported. In order to assess the gain from avoiding treatment of the inguinal LNs, a comparison in terms of inguinal LN outcome and toxicity was made between the inguinal RT ("IRT") and non-inguinal RT ("NIRT") groups, also taking into account the RT technique (3DCRT vs. IMRT).

The resulting sensitivity, specificity, positive and negative predictive values of FDG-PET were 62%, 79%, 40% and 82%, respectively, as reported in Table 3. In the "NIRT" group the percentage of patients with T1-T2 disease was higher than the percentage of patients with T3-T4 disease (73% vs. 27%, p = 0.002), while in the "IRT" group the rates were similar (53% vs. 47%, respectively), as reported in Table 4.

Discussion

CRT is the standard treatment for anal SCC. The indication for prophylactic inguinal irradiation is, however, controversial. According to the two largest retrospective studies 4,5 , it is clear that deciding whether to treat the inguinal LNs based only on the T stage carries a large risk of over/under-treatment. Better detection of early inguinal LN involvement could properly select patients who really deserve inguinal irradiation. FDG-PET detects more abnormal inguinal LNs than clinical examination 7 , inguinal ultrasound 8 , and CT 9-11 and, more generally, changes the TNM stage in 41% of patients with anal cancer; thus it has been recommended for initial staging 16 . The low incidence of metachronous metastases and the side effects after radiotherapy may not justify a prophylactic treatment. A refined staging system with precise identification of disease extent could allow individualized therapy, ensuring accurate coverage of the disease while sparing disease-free organs.

The feasibility and efficacy of SLNB have been addressed by several reports, and the clinical utility of this procedure in improving disease staging, selecting patients for inguinal radiation and changing the therapeutic plan has also been outlined 14,15 . To our knowledge, only the study by Mistrangelo et al. 1 compared FDG-PET and SLNB, in 27 patients. Pathologic inguinal uptake was found in 7 patients. SLNB confirmed inguinal metastases in 3/7 (42%) vs. our 4/10 (40%) patients, with 4/7 (57%) false positives vs. 6/10 (60%) in our patients, and did not show inguinal metastases in the 20 patients with no pathologic inguinal uptake.
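As a worked check of the 2×2 diagnostic-accuracy analysis, the sketch below computes the standard metrics from the counts given in the abstract (4 true positives, 6 false positives, 23 true negatives, 5 false negatives). Specificity (79%), PPV (40%) and NPV (82%) reproduce the reported values; the sensitivity from these raw counts (4/9 ≈ 44%) differs from the 62% reported, which may reflect a different denominator in the original analysis. The function name is illustrative.

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard 2x2 diagnostic-accuracy metrics."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# FDG-PET vs. SLNB counts reported in the study
print(diagnostic_metrics(tp=4, fp=6, fn=5, tn=23))
```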
They reported positive and negative predictive values of 43% and 100%, respectively, quite consistent with the values found in the current study (40% and 82%, respectively; Table 3), although, differently from our findings, which showed 18% false PET negatives, they did not report any false PET negatives. Engledow et al. 17 reported pathologic inguinal uptake in 9/40 patients; fine-needle aspiration or conventional histology confirmed inguinal metastases in 7/9 (78%) patients. Despite the relatively small number of patients in these studies and in ours, the results are quite consistent and show that: (a) although FDG-PET is recommended for initial staging, its results on inguinal LNs should be interpreted with caution, given the large number of false positives; (b) SLNB can further improve inguinal staging by reducing the FDG-PET false positive and false negative rates. In the present study, FDG-PET notably showed a false positive rate of 60% (6/10 patients) and a false negative rate of 18% (5/28 patients) when compared to SLNB. For these reasons, SLNB can be considered a good reference when evaluating new modalities such as FDG-PET. In particular, the reduction of the false negative rate could be very important, especially in stage T1-T2, as a positive SLNB would suggest to the radiation oncologist that the groins should be treated even in the presence of a negative FDG-PET.

Mistrangelo et al. 12 reviewed the literature and found 6 studies reporting the SLNB positivity rate stratified by T stage: summing their results with the data from all these studies, SLNB found metastases in 6/27 (22%) T1 patients and in 17/98 (17%) T2 patients. In our population, selecting T1 and T2 patients without considering N stage, SLNB found metastases in 2/14 (14%) and 6/30 (20%) patients, respectively. Considering patients with T1 and T2 N0 stage at diagnostic imaging, who would probably be selected to skip inguinal RT, SLNB showed metastases in 1/9 (11%) and 1/12 (8%) patients, respectively. In short, with the limitation of the small number of patients in the present study and in the literature, SLNB seems to be the most effective procedure to select patients for inguinal RT. Interestingly, with a median OS of 48.5 months (1.3-109.2 months), none of the 26/80 patients who did not receive inguinal irradiation had an inguinal relapse. Considering that the median time to inguinal recurrence has been reported to be around 16 months 3 , the absence of inguinal relapse in the "NIRT" group indicates that SLNB is likely able to identify the true negative patients.

Another relevant issue concerns the impact of advanced RT techniques in sparing normal tissues with/without treating the inguinal LNs. In the current study, the only toxicity significantly more frequent in the "IRT" group versus the "NIRT" group was inguinal dermatitis. Interestingly, the increase in inguinal dermatitis in the "IRT" group did not translate into an increased rate of treatment suspension, nor into a longer duration of suspension, compared to the "NIRT" group. It is important to note that the median dose at suspension was 37.7 Gy, lower in the "IRT" group than in the "NIRT" group (34.4 Gy vs. 44.7 Gy, respectively). Genito-urinary and gastro-intestinal toxicities were similar in the two groups; in particular, G3-G4 diarrhoea occurred in 3/54 and 2/26 patients, respectively.
The incidence of G ≥ 3 diarrhoea and genito-urinary toxicity in the Mitomycin-C groups reported by the two more recent phase III trials of CRT for anal cancer was 9% and 2% in ACT II 18 , and 23% and 11% in RTOG 9811 6 , respectively. Both trials used conventional RT techniques. The use of IMRT seems to reduce the incidence of G ≥ 3 GI toxicity to a range from 0% 19 to 11-15.1% 20,21 , with most studies reporting an incidence of 7 to 9% 22,23 , without impairing efficacy compared with traditional RT techniques 24 . A comparable range of G ≥ 3 GI toxicity, from 5 to 10%, was reported with the use of VMAT 25 and Tomotherapy 26 . IMRT, VMAT and Tomotherapy also seem to reduce G ≥ 3 genito-urinary toxicity to 0-3% 13,19-21,23,24,27 . Similarly to the two recent phase III trials, all the above-reported trials using IMRT, VMAT or Tomotherapy treated the inguinal LNs; thus it is impossible to ascertain whether the exclusion of the inguinal nodal area may translate into a further reduction of gastro-intestinal and genito-urinary toxicity.
Three-dimensional Model and Characterization of the Iron Stress-induced CP43′-Photosystem I Supercomplex Isolated from the Cyanobacterium Synechocystis PCC 6803* The cyanobacterium Synechocystis PCC 6803 has been subjected to growth under iron-deficient conditions. As a consequence, the isiA gene is expressed, and its product, the chlorophyll a-binding protein CP43′, accumulates in the cell. Recently, we have shown for the first time that 18 copies of this photosystem II (PSII)-like chlorophyll a-binding protein form a ring around the trimeric photosystem I (PSI) reaction center (Bibby, T. S., Nield, J., and Barber, J. (2001) Nature, 412, 743-745). Here we further characterize the biochemical and structural properties of this novel CP43′-PSI supercomplex, confirming that it is a functional unit of approximately 1900 kDa in which the antenna size of PSI is increased by 70% or more. Using electron microscopy and single particle analysis, we have constructed a preliminary three-dimensional model of the CP43′-PSI supercomplex and used it as a framework to incorporate higher resolution structures of PSI and CP43 recently derived from x-ray crystallography. Not only does this work emphasize the flexibility of cyanobacterial light-harvesting systems in response to the lowering of phycobilisome and PSI levels under iron-deficient conditions, but it also has implications for understanding the organization of the related chlorophyll a/b-binding Pcb proteins of oxychlorobacteria, formerly known as prochlorophytes. Iron is the most abundant transition metal in the crust of the earth and is an absolute requirement for photosynthetic organisms such as cyanobacteria, because it is needed for many of the redox reactions of the photosynthetic electron transport system. However, in most aquatic ecosystems its concentration can be sufficiently low to limit photosynthetic activity (1,2). This is attributed mainly to the low solubility of Fe3+ above neutral pH in oxygenic ecosystems (3). As a result, cyanobacteria and other microorganisms have evolved a number of responses to cope with frequently occurring conditions of iron deficiency (4). One such response is to express two "iron stress-induced" genes, isiA and isiB (5,6), which are located on the same operon. The isiB gene encodes flavodoxin, which can functionally replace the iron-containing ferredoxin (7). The isiA gene encodes a protein often called CP43′, because it has an amino acid sequence homologous to that of the chlorophyll a-binding protein CP43 of photosystem II (PSII) 1 (8,9). Like CP43, CP43′ is predicted to have six transmembrane helices and, judged by the conservation of histidine residues, it is likely to bind the same number of chlorophyll a molecules. The major difference is that CP43′ lacks the large hydrophilic loop that joins the luminal ends of helices V and VI of CP43. For this reason, it has 342 amino acids rather than 472 (see Fig. 1). Under iron stress, the isiA gene is transcribed into two messages, a monocistronic message containing only isiA and a dicistronic message that also contains the isiB gene (10). Although the discovery of the CP43-like iron stress-induced protein was made some time ago (11), its precise function has not been elucidated. There have been at least four postulates. (i) CP43′ aids the recovery of cells by acting as a chlorophyll store so that PSII and photosystem I (PSI) complexes can be quickly synthesized when iron becomes readily available in the environment (12).
(ii) CP43′ protects PSII from photo-induced damage by acting as a dissipater of excitation energy (13). (iii) CP43′ is a functional replacement for CP43 in PSII during iron starvation (8). (iv) CP43′ acts as a light-harvesting complex under iron stress conditions, mainly for PSII (5) but perhaps also for PSI (16). Recently we showed for the first time that a CP43′-PSI trimer supercomplex can be isolated from the cyanobacterium Synechocystis PCC 6803 when grown under iron-stressed conditions (17). Here we report a more detailed description of this supercomplex and present a preliminary three-dimensional model of its structure. MATERIALS AND METHODS Growth Conditions-All studies were conducted on preparations isolated from Synechocystis sp. PCC 6803 having a histidine tag attached to the C terminus of the PSII protein, CP47 (18). Cells were grown photoheterotrophically in mineral medium BG-11 (19), containing kanamycin and glucose, at 30°C under 70 microeinstein m−2·s−1 illumination. Iron-stressed cultures were obtained by growing cells in the same BG-11 medium but lacking iron-containing compounds. Cultures were harvested after 3 days, and in the case of the iron-starved culture, the cells had a blue shift in their long wavelength absorption band of approximately 7 nm compared with that of normal cells. Thylakoid membranes were isolated using a procedure similar to that described by Tang and Diner (20). The isolated membranes (1 mg chlorophyll·ml−1) were solubilized with 1% β-D-dodecyl maltoside at 4°C for 10 min and centrifuged at 45,000 rpm using a Beckman Ti70 rotor. The supernatant was then passed through a Ni2+ affinity column. Given that CP47 had a histidine tag, PSII was selectively bound to the column while the non-bound fraction containing PSI was collected. Continuous sucrose density gradients were prepared according to the freeze-thaw method provided by Hankamer et al. (21). The PSI-enriched fraction eluted from the affinity column was layered on top of the gradient and subjected to 12 h of centrifugation in a SW28 rotor at 26,000 rpm. The resulting bands were independently removed for biochemical and structural characterization. The separation of the PSI fraction into discrete populations for estimating molecular masses was also accomplished by size exclusion high performance liquid chromatography using a Phenomenex BioSep SEC S3000 column coupled to a Kontron high performance liquid chromatography system. Elution profiles were monitored at 670 nm and 280 nm to detect chlorophyll protein-containing fractions. Biochemical Characterization-SDS-polyacrylamide gel electrophoresis and Western blotting were performed as described in Hankamer et al. (21). Optical absorption spectra were measured at room temperature using a Shimadzu MPS 2000 spectrometer. Steady-state fluorescence spectra were obtained using a Perkin Elmer LS50 at 77 K and measured with an excitation wavelength of 440 nm. To record fluorescence excitation spectra, the sample was excited between 650 and 700 nm, and the emission was detected at 720 nm. Electron Microscopy and Image Processing-Preparations were negatively stained with 2% uranyl acetate on glow-discharged carbon-evaporated grids and imaged using a Philips CM 100 electron microscope at 80 kV. The magnification was calibrated at ×51,500. Twenty electron micrographs were taken for each preparation and subsequently calculated to have the first minima of their contrast transfer functions in the range of 17-23 Å.
Electron micrographs were digitized using a Leafscan 45 densitometer set at a step size of 10 μm. Single particle data sets of ~3000 (CP43′-PSI supercomplex) and 4200 (PSI trimer) particles were obtained by interactively selecting all possible particles from the micrographs. All subsequent processing was performed within the IMAGIC-5 software environment (22,23). The single particle images were coarsened by a factor of 2, resulting in a sampling frequency of 3.88 Å/pixel on the specimen scale. Reference-free alignment coupled with multivariate statistical analysis was used to classify each data set to identify initial class averages. These data were then used for iterative refinement, resulting in the improved class averages. The relative orientations of the improved class averages were determined by the angular reconstitution technique (24), allowing an initial three-dimensional reconstruction to be obtained by exact back-projection (25). Reprojections were taken from this initial three-dimensional map to further refine the class averages and identify any atypical views present within the data set. The data converged after several rounds of iterative refinement in this manner, whereby roughly 40% of the class averages were discarded after assessment through cross-correlation functions. The resolution of the final three-dimensional map was determined by Fourier shell correlation (FSC) between two independent three-dimensional reconstructions (26), compensated for the C3 symmetry used (27). Molecular Modeling-Coordinate data sets were obtained from the Research Collaboratory for Structural Bioinformatics Protein Data Bank (www.rcsb.org) for the entry codes 1C51 (PSI 4 Å structure (28)) and 1FE1 (PSII 3.8 Å structure (29)). These structural models were visualized using the program Swiss-PdbViewer (Glaxo-Wellcome Experimental Research (30)) and modeled into the calculated three-dimensional map using the "O" modeling software package (31).
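As an aside, the Fourier shell correlation used above to estimate map resolution can be sketched in a few lines of numpy. This is a generic illustration assuming cubic volumes, not the authors' IMAGIC-5 implementation:

```python
# Minimal FSC sketch: two independent 3D reconstructions are compared
# shell by shell in Fourier space; resolution is read off where the
# curve drops below a chosen threshold (e.g., 0.5).
import numpy as np

def fourier_shell_correlation(vol1: np.ndarray, vol2: np.ndarray) -> np.ndarray:
    """FSC between two cubic volumes of identical shape."""
    f1 = np.fft.fftshift(np.fft.fftn(vol1))
    f2 = np.fft.fftshift(np.fft.fftn(vol2))
    n = vol1.shape[0]
    # Radius of each voxel from the Fourier-space origin.
    grid = np.indices(vol1.shape) - n // 2
    radii = np.sqrt((grid ** 2).sum(axis=0)).astype(int)
    fsc = np.zeros(n // 2)
    for r in range(n // 2):
        shell = radii == r
        num = np.real(np.sum(f1[shell] * np.conj(f2[shell])))
        den = np.sqrt(np.sum(np.abs(f1[shell]) ** 2) *
                      np.sum(np.abs(f2[shell]) ** 2))
        fsc[r] = num / den if den > 0 else 0.0
    return fsc

# With the 3.88 Å/pixel sampling quoted above, shell r corresponds to a
# spatial frequency of r / (n * 3.88) 1/Å, so the ~24 Å figure reported
# below is the shell where the FSC falls below the cutoff.
```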
RESULTS Isolation of the CP43′-PSI Supercomplex-To isolate the CP43′-PSI supercomplex from Synechocystis PCC 6803, we used a mutant that had a His tag attached to the C terminus of CP47, kindly provided by Dr. T. Bricker (Louisiana State University, Baton Rouge) (18). The mutant was grown photoheterotrophically in the presence and absence of iron in the culture medium. Thylakoid membranes were isolated and, after solubilization with 1% β-D-dodecyl maltoside, were passed through a Ni2+ affinity column. PSII was selectively bound to the column via the His tag, whereas the non-bound fraction containing PSI was collected and subjected to sucrose density gradient centrifugation. Fig. 2a shows that in the case of normal cells (Gradient A) two main chlorophyll-containing bands were observed corresponding to monomeric (band 2) and trimeric (band 3) PSI, whereas iron-stressed cells (Gradient B) gave two additional green bands (bands 1 and 4). The SDS-polyacrylamide gel electrophoresis analysis shown in Fig. 2b characterized the various bands and revealed that bands 1 and 4 contained free CP43′ and CP43′ plus PSI, respectively. Size exclusion high performance liquid chromatography analysis of the solubilized PSI fractions presented in Fig. 3a indicated that the two PSI bands obtained with normal cells corresponded to the approximate molecular masses expected for a monomeric (~356 kDa) and trimeric (1068 kDa) PSI complex (32), with the trimer being the dominant species. The additional peaks observed after iron stress correspond to native chlorophyll-binding CP43′ (~47 kDa) and a high molecular mass chlorophyll-containing species of approximately 1900 kDa, indicative of a CP43′ and PSI supercomplex. Also of importance is that the level of the PSI trimer in iron-stressed cells was significantly reduced compared with that of normal cells when normalized against the monomeric level of PSI. Spectral Characterization-The room temperature optical absorption spectra of the isolated PSI trimer, CP43′, and the CP43′-PSI supercomplex are shown in Fig. 4. The PSI trimer has a long wavelength absorption maximum at 680 nm as compared with 670 nm for isolated free CP43′. As expected, the CP43′-PSI band has a maximum absorption at the intermediate wavelength of 673 nm. The high level of absorption in the 450-500 nm region in the case of CP43′ is because of its copurification with free carotenoid in the sucrose density gradients (see asterisk in Fig. 2a). Fluorescence measured at 77 K showed that the PSI trimer had an emission maximum at 720 nm, whereas CP43′ fluoresced maximally at 685 nm (Fig. 5a). However, in the case of the CP43′-PSI supercomplex, the emission profile was similar to that of PSI with the exception of some weak emission at approximately 685 nm. Upon the addition of 0.1% Triton X-100, this weak signal at 685 nm changed to the dominant emission (Fig. 5b), indicating that the detergent had uncoupled CP43′ from PSI and therefore suggesting that in the untreated sample, energy is efficiently transferred from CP43′ to PSI. Sucrose density gradient analyses showed that indeed the Triton X-100 treatment converted the CP43′-PSI band into trimeric PSI and free CP43′ (data not shown). Further confirmation that CP43′ within the CP43′-PSI supercomplex was functionally coupled to PSI was made by measuring excitation spectra for 77 K fluorescence emission at 720 nm (data not shown). Structure of the CP43′-PSI Supercomplex-We have shown previously by electron microscopy and single particle analysis that the CP43′-PSI supercomplex is composed of a PSI trimer surrounded by a ring of 18 subunits of CP43′ (17). The structural model presented in this initial report (17) was obtained by analyzing top views only, but other views were also observed in the electron micrographs, including side elevations and views attributed to tilting particles. We have, therefore, taken advantage of these other views to obtain a range of class averages of the supercomplex, and a three-dimensional model has been calculated. Fig. 6a shows nine typical class averages taken from 76 class averages that show a range of orientations as derived from a 3000-particle data set. All 76 class averages were used to construct the three-dimensional model, representing ~2200 single particles. This three-dimensional model is shown in Fig. 6b as surface-rendered views and at the same orientation as the class averages given in Fig. 6a. It clearly indicates that the central PSI trimer is surrounded by 18 CP43′ subunits. According to Fourier shell correlation analysis (Fig. 6c), the three-dimensional model has a resolution of approximately 24 Å. It is quite apparent from the comparison of the class averages that both stromal and luminal views were incorporated into the three-dimensional model. For example, the feature marked with an arrow in the class average numbered 1 in Fig. 6a is displaced to the right within each of the PSI monomers, whereas in the class average numbered 4 it is displaced to the left.
This finding indicates that these two two-dimensional class averages are derived from particles imaged from different sides of the supercomplex. Thus, the relative orientations of these two averages differ by ~180°. This difference is present in the three-dimensional model and therefore is a reliable feature within all the class averages used. From this observation, we conclude that the supercomplex can orientate itself with either its luminal or stromal surface toward the carbon grid and that the three-dimensional model reflects this fact. Although the present three-dimensional model of the CP43′-PSI supercomplex has potential for further refinement by electron cryomicroscopy of non-stained vitrified samples, it does provide a framework in which to model the structures of PSI (28) and CP43 (29) obtained by x-ray crystallography of complexes isolated from the thermophilic cyanobacterium Synechococcus elongatus. The structure of the PSI trimer is now at a resolution of 2.5 Å (32), but at the time of submitting this paper only the 4-Å model of Krauß et al. (28) was available in the database. In the case of CP43, the 3.8-Å data are available (29) and have been used to model the six transmembrane helices and the positioning of the tetrapyrrole headgroups of chlorophyll a. We have taken the models of the PSI trimer and CP43 derived by x-ray crystallography and built them into the three-dimensional electron microscopy map of the CP43′-PSI supercomplex, as shown in Figs. 7 and 8. DISCUSSION As a consequence of iron starvation, Synechocystis PCC 6803 expresses its isiA and isiB genes. Concomitant with this gene expression is a drop in the level of PSI (33) and phycobiliproteins (5). We have found that in addition to these well recognized responses to iron limitation, Synechocystis forms a supercomplex composed of a ring of 18 copies of the CP43′ protein surrounding a PSI trimer. The CP43′-PSI supercomplex was isolated by sucrose density centrifugation, and size exclusion chromatography estimated its molecular mass to be approximately 1900 kDa. This mass is consistent with that predicted by the calculation for a PSI trimer (1068 kDa) plus 18 copies of the CP43′ protein (846 kDa). Assuming that each CP43′ subunit binds at least 12 chlorophylls, as does CP43 (29), the CP43′ antenna ring of the PSI supercomplex would contain 216 or more chlorophylls. It is for this reason that the optical absorption spectrum of this supercomplex is significantly different from that of the PSI trimer alone. The chlorophyll a molecules bound within the CP43′ protein have a long wavelength absorption maximum at approximately 670 nm. Therefore, the long wavelength absorption peak shifts from 680 nm for PSI to 673 nm for the CP43′-PSI supercomplex. Some free CP43′ in the supercomplex preparation could also contribute to this blue shift, but fluorescence measurements suggest that this contamination is not significant. [Fig. 7 legend fragment: PSI structure (27) built from coordinate data set 1C51, in green; the six transmembrane helices of CP43 from the PSII structure (28), 1FE1, in red. Bar represents 5 nm.] When isolated, the CP43′ protein has a relatively high fluorescence yield at 77 K peaking at 685 nm. Although some emission at this wavelength was detected from the CP43′-PSI supercomplex, the PSI low temperature fluorescence peaking at 720 nm was the dominating emission. Only after the addition of 0.1% Triton X-100 to dissociate the CP43′ protein from the PSI trimer was a large fluorescence emission seen at 685 nm from the supercomplex. Therefore, we conclude that the chlorophylls within the CP43′ ring are excitonically coupled to those within the PSI trimer core. Given that the PSI trimer binds almost 300 chlorophyll a molecules (32), we can conclude that the additional 216 chlorophylls in the CP43′ ring increase the light-harvesting capacity of the PSI reaction centers within the supercomplex by at least 70% (a quick check of this arithmetic is sketched below).
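A back-of-the-envelope check of these figures, using only the masses and chlorophyll counts assumed in the text (a minimal sketch, not an independent measurement):

```python
# Masses and stoichiometry as stated above.
psi_trimer_kda = 1068          # trimeric PSI
cp43p_kda = 47                 # one CP43' subunit
n_subunits = 18                # subunits in the antenna ring

ring_kda = n_subunits * cp43p_kda             # 846 kDa
supercomplex_kda = psi_trimer_kda + ring_kda  # 1914 kDa ~ the ~1900 kDa estimate

chl_per_cp43p = 12             # lower bound, assumed equal to CP43
chl_psi_trimer = 300           # "almost 300" chlorophyll a per trimer
ring_chl = n_subunits * chl_per_cp43p         # 216 chlorophylls
gain = ring_chl / chl_psi_trimer              # ~0.72, i.e. "at least 70%"
print(supercomplex_kda, ring_chl, f"{gain:.0%}")  # 1914 216 72%
```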
It has previously been suggested that CP43′ could act as an additional antenna of PSI (16). The results presented here and elsewhere (17) clearly show that in response to iron deprivation, Synechocystis induces an additional antenna system for PSI. The processing of top views of the CP43′-PSI supercomplex indicates that the 18 CP43′ subunits do not form a perfect ring, because the PSI trimer is not circular. The three-dimensional model presented in Fig. 6b was constructed using a number of top, intermediate, and side views, showing that the supercomplex has a diameter of approximately 330 Å and a thickness of ~80 Å in negative stain. Because the hydrophobic surfaces of the supercomplex must have a detergent layer, the true diameter is likely to be slightly less. The two-dimensional class averages (Fig. 6a) and three-dimensional reconstruction (Fig. 6b) reveal rather flat stromal and luminal surfaces, which is not expected for the stromal surface, since the PSI trimer normally binds the extrinsic PsaC, PsaD, and PsaE proteins. However, we are confident that the three-dimensional model is composed of characteristic stromal and luminal views because differences can be observed in the internal density distribution of the PSI monomers within different two-dimensional class averages, indicative of different orientations on the carbon grid (see Fig. 6a). The absence of the expected surface structural features attributed to PsaC, PsaD, and PsaE is emphasized when the x-ray structure of the PSI trimer is modeled into the three-dimensional model of the CP43′-PSI supercomplex (see Fig. 7). There is a possibility that the extrinsic proteins are dislodged by the uranyl acetate-staining procedure used before imaging in the electron microscope. It would be highly desirable to obtain a three-dimensional model for the supercomplex using non-stained samples and electron cryomicroscopy, which would remove any effects of uranyl acetate. However, there is the alternative possibility that the PSI reaction centers within the supercomplex do not bind all three extrinsic proteins. Under iron-stressed conditions flavodoxin replaces ferredoxin as the PSI electron acceptor. This may lead to modifications in the levels and binding affinities of the three extrinsic proteins. For example, flavodoxin can act as a PSI electron acceptor in the absence of PsaD and PsaE, whereas ferredoxin cannot (35). The modeling of the CP43′-PSI supercomplex using the x-ray structures of the PSI trimer and of the PSII CP43, however, does provide a framework to start to understand how the chlorophylls of the CP43′ ring transfer energy to those bound within the PSI trimer. In Fig. 8, we have modeled the positions of the chlorophylls derived from the x-ray crystallography into the CP43′-PSI complex. For convenience, we have assumed that the transmembrane helix and chlorophyll organization is the same in CP43′ and CP43, and that helices V and VI of CP43′ are located closest to the reaction center core, as they are in the case of CP43 within the PSII structure (36,37). [FIG. 8 legend: Preliminary model of chlorophyll organization of the CP43′-PSI supercomplex using coordinates from the 4-Å x-ray PSI trimer structure (27) and the 3.8-Å PSII CP43 structure (28). The model is based on the coordinate data sets 1C51 (PSI) and 1FE1 (PSII). The chlorophylls of the PSI trimer are shown in green and those of CP43 in red. The modeled chlorophylls of CP43′ come closest to those of PSI at the regions marked with asterisks.]
The resulting model is consistent with our finding that the chlorophyll molecules of the CP43′ ring and PSI trimer are sufficiently close to facilitate energy transfer to the PSI reaction centers. Of particular note is that there seem to be possible entry points for energy transfer (see asterisks in Fig. 8) corresponding to chlorophyll interdistances of 12-18 Å. The three entry points for each monomer seem to involve chlorophyll a molecules clustered close to helices c and d of the PsaB (single asterisk) and PsaA proteins (triple asterisk) and those probably associated with the PsaJ protein (double asterisk) (32). However, this modeling is preliminary and will be improved with a better resolution of the three-dimensional map of the CP43′-PSI supercomplex and by having access to the coordinates of the 2.5-Å x-ray model of PSI (32). The results presented here raise the question of why cyanobacteria increase the antenna size of PSI under iron-stressed conditions. One possibility is that it is a compensatory response to the lowering of the PSI and phycobiliprotein levels (4). With an extra antenna system, PSI can increase its rate of photochemical charge separation. This is important at light-limiting intensities, a situation which is usually found in the euphotic zone where cyanobacteria live. Our results are not only important for understanding how certain cyanobacteria respond to iron stress but also have implications with regard to the organization of light-harvesting systems of the chlorophyll a/b-containing oxyphotobacteria, formerly known as prochlorophytes (38). These green, oxygenic prokaryotes contain pcb genes that encode chlorophyll a/b-binding proteins similar to the IsiA/CP43′ protein. Although there is usually only one isiA gene in phycobilisome-containing cyanobacteria, the number of pcb genes differs between the three known classes of prochlorophytes, Prochloron, Prochlorothrix, and Prochlorococcus, and even between different strains of Prochlorococcus (39,40). It would seem highly probable that one or more of these genes will encode chlorophyll a/b-binding proteins, which form a Pcb-PSI structure similar to that described here for the CP43′-PSI supercomplex. Indeed, we have recently shown that Prochlorococcus strain SS120 does have an 18-subunit Pcb ring around its PSI trimer (14). In this case, the ring is rich in Chl b with a Chl a/b ratio of approximately 1. It should be noted that the IsiA protein is not only structurally homologous to CP43 and the Pcb proteins but also to the CP47 subunit of PSII and the N-terminal domains of the PsaA and PsaB reaction center proteins of PSI. However, like CP43, CP47 differs from the other members of the chlorophyll-binding six-transmembrane helical protein superfamily by having a large loop joining the luminal ends of helices V and VI. The function of this special feature of CP43 and CP47 is not yet understood but probably was important for the evolution of the water oxidation activity of PSII.
In the absence of a high resolution structure of PSII, little is known about the location of these loops relative to the manganese cluster, which forms the catalytic center for the water-splitting reactions. Finally, it is worth noting that prochlorophytes, like cyanobacteria, have trimeric PSI (15,34), which would be a requirement for the formation of the antenna ring and, indeed, could be the functional significance of the trimerization of the PSI reaction center complex in prokaryotes in general.
Clinical Psychology and the COVID-19 Pandemic: A Mixed Methods Survey Among Members of the European Association of Clinical Psychology and Psychological Treatment (EACLIPT) Background: The COVID-19 pandemic has affected people globally both physically and psychologically. The increased demand for mental health interventions provided by clinical psychologists, psychotherapists and mental health care professionals, as well as the rapid change in work setting (e.g., from face-to-face to video therapy), has proven challenging. The current study investigates European clinical psychologists' and psychotherapists' views on the changes and impact on mental health care that occurred due to the COVID-19 pandemic. It further aims to explore the individual and organizational processes that assist clinical psychologists and psychotherapists in their new working conditions, and to understand their needs and priorities. Method: Members of the European Association of Clinical Psychology and Psychological Treatment (EACLIPT) were invited (N = 698) to participate in a survey with closed and open questions covering their experiences during the first wave of the pandemic from June to September 2020. Participants (n = 92) from 19 European countries, mostly employed in universities or hospitals, completed the online survey. Results: Qualitative and quantitative analyses showed that clinical psychologists and psychotherapists throughout the first wave of the COVID-19 pandemic managed to continue to provide treatment for patients who were experiencing emotional distress. The challenges (e.g., maintaining a working relationship through video treatment) and opportunities (e.g., more flexible working hours) of working through this time were identified. Conclusions: Recommendations for mental health policies and professional organizations are identified, such as clear guidelines regarding data security and workshops on conducting video therapy. • Rapid change in psychotherapy delivery occurred due to the COVID-19 pandemic. • Clinical psychologists and psychotherapists report challenges (e.g., reluctance among patients) and opportunities resulting from changes to the work environment. • Data security is crucial, as is access to treatment via video therapy. • National policy and organizational guidance is essential to support clinical psychologists and psychotherapists in their work.
Health care services globally have faced unprecedented challenges due to the COVID-19 pandemic. Alongside the physical health consequences of the COVID-19 virus, mental health problems are also increasing, with reported increases in anxiety, depression, psychological distress and sleeping problems (Bohlken et al., 2020; Liu, Heinzel, Haucke, & Heinz, 2021; Rajkumar, 2020; Salari et al., 2020; Vindegaard & Benros, 2020; Xiong et al., 2020). Furthermore, there have been an estimated additional 53.2 million cases of major depressive disorder and an estimated additional 76.2 million cases of anxiety disorders globally (Santomauro et al., 2021). As a consequence, mental healthcare needs to be prioritized, and clinical psychologists and psychotherapists 1 play an important role in the prevention and treatment of these adverse consequences of the COVID-19 pandemic. However, as yet little is known about how well clinicians and services have adapted to the increased demand and additional challenges presented by the COVID-19 pandemic, and what might be done to improve mental health care for those who have suffered psychologically as a consequence of the COVID-19 pandemic. Clinical psychologists and psychotherapists had to find rapid alternatives to face-to-face treatment, such as telephone-based or video therapy (Békés & Aafjes-van Doorn, 2020; Humer, Stippl, et al., 2020), or in-person sessions whilst adhering to their national COVID-19 containment measures from the start of the pandemic. Prior studies have shown that changes to service delivery can take an average of sixteen years to implement in a health care system (Rogers et al., 2017). In contrast, during the pandemic, change in service delivery was rapid and unexpected, and there was little supervision or guidance available for clinicians (e.g., Boldrini et al., 2020; Probst, Stippl, & Pieh, 2020). Moreover, the pandemic itself led to significantly higher stress levels in clinical psychologists and psychotherapists, especially in younger and less experienced professionals (Aafjes-van Doorn et al., 2020; Probst, Humer, Stippl, & Pieh, 2020). Additionally, fear of infection and other issues related to the pandemic itself were also reported by clinical psychologists and psychotherapists (Humer, Pieh, et al., 2020). In the midst of such rapid and unforeseen changes to practice, several reassuring and thought-provoking phenomena have been observed. For instance, preliminary evidence showed video therapy to be more effective than previously expected (Humer, Stippl, et al., 2020). Interestingly, the ability to adapt to conducting therapy via video is related to individual clinical psychologists' and psychotherapists' attitudes and is influenced by their past experiences with video therapy (Békés & Aafjes-van Doorn, 2020). Further, challenges have been reported by mental health professionals regarding the lack of interpersonal interactions, feelings of isolation and other technical issues whilst conducting therapy online (McBeath et al., 2020). The aforementioned studies provide an interesting, yet heterogeneous, picture of the impact of the COVID-19 pandemic on mental health professionals. However, most of the studies used closed questions and quantitative methods (e.g., Békés & Aafjes-van Doorn, 2020; Boldrini et al., 2020), thus limiting the possibility for participants to provide their own insight into offering psychotherapy during a global pandemic.
Professional organizations and other commissions have taken the initiative to provide the public and mental health care professionals with information regarding COVID-19 (e.g., UK 2 , Germany 3 , Austria 4 , Belgium 5 ). However, it is also important for mental health care professionals who work 'on the ground' to share their experiences, in order for organizations to find ways to best support their clinicians. The current survey aimed to gather information 'from the field' to gain an understanding of the experiences of clinical psychologists and psychotherapists working during the COVID-19 pandemic, across different European countries. Members of the European Association of Clinical Psychology and Psychological Treatment (EACLIPT) were consulted; EACLIPT is an association that aims to foster research, education and dissemination of scientifically evaluated findings on clinical psychology and psychotherapy. The current study seeks to provide a first Europe-wide insight into the perceived changes to the clinical practice and research of clinical psychologists and psychotherapists, as well as the barriers and opportunities, in order to improve support to people as part of the response to the COVID-19 pandemic. The study also aims to gather information to highlight helpful ways for clinical psychologists and psychotherapists to approach, prioritize and manage their work in the context of the pandemic. Finally, it aims to provide information on how organizations and organizational bodies (such as EACLIPT) can best adapt to pandemic-related changes. [Figure 1. Country of origin of participants. Note: n = 6 participants chose not to report their country of origin.] Procedure and Measures Socio-demographic information was collected using nine closed questions (e.g., country of origin, place of work and most commonly presenting patient need during the pandemic). Five open questions were used to gain information on perceived changes in the workplace, challenges and opportunities during the crisis, the effect of COVID-19 safety measures on their practice, and other implications. The survey was open for completion between 25th May 2020 and 1st September 2020. The last response that was included was submitted on 19th August 2020. Quantitative and Qualitative Analysis The six phases of thematic analysis (Nowell, Norris, White, & Moules, 2017) were followed by the first two authors (J.A. and S.G.), including familiarization with the data (Phase 1), generating initial codes (Phase 2), searching for themes (Phase 3), reviewing themes (Phase 4), defining and naming themes (Phase 5) and producing the report (Phase 6). The third author (J.B.) supervised their work and checked the data during Phase 4, in order to review the themes that had been generated. This enabled research bias to be evaluated and the interpretation of the data to be confirmed. The first two authors screened the answers independently in Phases 1, 2, and 3 and formed their own categories, which were then compared and agreed on, and a list of themes per question was finalized. Themes were then listed in terms of frequency for each question (a minimal sketch of this tally is given below). The authors were each based in different countries, therefore all meetings took place over remote platforms.
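As a minimal illustration of the frequency tally mentioned above, the following Python sketch counts theme codes per open question; the codes shown are invented for illustration and are not the study's data:

```python
# Hypothetical coded responses, one list of theme codes per open question.
from collections import Counter

coded_responses = {
    "changes_in_practice": ["video therapy", "hygiene measures",
                            "video therapy", "working from home",
                            "video therapy"],
    "challenges": ["technical issues", "childcare", "technical issues"],
}

# Rank themes from most to least frequent for each question.
for question, codes in coded_responses.items():
    print(question, Counter(codes).most_common())
# changes_in_practice [('video therapy', 3), ('hygiene measures', 1), ...]
```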
Regarding the overall process, reflexivity is considered a key aspect of the thematic analysis process (Nowell et al., 2017). Therefore, the first two authors kept their own reflexive journals to document the logistics and methodological considerations as well as their own personal reflections. The precise analysis was then conducted in line with the six-step technique by Braun and Clarke (2006). Results The results have been analyzed according to the six-phase method by Braun and Clarke (2006), i.e., familiarization with the data, generation of initial codes, searching for themes, reviewing themes, defining and naming themes, and producing the report. Data are organized and summarized in the results section, and the interpretation regarding significance and implications follows in the discussion. Changes in Patients Seeking Help Based on the question asking whether participants were seeing more or fewer patients, the number of patient contacts (i.e., the number of patients seen by a clinician) seemed to remain relatively stable during the first wave of the COVID-19 pandemic: 42% of participants indicated no change in the number of patient contacts. Nonetheless, almost 40% reported seeing fewer patients, while 17% reported seeing more patients. Further, 78.8% reported that patients displayed similar psychological problems as they did prior to the COVID-19 pandemic. However, 48.3% also reported that their patients seemed to be more distressed compared to one year before, whereas 28.7% reported no change in their patients' distress. The most frequently reported clinical issues among patients encompassed anxiety (86%), depression (82%), and loss of social contacts and isolation (each 39%) (see Figure 2). In terms of clinicians' working practices, most responders reported that COVID-19 had changed their work routines (73.6%), mostly in ways that they perceived to be undesirable. [Figure 2 note: not all patients were seen in standard psychotherapeutic environments, which is why alternative topics are listed as presenting problems; these refer to other medical conditions, neuropsychological testing and other non-identified topics.] Qualitative Results The overarching themes that were identified in the data were: changes to clinical practice; changes to other work activities and contexts; the challenges and opportunities; the effect of COVID-19 measures on clinical practice; and further reflections. Within these themes, the following categories were found: changes to working practices such as online working; psychotherapists' reflections on the changes and an exploration of what could be improved; and implications for clinical practice and organizations. Perceived Changes in Clinical Practice Perceived changes in clinical practice were mostly in regard to working online, e.g., conducting video therapy, and working from home. Further, several participants reported changes in treatment frequencies (more/fewer patients, more sessions per patient), hygiene measures (such as wearing face masks and social distancing in assessments), challenges in providing treatment while wearing personal protective equipment (such as face masks), redeployment, and logistical difficulties if patients were not able to use online platforms.
Citation regarding hygiene measures: "every patient has to wash first his hands, more disinfection, mouth-nose-protection, plexiglass for breath protection, safety distance, and more time and space are needed between the appointments for disinfection" Citation regarding personal protective equipment: "Wearing masks, me and patient, which is very disturbing while there is no emotional expression." Some participants also reflected on patients' concerns regarding treatment, such as more anxiety and individual differences in motivation to access online treatment. Further, therapists' concerns were also mentioned (e.g., whether their hygiene procedure is correct). Citation regarding therapists' concerns: "the first thought in every step is 'how correct is my procedure?'" Overall, changes in patient contact (i.e., fewer appointments, less face-to-face contact, more support for patients) were named. Perceived Changes to Other Work Activities and Contexts Not all participants were necessarily working in clinical practice, and changes in research and teaching were also reported. Participants noted that procedures in the work environment were modified according to COVID-19 safety measures, often leading to a lack of contact between colleagues. In an additional question, general aspects of the working environment were covered. Here, once again, digitalization was mentioned as a central change, as not only video therapy but also remote meetings with colleagues had been introduced. Citation regarding digitalization: "No face to face clinics, therefore replying on phone and video contact. Working in isolation more and away from my team to do working from home." Some participants reported that there was an increasing lack of contact between colleagues due to the increasing division in teams as a result of remote working. In answers to this question, participants also highlighted the adherence to hygiene measures in the work environment, such as social distancing, wearing masks and more cleaning. Several participants mentioned that they were mainly working from home and some were holding therapy sessions outside. Additionally, two participants were engaged in extra activities regarding COVID-19 (i.e., at a phone support line). Three participants said that there were significant changes to their research, such as delays in recruitment, or needing to stop research entirely. Challenges and Opportunities in the COVID-19 Pandemic Era The challenges and opportunities that participants reported were wide-ranging, and there was not always a clear distinction between what constituted a challenge or an opportunity (e.g., only replying "tele-therapy"); sometimes different participants reported the same issue as a challenge, whereas others saw it as an opportunity. Several participants listed the same point both as an opportunity arising from the COVID-19 pandemic and as a challenge of working during it (e.g., no commute vs. constantly working from home). Change in Work Logistics - The change to a predominantly technology-based work practice appeared to be either a challenge or an opportunity for participants. While several participants reported problems regarding technical knowledge and support or internet connection issues, as well as lack of equipment (such as laptops), remote working was also perceived by some as an opportunity to improve their own technology skills.
Citation regarding technical factors: "videocalls are more tiring, but effective" A similar pattern of both challenges and opportunities emerged for working logistics: additional childcare, the need to develop new work-related rituals, a higher strain from videocalls, and no time between meetings were mentioned. However, several positive aspects were also mentioned, such as less need to travel, more flexibility at work, and the opportunity to reach patients who may not have had the possibility to receive treatment otherwise. Some also felt that video therapy works very well, and some had been able to further develop their self-care strategies. Citation regarding working logistics: "The main challenge was managing childcare alongside working while the nurseries were closed." Citation regarding working logistics: "working from home so less MDT [multidisciplinary teams] working, not being able to provide a service for those with sensory impairments primarily hearing loss, increased competing demands on my time, my own response to COVID and lockdown and depleted resources over time. New ways of working do include being able to offer video or remote access to appointments not requiring people to travel and being able to support people who are shielding" Citation regarding working logistics: "I had to develop new rituals at the end of the work day, digital work exhausts me more than working from the office" Clinical Issues - Interestingly, the therapy-related factors also included both challenges and opportunities. Interventions for some mental health problems appeared to be more challenging to deliver online (e.g., depression, trauma) compared to the pre-pandemic face-to-face settings (e.g., difficulty in finding new options to increase activity, more insecurity in trauma treatment due to a lack of stabilizing measures). Furthermore, participants reported that they were at a greater physical distance from patients during face-to-face interactions, whereas online sessions provided less opportunity for non-verbal feedback and therapeutic engagement. Participants reported having to spend more time preparing for sessions, and there were concerns about a lack of consent and choice of therapy modality for patients. Conversely, fewer cancellations were noticed. Additionally, participants were able to receive contextual information about their patients by seeing their environment. Citation regarding therapy-related factors: "Less cancellations and non-attendance at sessions. Harder developing rapport and doing therapy without the same transference or cues." Citation regarding therapy-related factors: "video sessions allow for less non-verbal feedback/assessment (negative for diagnosis and treatment recommendation); video sessions allow impression of the patient's home environment (important context information and opportunity for the patient to illustrate problems that occur at home = positive)" Citation regarding therapy-related factors: "Reachability was better for some, but worse for others, especially mothers (closed schools) and women in abusive relationships (often had to talk in their car)" Team and Organizational Factors - Several factors were portrayed as rather challenging, as participants reported difficulties regarding social factors at work, such as a worsening of team cohesion, staff absence, staff conflicts and social isolation, although some found this to be a positive, as they could decide whom they spent time with.
Citation regarding social factors: "prevent social isolation also in my staff without forcing collaborators back to work" Furthermore, a number of organizational factors were mentioned. For example, the rapid change of regulations (e.g., weekly changes) and the lack of guidelines and unified standards were reported to make working processes even more difficult. Citation regarding organizational factors: "Trying to keep up with the constant information changes, rapid decision making and trying to look after myself too" Alongside organizational factors, data security issues were also mentioned. This often highlighted the problem of patient confidentiality and keeping data safely stored while working from home. Finally, participants reported difficulty adapting to new ways of working initially; however, this appeared to develop into a new and practiced working routine over time. Effects of the COVID-19 Pandemic Measures on Clinical Practice Participants mainly focused on the effects of COVID-19 emergency measures on their clinical practice. The general restrictions of contact, i.e., lockdown, social distancing, restricted entrance to buildings and building closures, were mentioned by half of all participants. These were often brought into close relation to other themes, such as increased psychopathology in patients. Citation regarding general restrictions: "Restrictions concerning certain hours for meeting the patients." Citation regarding general restrictions: "Full Lock-down, both in effect on my work directly and how it seeps into clients' existing struggles" Another topic mentioned was the effect of wearing masks. Citation on wearing masks: "Mask wearing - conceals the faces of both client and counselor, lack of nonverbal cues" Additionally, effects of hygiene measures on treatment were often mentioned, such as wearing protective clothing, opening windows during sessions, no face-to-face contacts and short-notice cancellations due to patient concerns about showing COVID-19 symptoms. Citation regarding hygiene measures: "Wearing protection clothing - it is necessary and important but it makes work harder" Both general restrictions issued by the state and individual restrictions in the workplace had a significant effect on participants and their work, such as closure of nurseries, quarantine, shielding, and loss of freedom, as well as concerns around travelling on public transport. Citation regarding individual restrictions: "Closure of nurseries - having to provide childcare between a working couple means a massive reduction in available working time." Participants also reported on the effects on patients, such as changes in psychological symptoms and less motivation to seek help or engage in online sessions. Citation regarding effects on patients: "People with mental health conditions hold their breath: that is do not seek help because they are afraid to get COVID-19 and because there is a pause in social life" Citation regarding effects on patients: "COVID-19 measures, at least in Italy, did not help in containing the viruses, as people were terrified by the official information, so did not ask for help or did not dare to go to hospitals, and were hampered from going to parks." Further Reflections Participants shared a variety of interesting insights into prospective changes concerning both mental health professionals and government policies.
One major theme was the wish to collect, share and discuss their experiences of using video therapy. This included both concerns (e.g., regarding effectiveness and data security) and a desire for specific training in psychotherapy delivered online. Participants also shared that they had a new understanding of the importance of being connected to their colleagues. Citation regarding sharing with colleagues: "More practice in online therapy, share data concern effectiveness of online therapy versus on said therapy" Citation regarding sharing with colleagues: "I think it would be a good idea to set up a section in Clinical Psychology in Europe [journal of the EACLIPT] and invite practicing clinical psychologists to describe their experience with new forms of work. I would be motivated by such an opportunity to contribute, and I would also learn from the experience of colleagues." In terms of policy implications, it was argued that the importance of mental health should be further promoted at national levels, particularly given the collective impact on mental health. Greater flexibility (e.g., introducing video therapy into health insurance plans) and the possibility of choosing the most suitable treatment modality (e.g., face-to-face or video) were two major points raised by respondents. Furthermore, the effects of the current pandemic on both research (e.g., regarding long-term effects on mental health) and the healthcare system were mentioned, including implications for research funding. Finally, the implementation of policy guidelines, such as the closure of nurseries and schools, resulted in a dilemma for many parents who had to work from home while caring for children. Citation regarding political implications: "Research funding being so FAST and associated huge number of reviews; being on funding panels. Existing research in NHS stopped due to redeployment." Citation regarding political implications: "Acknowledgement of the impact of COVID-19 on the mental health of the population should be acknowledged at a national/international level. Awareness needs to be raised in governments and there needs to be a way to address the increased level of distress that the population will undoubtedly experience." Citation regarding political implications: "Productivity whilst working from home is understood although not overtly acknowledged to be more limited when children to be cared for at home too which creates unnecessary guilt when torn between roles. Some managers (not mine) did not stand up to look after staff by leading and issuing clear guidance." Several participants pointed out that there will be long-term consequences of the COVID-19 pandemic on mental health.
Discussion The current study demonstrates that the COVID-19 pandemic brought about unprecedented changes in clinical practice for clinical psychologists and psychotherapists among EACLIPT members across Europe. Changes to the clinical practice of psychologists and psychotherapists were sudden, for example the digitalization of therapy, which was at odds with previous attempts to implement digital mental health approaches in healthcare (Mohr, Riper, & Schueller, 2018). Some opinions and evidence have suggested that this has been a 'black swan' moment, where the COVID-19 pandemic has led to a rapid change in how mental health care is provided, including more opportunities for online working (Wind, Rijkeboer, Andersson, & Riper, 2020), also in low- and middle-income countries (Fu et al., 2020). Additionally, the current study showed that clinical psychologists and psychotherapists managed to provide treatment throughout the COVID-19 pandemic, despite the additional challenges of working in this context, to patients who were perceived to be experiencing a greater level of distress. Although challenges were clearly identified in the current study, participants also identified opportunities from working through the pandemic, such as a reduction in commuting time, increased work flexibility and accessibility for patients. Despite such a significant change in working context for clinical psychologists and psychotherapists, only one previous study looked at the impact of the pandemic on clinical psychologists and psychotherapists and also used a mixed-methods analysis of qualitative and quantitative data (McBeath et al., 2020). In that study, clinical psychologists and psychotherapists, who were mostly based in the UK, were recruited via social media; similar to our results, the study found that clinical psychologists and psychotherapists were able to cope with the rapidly changing work and managed immediate problems with imagination and engagement. It also described a significant change in psychotherapeutic treatment, especially in relation to video therapy.
Digitalization Even though clinical psychologists and psychotherapists have not yet reached a consensus regarding whether they plan to continue using video therapy in the long run (Aafjes-van Doorn et al., 2020), the opportunities conferred via video therapy are clearly shown, both in the current study and in other research (e.g., Humer, Stippl, et al., 2020). More than ten years ago, Simpson (2009) pointed out the opportunities and challenges of video therapy, naming the lack of research regarding efficacy as one major research goal. Simpson (2009) also pointed out that efficacy might be strongly related to the patient's and therapist's personality and interpersonal style, as well as the therapist's skills and experience in the use of technology. A pilot project with university students (Simpson, Guerrini, & Rochford, 2015) and the analysis of the recent, COVID-19-induced changes (Simpson et al., 2021) clearly point out the potential of video therapy if used correctly. It seems likely that the pandemic has shaped therapists' attitudes towards technology and led to a more positive view of it now that they are more experienced in conducting treatment remotely, even if prior to COVID-19 they would not have elected to do so (Aafjes-van Doorn et al., 2020). As success is highly dependent on therapists' overall attitudes and self-confidence regarding technology and remote therapy (e.g., Aafjes-van Doorn et al., 2020), training courses and supervision in this regard are essential. As Simpson (2009) already pointed out, some barriers that prevent access to psychotherapy and counselling might be tackled with video therapy, such as geographical distance between major cities and remote and rural communities, and a lack of adequate or affordable transport between them. Furthermore, video therapy can be used by patients who are immobile (Connolly, Miller, Lindsay, & Bauer, 2020). It might also encourage patients who are indecisive about treatment and worried about stigma to engage. Finally, most studies overall tend to conclude that video therapy will not be the new standard medium for psychotherapy (e.g., Aafjes-van Doorn et al., 2020; Connolly et al., 2020), but a useful addition under certain circumstances and considering specific adaptations, such as providing a rationale for video therapy, maintaining therapeutic boundaries and finding a new way of risk management (for an overview see Simpson et al., 2021; for an exemplary analysis of patients with Borderline Personality Disorder see Ventura Wurman, Lee, Bateman, Fonagy, & Nolte, 2021).
Conducting Therapy With Personal Protective Equipment (PPE) While the changes to working as a result of video therapy clearly brought opportunities, conducting therapy in person while using protective equipment such as face masks presented significant challenges. Clinical psychologists and psychotherapists who conducted face-to-face treatment during the pandemic mostly wore face masks and thus covered more than half of their face. Thus, while the disadvantages of video therapy are avoided (e.g., no technological difficulties), other difficulties might appear: it has been argued, both in our study and in previous opinion pieces (e.g., Hüfner, Hofer, & Sperner-Unterweger, 2020), that emotions are harder to read if someone is wearing a face mask, which can then cause difficulties in the patient-therapist relationship. Interestingly, initial evidence from basic research has shown mixed findings. Some found that emotions are harder to read when the conversational partner wears a face mask (Grundmann, Epstude, & Scheibe, 2021), while others found in a longitudinal design that participants change which cues they use to detect an emotion, suggesting they adjust to the presence of masks (Barrick, Thornton, & Tamir, 2021). One study of school-aged children who are currently constantly interacting while wearing masks concludes that masks pose a challenge but, in combination with other contextual cues, are unlikely to dramatically impair social interactions (Ruba & Pollak, 2020). Translating these findings to the clinical context, one can assume that psychotherapy with masks is somewhat more challenging than without masks, but could still lead to a good patient-therapist relationship and successful treatment outcomes. However, more research on the effects of masks on psychotherapy is necessary. Restriction of Contact It is important to acknowledge that clinical psychologists and psychotherapists have experienced a significant change not only in their working logistics, but also in their everyday life outside of work - similar to their patients and all other citizens. As some participants highlighted, social support at work was less available and team cohesion diminished; thus personal and work resources were limited. Furthermore, additional tasks added to stress (e.g., working from home without a proper place to work; child and other family care, etc.). These factors indicate the importance of social support within the clinical psychologists' and psychotherapists' community in difficult times, and the importance of leadership from professional and governmental organizations.
Reflections for EACLIPT

It has been an unprecedented working environment for clinical psychologists and psychotherapists during the COVID-19 pandemic. Clinical psychologists and psychotherapists were required to adapt their approach to work at very short notice during the first wave of the COVID-19 pandemic. However, survey respondents reported that they managed to convert their working logistics efficiently and have been providing much needed care for patients ever since, even though the examination of the efficacy of treatment still needs more research. This was often done on an individual basis or by smaller groups of colleagues. An important next step is to collect, share and discuss experiences of, and develop guidelines for, video or phone therapy or intervention. This has been done locally (e.g., in the UK; Simpson, Richardson, Pietrabissa, Castelnuovo, & Reid, 2021). However, the EACLIPT as an organization has provided a position statement and a summary of national statements on what is needed for both patients and clinical psychologists and psychotherapists, as well as future research endeavors regarding mental health (6), by integrating perspectives from a wide range of clinical psychologists and psychotherapists across multiple countries.

Reflections for Government Policy and Other Institutions

Although the sample of the current survey was limited to EACLIPT members, arguably, this data could be useful to inform the policies of government and other institutions, as the views of clinical psychologists and psychological therapists are represented from across a broad range of occupational settings, across multiple European countries. The current findings emphasize the importance of including mental health issues in current policy considerations on how to manage the pandemic in the longer term. The long-term effects on mental health as a result of the COVID-19 pandemic are still not clear (e.g., de Figueiredo et al., 2021). Based on previous research on epidemics, further symptom increases in the upcoming one to three years are expected in anxiety, anger, depression, post-traumatic stress symptoms, alcohol abuse, and behavioural changes such as avoiding crowded places and cautious hand washing (e.g., Kathirvel, 2020). This needs to be considered both in research (e.g., which factors could lead to mental health problems in the long run; de Figueiredo et al., 2021) and in health care (e.g., further flexible inclusion of video therapy into health insurance plans; enlarging mental health treatment provision; Kathirvel, 2020). In addition, the uptake of video therapy by clinical psychologists and psychotherapists during the COVID-19 pandemic offers the opportunity to take part in treatment long distance (the therapist in one country, the patient in another), which calls for cross-border guidelines.

6) https://www.eaclipt.org/?tab=5
Limitations and Implications

The current study was implemented during the first wave of the COVID-19 pandemic, from May to September 2020. The pandemic is still ongoing and, thus, the situation is continually changing. To keep the questionnaire as short as possible to encourage participants to complete the survey, we did not include detailed information on the sociodemographic background and we did not ask for detailed numbers and facts, e.g., regarding the number of patient contacts before and during the pandemic. We rather opted to assess the personal estimation of change, which relies on the therapist's perception of the number of patient contacts and could include inaccuracies. Additionally, we are aware that only a small number of members completed the survey (i.e., 13% of EACLIPT members) and, thus, the results have to be considered in light of a rather limited and selective sample. However, our results provide a qualitative and quantitative picture of the first abrupt changes to the work of clinical psychologists and psychotherapists as a result of the COVID-19 pandemic.

Furthermore, in the current study, responses to open questions were often quite short, which at times limited the scope of interpretation. However, many answers pointed to similar conclusions, as shown above.

The current study highlights the tremendous challenges that both patients and clinical psychologists and psychotherapists have experienced during the pandemic. Consequently, there are calls for specific training for therapists and clear guidelines regarding the use of technology, data security and solutions to the psychotherapeutic challenges of delivering therapy remotely. However, more research (such as the follow-up to this survey that is underway) is necessary to identify the long-term effects of the COVID-19 pandemic on both patients and clinical psychologists and psychotherapists, and to comprehensively influence policy and future healthcare considerations.
Realizing a 14% single-leg thermoelectric efficiency in GeTe alloys

This work demonstrates GeTe alloys for efficient thermoelectric waste-heat recovery.

INTRODUCTION

Thermoelectric technology enables a direct conversion between heat and electricity for both refrigeration and power generation applications. A high thermoelectric conversion efficiency requires a large temperature difference between the hot and cold sides (ΔT = T_h − T_c) and a high materials' dimensionless figure of merit, defined as zT = S^2T/[ρ(κ_E + κ_L)], where S, T, ρ, κ_E, and κ_L are the Seebeck coefficient, absolute temperature, resistivity, and the electronic and lattice components of thermal conductivity, respectively (1). During the past decades, great efforts have been devoted to enhancing thermoelectric materials' zT. Proven strategies are typified by enhancing the power factor S^2/ρ through carrier concentration optimization (2) and band engineering (3), as well as by reducing lattice thermal conductivity through phonon scattering by defects (4), which have led to notable improvements in various thermoelectrics including PbTe (5), Bi2Te3 (6,7), filled skutterudites (8), half-Heuslers (9), etc.

Among known thermoelectrics for midtemperature (500 to 800 K) applications, GeTe-based materials stand out because of their high performance (10). GeTe undergoes a continuous phase transition between a high-temperature cubic structure (c-GeTe) and a low-temperature rhombohedral structure (r-GeTe) at ~720 K because of the slight distortion along the [111] crystallographic direction (11). Such a symmetry breaking leads to a notable difference in band structures between r-GeTe and c-GeTe (12,13). The band structure of c-GeTe is very similar to that of PbTe and SnTe, where the valence band maximum locates at L and the secondary valence band at Σ with a small energy offset (13). This led early research to focus mainly on c-GeTe (14,15), and indeed, a zT approaching 2 at ~800 K has been realized with decades of development (14,15). Recently, r-GeTe has been revealed to show an even higher zT at ~600 K (12,16,17), enabled by the rearrangement of the symmetry reduction-induced split bands for a large band degeneracy (12,18). Moreover, this strategy can be manipulated to have a great effect on enhancing zT (to ~0.8) even at temperatures close to 300 K (19,20). These together indicate the high zT of GeTe across the entire midtemperature range (500 to 800 K) for efficient thermoelectric power generation.

The low formation energy of Ge vacancies helps explain why pristine GeTe intrinsically comes with massive Ge vacancies, which result in a very high hole concentration (~10^21 cm^-3) (13,21). A reduction of hole concentration to its optimum is essential for realizing the high zT of GeTe. Existing work usually uses Bi (13,22) or Sb (23,24) as electron donors at the Ge site, but this unfortunately leads to a decrease in carrier mobility (23,24). Recently, Cu2Te was reported to be an extremely efficient agent for decreasing hole concentration with the least detrimental effects on carrier mobility (16). A reduction of hole concentration from ~10^21 to ~2 × 10^20 cm^-3 can be realized by only 1.5% Cu2Te alloying (16). In addition, PbSe alloying was reported to be effective in reducing both hole concentration and lattice thermal conductivity (25). The underlying mechanism for reducing hole concentration in both cases is the suppression of Ge vacancies because of their largely increased formation energy upon alloying.
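To make the figure-of-merit definition above concrete, the short Python sketch below evaluates zT from the transport quantities that enter it. The numerical values are hypothetical placeholders of a plausible order of magnitude for a good midtemperature thermoelectric; they are not data from this work.

```python
# Minimal sketch: dimensionless figure of merit zT = S^2 * T / (rho * (kappa_E + kappa_L)).
# All input values below are illustrative assumptions, not measured data.

def figure_of_merit(S, T, rho, kappa_E, kappa_L):
    """S in V/K, T in K, rho in ohm*m, kappa values in W/(m*K)."""
    return S**2 * T / (rho * (kappa_E + kappa_L))

S = 250e-6       # Seebeck coefficient, 250 microvolt/K (assumed)
T = 700.0        # absolute temperature, K
rho = 1.2e-5     # electrical resistivity, ohm*m (assumed)
kappa_E = 1.0    # electronic thermal conductivity, W/(m*K) (assumed)
kappa_L = 0.6    # lattice thermal conductivity, W/(m*K) (assumed)

print(f"zT = {figure_of_merit(S, T, rho, kappa_E, kappa_L):.2f}")
```

With these inputs the sketch returns a zT slightly above 2, illustrating why a low lattice thermal conductivity combined with a high power factor is the central design target.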
This motivates the current work on a combination of both Cu2Te and PbSe alloying for a simultaneous optimization of electron and phonon transport for an extraordinary performance. One further step that has to be taken for realizing an efficient thermoelectric application of GeTe is the fabrication of high-efficiency devices. Both electrically and thermally conductive but chemically inert contacts between thermoelectric materials and electrodes are essential to ensure the high device efficiency offered by the high-performance materials (26,27). Usually, a stack of multiple layers can be applied for the prevention of atomic diffusion, the release of thermal expansion mismatch, and the reduction of contact resistance (28,29). For GeTe-based devices, Fe and Ni were considered as electrodes, yet a direct bond to GeTe results in notable atomic diffusion and thus a notable decrease in conduction and efficiency (29,30). Existing work indicates that alloys (26), compounds (29), and metals (31) can be used as a medium between thermoelectric materials and electrodes for inhibiting atomic diffusion.

In this work, an optimized carrier concentration of 1 × 10^20 cm^-3 was realized with 2% Cu2Te + 10% PbSe alloying. Such a relatively low overall level of doping ensures a high carrier mobility at the same time. In addition, PbSe alloying notably strengthens the scattering of phonons through mass and strain fluctuations. These together led to an extraordinary thermoelectric zT over a broad temperature range. Using thermoelectric SnTe as a diffusion barrier, the Ag/SnTe/GeTe contact successfully enables a negligible extra resistance and a prevention of chemical diffusion, which results in a record single-leg device efficiency of ~14% under a ΔT of ~440 K (cold side at 300 K).

RESULTS AND DISCUSSION

The details about material synthesis, characterizations, and property measurements (efficiency measurement setup in fig. S1) are given in the Supplementary Materials. A copper bar was used as a heat flow meter within a temperature difference of 0.5 to 2.5 K (table S1), which is determined by an average of 60 measurements with a relative SD of <3% (fig. S2) to ensure a large signal-to-noise ratio. A constant thermal conductivity of copper was used to estimate the heat flow, and less thermally conductive materials such as constantan (32,33).

Cu2Te alloying enables a Hall carrier concentration (n_H) reduction while maintaining a high carrier mobility in GeTe (16) (Fig. 1, A and B), due to the negligible effect of Cu2Te alloying on the crystal structure and thus the electronic structure of GeTe (16). A further increase in Cu2Te alloying concentration of >2% does not lead to a further decrease in hole concentration according to our results, which can be understood by the limited solubility (34). Therefore, a fixed Cu2Te alloying concentration of 2% is used, with further PbSe alloying providing an additional decrease in hole carrier concentration. The reduction in hole concentration can be understood by the increased formation energy of Ge vacancies through substitution with a larger cation (25). This is evidenced by the redissolution of Ge precipitates into the GeTe matrix (fig. S3). Two percent Cu2Te and 25% PbSe coalloying enables a Hall carrier concentration as low as ~3 × 10^19 cm^-3, successfully covering the optimal n_H of 6 × 10^19 to 10 × 10^19 cm^-3 for GeTe thermoelectrics (depending on working temperatures) (10,17,19).
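For context, the Hall carrier concentration quoted above is conventionally obtained from the measured Hall coefficient via n_H = 1/(e·R_H) in a single-carrier picture. The sketch below illustrates this standard relation; the R_H value is an assumed placeholder, not a measurement from this study.

```python
# Minimal sketch of the single-carrier relation n_H = 1 / (e * R_H).
# The R_H value below is an assumed illustrative number, not a measurement from this study.

E_CHARGE = 1.602176634e-19  # elementary charge, C

def hall_concentration(R_H):
    """R_H in m^3/C; returns the carrier concentration in cm^-3."""
    n_per_m3 = 1.0 / (E_CHARGE * R_H)
    return n_per_m3 * 1e-6  # convert m^-3 to cm^-3

R_H = 6.2e-8  # Hall coefficient, m^3/C (hypothetical)
print(f"n_H = {hall_concentration(R_H):.2e} cm^-3")
```

This hypothetical R_H yields n_H of about 1 × 10^20 cm^-3, i.e., the order of magnitude discussed in the text.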
An optimal n_H of 10 × 10^19 cm^-3 in this work only requires 2% Cu2Te + 10% PbSe coalloying. Such a low n_H would equivalently require a sole PbSe alloying of more than 35% (25). The realization of an optimal n_H at a much lower concentration of alloy impurities (12% versus 35%) in this work ensures a high carrier mobility (Fig. 1B). Further, because of the increase in density-of-states effective mass (m*) and thus an increase in Seebeck coefficient (Fig. 1C), which can be understood by the rearrangement of split valence bands induced by the crystallographic rhombohedral distortion (19,35), a high power factor (S^2/ρ) is achieved (Fig. 1D).

The alloy defects induce strong fluctuations in both mass and strain, which notably strengthen the phonon scattering for a decrease in lattice thermal conductivity. The existence of alloying-induced lattice strains can be evidenced from the broadening of the XRD peaks (5,36). As shown in the inset of Fig. 2B, where β is the full width at half maximum (FWHM) of the intense diffraction peaks and θ is the Bragg angle, the intercept of the linear fit between βcosθ and sinθ corresponds to the contribution of grain boundaries, while the slope stands for the contribution of substitutional defects. It can be seen that the materials in this work show a similar microstrain due to grain boundaries, but the microstrain due to chemical substitution (slope) increases with increasing concentration of impurities. Furthermore, because of the resultant strong phonon scattering by alloy defects, the lattice thermal conductivity (κ_L) can be as low as 0.6 W/m·K in the alloys at 300 K (Fig. 2B). Here, κ_L is estimated by subtracting the electronic component from the total thermal conductivity (16,25) (fig. S4A). Note that the slightly reduced sound velocity (fig. S4B) acts as another minor contributor to the κ_L reduction observed. Detailed temperature-dependent thermoelectric properties for (Ge0.98Cu0.04Te)1−y(PbSe)y alloys at 300 to 800 K are given in figs. S5 and S6.

Because of the simultaneous optimization in carrier concentration with a high mobility and a reduction in lattice thermal conductivity, an extraordinary peak zT of >2.5 with an average [zT_avg = (1/ΔT) ∫ from T_c to T_h of zT(T) dT] of 1.8 is realized within 300 to 800 K (Fig. 3). The high zT is further shown to be reproducible as confirmed by repeated measurements under a few thermal cycles (Fig. 3B and fig. S6). This enables this class of materials to be highly efficient for midtemperature waste-heat recovery (300 to 800 K). Note that the high zT can be realized in a broad composition range of 0.1 ≤ y ≤ 0.2 (fig. S5D), which is an advantage for mass production. In addition, the compatibility factor is as high as ~6 V^-1 and nearly temperature independent (fig. S7A), benefiting the realization of a high device efficiency (37).

The origin of the zT peaking at particular temperatures is highly related to the band structure and phonon scattering. The phase transition from a high-temperature cubic structure to a low-temperature rhombohedral structure leads to a notable change in the valence band structure of GeTe (35,38,39), the rhombohedral angle of which ends up being the critical indicator, and the overall valence band degeneracy maximizes in a rhombohedral structure close to the cubic structure because of the rearrangement of the symmetry reduction-induced split bands (fig. S8A) (13,38). Such a rhombohedral angle depends on not only temperature but also composition (12).
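The average figure of merit defined above is a temperature integral, so it can be evaluated numerically from a tabulated zT(T) curve. Below is a minimal sketch using trapezoidal integration; the sample points are invented for illustration and are not the measured curve of this work.

```python
# Minimal sketch: zT_avg = (1/(T_h - T_c)) * integral of zT(T) dT,
# evaluated with the trapezoidal rule. The zT(T) points are illustrative only.
import numpy as np

T = np.array([300.0, 400.0, 500.0, 600.0, 700.0, 800.0])  # temperature, K
zT = np.array([0.8, 1.4, 2.0, 2.4, 2.5, 2.2])             # hypothetical zT(T) curve

# Trapezoidal rule: sum of interval widths times the mean of the endpoint values.
integral = np.sum(np.diff(T) * (zT[:-1] + zT[1:]) / 2.0)
zT_avg = integral / (T[-1] - T[0])
print(f"zT_avg over {T[0]:.0f}-{T[-1]:.0f} K = {zT_avg:.2f}")
```

The same one-liner applies to any measured zT(T) table, which is how an average close to the quoted 1.8 would be extracted from the data in Fig. 3.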
Previous studies (16) revealed that maintaining the pristine crystal structure with sufficient alloying ensures not only a strong phonon scattering but also superior charge transport. In this work, Cu2Te and PbSe coalloying induces a negligible effect on the crystal structure (i.e., the rhombohedral angle; fig. S8B), leaving the temperature effect to dominate the band structure change. This enables the maximal valence band degeneracy to be realized in the temperature range of interest (<750 K) for a high average performance.

To demonstrate the high thermoelectric efficiency enabled by the high-zT GeTe alloys, two single-leg thermoelectric devices were fabricated using (Ge0.98Cu0.04Te)0.88(PbSe)0.12 with a dimension of ~2 mm by 2 mm by 6.5 mm (Fig. 4A; details in the Supplementary Materials). In this work, Ag is found to bond with SnTe much more strongly than with GeTe. In addition, SnTe is confirmed to bond well with GeTe (29) with negligible chemical diffusion. This enables a robust bonding without any cracks, as confirmed by SEM observations taken before and after thermal cycling and the long-term stability test (Fig. 4B and fig. S9). In addition, EDS mappings show clear boundaries of the heterostructures, suggesting SnTe as an effective diffusion barrier material. Note that the SnTe diffusion barrier layer is only ~200 μm in thickness (~3% of the total length of the leg) and SnTe has a much higher thermal conductivity (8 W/m·K) as compared to that of the GeTe alloys here (1 W/m·K) at room temperature (fig. S5C and fig. S10). Therefore, a large temperature gradient loss due to such a diffusion barrier is not expected. Moreover, SnTe has a good thermoelectric performance in p-type as well, which further guarantees a much larger thermoelectric output as compared to that of metal/alloy barriers, even if the temperature gradient loss in the barrier is large. SnTe is highly conductive, leading to a negligible (<2%) contribution of the hot-side diffusion barrier layer to the total internal resistance of the entire device, even at a ΔT of 440 K. All these features make SnTe a good choice as the diffusion barrier material here.

The total electrical contact resistance (including both electrode and diffusion barrier), measured by a four-probe technique (Fig. 4C), is found to be as low as ~0.2 milliohm. This corresponds to an interfacial contact resistivity (ρ_c) of only ~8 microhm·cm^2, which is one of the lowest among reported thermoelectric devices (fig. S11) (26,29,31). Note that the total resistance of the contacts at both cold and hot sides is limited to within 4% of the internal resistance (R_in) of the device, ensuring a high power output and conversion efficiency (27).

With a fixed cold-side temperature of 300 K, the output voltage (V) as a function of current (I) under different temperature gradients (ΔT) is shown in Fig. 5A and fig. S12A. The good linearity of the V-I curves enables a determination of the open-circuit voltage (V_oc) (the y intercept) and the internal resistance (R_in) (the slope), respectively. The increase in V_oc and R_in with increasing ΔT (fig. S13) can be respectively understood by the increase in the Seebeck coefficient and resistivity of the GeTe alloys. Figure 5B and fig. S12B show the output power (P) at different ΔT, and the corresponding power density (per sectional area of thermoelectric material) is shown in fig. S14.
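Since V_oc and R_in are read off the V-I lines as the intercept and (magnitude of the) slope, a small least-squares fit reproduces that extraction, as sketched below. The (I, V) pairs are invented for illustration and are not digitized from Fig. 5A.

```python
# Minimal sketch: fit V = V_oc - R_in * I to extract the open-circuit voltage
# (intercept) and the internal resistance (magnitude of the slope).
# The data points are hypothetical, not the measured data of Fig. 5A.
import numpy as np

I = np.array([0.0, 0.5, 1.0, 1.5, 2.0])            # current, A (assumed)
V = np.array([0.120, 0.095, 0.071, 0.044, 0.020])  # voltage, V (assumed)

slope, intercept = np.polyfit(I, V, 1)
V_oc = intercept   # open-circuit voltage, V
R_in = -slope      # internal resistance, ohm

# Maximum power transfer occurs at a matched load, R_out = R_in:
P_max = V_oc**2 / (4 * R_in)
print(f"V_oc = {V_oc*1e3:.1f} mV, R_in = {R_in*1e3:.1f} milliohm, P_max = {P_max*1e3:.0f} mW")
```

The matched-load relation P_max = V_oc^2/(4·R_in) in the last step is the same condition (R_out = R_in) under which the ~130 mW maximum output quoted below is obtained.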
The maximum output power is ~130 mW (corresponding to a power density of 25 kW/m^2) at ΔT ~ 440 K, when the load resistance (R_out) is identical to the internal resistance (R_in). The measured device properties are reasonably consistent with predictions from the thermoelectric material (fig. S13). Figure 5C and fig. S12C show the current-dependent efficiency (η) under different temperature gradients (ΔT). A measured maximum efficiency (η_max) as high as ~14% is realized at ΔT = 440 K (Fig. 5D), which is actually higher than any of the experimental results in the various devices reported before. Note that the η_max obtained in this work is highly comparable to that of conventional Bi2Te3 devices (6,7,40) and the recently reported MgAgSb one (41) near room temperature (ΔT < 250 K), which can be understood by their comparable zT (fig. S7) as well as the stable and high compatibility factor of the GeTe alloys at these temperatures. These together demonstrate the extraordinariness of GeTe thermoelectrics covering a broad temperature range.

Taking into account the contact resistance (assumed to be temperature independent) and the compatibility factor (fig. S7), the predicted maximum efficiency from the temperature-dependent transport properties (37,42) is shown in Fig. 5D as a function of the applied temperature gradient. The measurement reasonably agrees with the prediction, suggesting a rational device design in this work for realizing nearly the full potential of GeTe alloy thermoelectrics for power generation. Note that the larger discrepancy at a higher temperature gradient can be understood by the larger heat loss at these high absolute temperatures (9,43).

Because of the phase transition between the rhombohedral and cubic structures, the stability of GeTe-based devices is a common concern. GeTe thermoelectrics are thermomechanically more robust than one would initially expect (38). This is firstly evidenced by the successful application of historical p-TAGS/n-PbTe devices (40,44). In addition, a recent work on GeTe devices was found to be stable for 450 thermal cycles (31). Furthermore, we demonstrate in two single-leg devices both a long-term stability (up to 200 hours) at ΔT = 400 K (hot-side temperature of 700 K, fig. S15) and a thermal cycle stability during heating/cooling (fig. S16). The good stability can be understood by the nature of a continuous phase transition [a smooth change in the rhombohedral angle (12,45)].

Note that the power measurement from the single-leg device excludes heat exchange considerations, because the hot- and cold-side temperatures are enforced with heaters and coolers and the temperatures are measured at the leg-exchanger interfaces. This is different from the measurement of a module, because the heat exchangers would lead to temperature gradient losses. Therefore, additional efforts are needed in minimizing the thermal contact resistance of the heat exchangers to realize a module efficiency approaching the single-leg efficiency.
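For reference, the predicted maximum leg efficiency discussed above is often approximated by the textbook single-leg expression based on a device figure of merit; the sketch below implements that constant-property formula, not the full temperature-dependent procedure of refs. 37 and 42, and all inputs are assumed.

```python
# Minimal sketch of the textbook maximum-efficiency estimate for a thermoelectric leg:
# eta_max = (dT/T_h) * (sqrt(1 + ZT_bar) - 1) / (sqrt(1 + ZT_bar) + T_c/T_h),
# where ZT_bar is an average device figure of merit. This constant-property
# approximation ignores the temperature dependence of the properties and contact
# losses; inputs are illustrative.
import math

def eta_max(T_h, T_c, ZT_bar):
    carnot = (T_h - T_c) / T_h
    s = math.sqrt(1.0 + ZT_bar)
    return carnot * (s - 1.0) / (s + T_c / T_h)

T_h, T_c = 740.0, 300.0  # hot/cold side, K (dT ~ 440 K, assumed)
ZT_bar = 1.8             # assumed average figure of merit
print(f"eta_max ~ {eta_max(T_h, T_c, ZT_bar)*100:.1f}%")
```

With these assumptions the estimate comes out above the measured ~14%, as expected: the simple constant-property formula is an idealized upper bound, while the measured value includes contact resistance and heat losses.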
Summary

In summary, Cu2Te and PbSe coalloying in GeTe simultaneously enables an optimization of the hole concentration and a reduction in lattice thermal conductivity while maintaining a relatively high carrier mobility. This results in an outstanding figure of merit across a broad temperature range. The corresponding device efficiency of 14% is so far the highest for thermoelectric devices operating at a temperature gradient ΔT of <700 K. This work illustrates the extraordinariness of GeTe thermoelectrics. The strategies developed here for both materials and devices might be applicable to other thermoelectrics.

Synthesis

Polycrystalline GeTe, Ge1−xCu2xTe, and (Ge0.98Cu0.04Te)1−y(PbSe)y alloys were synthesized by melting, quenching, and annealing. Stoichiometric amounts of the high-purity elements Ge (99.9999%), Te (99.999%), Cu (99.99%), Se (99.99%), and Pb (99.99%) were melted at 1223 K for 10 hours, followed by quenching in cold water and annealing at 850 K for 48 hours. The obtained ingots were ground into fine powder for XRD. Phase composition and microstructure were characterized by XRD (DX2000, PANalytical Aeris) and a scanning electron microscope (Phenom Pro) equipped with an EDS. Dense (>95%) pellet samples with dimensions of ~12 mm in diameter and ~1.5 mm in thickness were obtained by hot-pressing at 823 K for 40 min under a uniaxial pressure of ~65 MPa.

Transport property measurements

The electrical properties including resistivity, Seebeck coefficient, and Hall coefficient were measured under helium. The Seebeck coefficient was obtained from the slope of thermopower versus temperature gradient within 0 to 5 K, where both the hot- and cold-side temperatures were measured by using two K-type thermocouples. The resistivity and Hall coefficient were measured by a four-probe Van der Pauw technique under a reversible magnetic field of 1.5 T. The thermal conductivity (κ) is determined by κ = dCpD, where d, Cp, and D are the density, heat capacity, and thermal diffusivity, respectively. The density was estimated by mass/volume, and the thermal diffusivity of the GeTe-based alloys was measured by a laser flash technique (Netzsch LFA457). Both the electronic and thermal transport property measurements of the GeTe-based alloys were performed in the temperature range of 300 to 800 K. The uncertainty in the measurements of S, ρ, κ, and the Hall coefficient is about 5%. Longitudinal (v_L) and transverse (v_T) sound velocities were measured on the pellet samples at room temperature, using an ultrasonic pulse receiver (Olympus NDT) equipped with an oscilloscope (Keysight).
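To make the laser-flash relation κ = d·Cp·D concrete, a short sketch with illustrative values (not the measured data of this study) is shown below.

```python
# Minimal sketch: kappa = d * Cp * D from laser-flash measurements.
# All input values are illustrative placeholders, not results from this study.

def thermal_conductivity(d, Cp, D):
    """d in kg/m^3, Cp in J/(kg*K), D in m^2/s; returns kappa in W/(m*K)."""
    return d * Cp * D

d = 6100.0   # density, kg/m^3 (assumed for a GeTe-based alloy)
Cp = 250.0   # heat capacity, J/(kg*K) (assumed)
D = 0.7e-6   # thermal diffusivity, m^2/s (assumed)

print(f"kappa = {thermal_conductivity(d, Cp, D):.2f} W/(m*K)")
```

With these placeholder inputs the result is close to 1 W/m·K, the room-temperature order of magnitude quoted for the GeTe alloys in the main text.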
Sports Image, Attitudes, and Positive Psychological Capital of MZ-Generation Viewers in the 2020 Tokyo Olympic Games Depending on Medal-Winning Status

Each generation of modern society has lived in varying environments, and different reactions occur while viewing sports. This study compared and analyzed the sports image, sports attitudes, and positive psychological capital of two groups according to the presence or absence of medals in the event that most impressed them after watching the 2020 Tokyo Olympics. In total, 328 survey responses were collected, and the differences between the groups were statistically verified using multivariate analysis of variance. The results showed that Group 1, which was impressed by a medal-winning event, had a relatively higher average value of self-efficacy than Group 2, which was impressed by an event in which a medal was not won. In contrast, Group 2 had a higher positive image and more resilience than did Group 1. In addition, psychological attitudes different from those of the previous generation were found in the results for the factors (behavioral image, evaluative image, cognitive attitude, affective attitude, behavioral attitude, and optimism) that were not statistically significant in the differences between the groups. The results of this study can be used as meaningful data in sports viewing-related studies.

Introduction

Large sporting events induce positive emotions and images in viewers (Smith, 2006) through the display of games, fair sportsmanship, and a gamut of entertaining acts (Liao & Pitts, 2006). In addition to gaining information on various sporting events, viewers' psychological, affective, and emotional characteristics are largely influenced by the games played by athletes representing their countries, according to the media effects of the Olympic Games (Lu, Mihalik, Heere, Meng, & Fairchild, 2019). Viewers of the Olympics develop sports-related attitudes and intentions depending on the outcome of the games (Potwarka, Nunkoo, & McCarville, 2014) and learn about the culture and images of the host country (Essex & Chalkley, 1998).

Since the 1984 Los Angeles Olympic Games, South Korea has maintained a relatively high Olympic performance level, generally finishing around 10th place. However, compared to previous games, the country did not win as many medals or show as strong a performance during the 2020 Tokyo Olympic Games. Instead of expressing disappointment over the game results, which had been the focus of viewers in the past, the MZ generation, unlike the previous generation, tends to enjoy life, which is causing significant changes in various areas (Park, 2022). In this respect, the MZ generation displayed a trend of valuing the games themselves more than the medals that the athletes may win. The primary viewers of the 2020 Tokyo Olympic Games are this newer generation, the MZ generation, that is, those born between 1980 and 2000, who are known to emphasize their personal characteristics and importance (Baum, 2020). A generational shift in the viewers of mega-events, such as the Olympics, is changing how such events are perceived. Ultimately, the public reaction to the results of the Olympics is expected to vary according to the generation of viewers.
Among the previous studies related to the Olympics, which attract many issues and values around the world, there are studies comparing the emotional and psychological status of athletes according to medal status and grade (Medvec, Madey, & Gilovich, 1995; McGraw, Mellers, & Tetlock, 2005). In addition, from an economic point of view, there have been studies that analyzed the national and corporate benefits (Choi, Cho, & Im, 2011; Liu, Kim & Bea 2013) and losses (Chung, 2008) of hosting and broadcasting the Olympics. In contrast, however, research on the viewer's psychological perspective according to the presence or absence of an Olympic athlete's medal is very insufficient. Therefore, this study can help build data on the sports image, sports attitudes, and positive psychological capital of viewers depending on whether athletes won medals at the 2020 Tokyo Olympic Games. This may be used as basic data to improve people's quality of life.

Sports image

Some images associated with sports include stadium and team images (Khatibzadeh, Kozechiyan, Honarva, & Saghdel, 2018), athlete image (Arai, Ko, & Ross, 2014), national image (Kim, Kang, & Kim, 2014), gender image (Hallmann, 2012), and sports image (Han, 2014). According to Ferrand and Pages (1999), there is an approximately 15% increase in the ticket sales of a sports club after it is able to build a positive image. Kaplanidou and Vogt (2007) demonstrated that the image of a host country and city for a sporting event affects tourists' intentions to revisit the place as well as positive perceptions. This study examines sports images with a particular focus on the Olympic Games, which were chosen because viewers could watch numerous games and feel a wide spectrum of emotions. It analyzes changes in the sports images perceived by viewers according to four sub-factors (behavioral, psychological, evaluative, and negative) based on whether athletes in both popular and unpopular sports in Korea won a medal.

Sports attitude

Attitude is formed by an individual's phenomena of interest (Schwarz, 2007) and perceived thoughts (Kayai, Cicicoğlu, & Demir, 2018). It plays an effective role in individuals' value creation (Lee, Whitehead, Ntoumanis, & Hatzigeorgiadis, 2008). In particular, modern people have a positive attitude toward sports. Zaman, Mian, and Butt (2018) examined the sports attitudes of adolescents and college students and found that the majority responded positively regarding sports attitudes. Such an outcome can play a substantial role in individuals' overall quality of life, for it encourages participation in sports, raises psychological happiness and the value of an individual's life (Güngör & Çelik, 2020), and helps them stay future-oriented during physical activity (Graham, Sirard, & Neumark-Sztainer, 2011). Studies on sports attitudes are important because it is possible to predict participants' behaviors and psychology from multiple perspectives. This study classifies the sports attitudes of the viewers of Olympic Games by cognitive, affective, and behavioral sub-factors and analyzes how attitudes affect individuals' sports participation and life value creation.

Positive psychological capital

Self-efficacy, optimism, and resilience are sub-factors of positive psychological capital created in positive psychology. Self-efficacy refers to positive trust in oneself (Stajkovic & Luthans, 1998).
Optimism indicates an individual having a positive outlook for the future, and it decreases psychological uneasiness in the face of difficulties (Carver, Scheier, & Segerstrom, 2010). Resilience signifies an individual's ability to recover quickly from past stress or negative experiences. To this end, Bockorny and Youssef-Morgan (2019) argue that a high level of positive psychological capital helps to improve life satisfaction. Furthermore, Li et al. (2014) found that positive psychological capital plays an intermediary role in linking social support and subjective happiness. Finally, Afzal (2016) determined a significant correlation between individuals' positive and negative emotions and relative happiness by measuring positive psychological capital. The aforementioned studies support the notion that positive psychological capital is a key factor in improving quality of life. This study classifies the positive psychological capital that viewers gain from watching sports into self-efficacy, optimism, and resilience to offer viewers information on building a high quality of life.

Data collection procedure and participants

Data were collected via online and offline (collecting data from respondents in person) survey forms from November 9 to December 15, 2021. Participants were male and female adults aged 20 years or older belonging to the MZ generation (which includes millennials, i.e., those born in the early 1980s to the mid-1990s, and Generation Z, i.e., those born in the mid-1990s to the early 2000s) with experience watching the 32nd Olympiad, the 2020 Tokyo Olympics. This study was officially waived from ethics approval by the institutional review board (IRB) committee at Kyung Hee University (reference number: ), as research in social science not collecting sensitive personal information. For online data collection, survey forms were distributed and collected via Naver Office, an online survey platform. For offline data collection, the survey was conducted in person at two universities in Gyeonggi-do, Republic of Korea. Data were collected only from participants who voluntarily agreed to participate after the research purpose was explained, along with the fact that there were no benefits or disadvantages to participating in the survey. A total of 400 questionnaires were distributed online and offline, and 354 responses were received (approximate response rate: 88.50%). However, 26 incompletely answered offline survey forms were excluded. Finally, 328 survey responses were collected for statistical analysis. Additionally, based on the question of what the most impressive game at the 2020 Tokyo Olympics was, the survey respondents were categorized into two groups depending on the presence or absence of medals, which was applied as the independent variable (Group 1: viewers who indicated an event resulting in medals was impressive; Group 2: viewers who indicated an event without a medal was impressive). Finally, all survey participants reported basic demographic information, such as gender, education, and preferred media device type. The study participants' demographic information is reported in Table 1.

Instruments

The public perception of international sporting events, such as the Olympics, differs from that of the past. Regardless of the game outcome, the public displayed positive feelings, such as emotion, hope, and encouragement, and tended to value the game more highly than the medals.
First, in the case of sports image, the factors used in Han (2014) were modified to suit the participants, and three sub-factors (behavioral, psychological, and evaluative) were evaluated in 12 questions. Next, in the case of sports attitude, the factors applied in Park's (2010) study were modified and supplemented, and 13 questionnaire items were used for three sub-factors (cognitive, affective, and behavioral). Finally, in the case of positive psychological capital, 12 items were included in the sub-factors, revised and supplemented from the questionnaires used in Liu's (2017) study to suit the participants of this study. All scales were applied on a five-point Likert scale (1 = not at all, 5 = very much).

Statistical data analysis

Statistical data analyses were implemented via SPSS version 23.0. First, the analysis reported the study participants' social demographic information (e.g., gender, employment, and education). Second, to secure the scale validity of the data collected, exploratory factor analysis (EFA) was conducted for three factors (i.e., sports image, sports attitude, and positive psychological capital). Third, Cronbach's alpha coefficients were calculated to verify the scale reliability of the data collected. Last, a multivariate analysis of variance (MANOVA) was performed to examine the differences in the dependent variables between the two groups. For all statistical analyses, significance was accepted at p<0.05.

Scale validity and reliability

The scales applied in the statistical analyses were tested for acceptable validity and reliability in previous research. However, as this study modified and supplemented the measurement tools to meet the research topic and study purpose, exploratory factor analysis (EFA) was implemented three times to ensure statistical clarity. Based on the results of the EFAs with Varimax rotation, eigenvalues greater than 1.0 were retained. Each factor was determined by a structure including three sub-factors: (a) sports image (behavioral, psychological, and evaluative), (b) sports attitude (cognitive, affective, and behavioral), and (c) positive psychological capital (self-efficacy, optimism, and resilience). The specific results of the EFAs and the survey questionnaires are reported in Tables 2, 3, and 4. The reliability of the questionnaire was tested using Cronbach's alpha. The results were verified based on a cutoff value of 0.700 for satisfactory internal consistency for reliability (Nunnally & Bernstein, 1994): (a) behavioral image, α=0.782; (b) psychological image, α=0.813; (c) evaluative image, α=0.790; (d) cognitive attitude, α=0.872; (e) affective attitude, α=0.836; (f) behavioral attitude, α=0.790; (g) self-efficacy, α=0.755; (h) optimism, α=0.857; and (i) resilience, α=0.783. All the measurement tools used in this study showed satisfactory statistical reliability.

Multivariate analysis of variance (MANOVA)

A MANOVA was conducted to find the differences in the sports image, sports attitude, and positive psychological capital of audiences of the 2020 Tokyo Olympic Games based on the type of event (acquisition of medals or not). Homogeneity of covariance was verified (Box's M=76.614, F=1.651, p>0.001), and statistically significant differences between the groups were found (Wilks' Lambda=0.779, F=10.044, p<0.05). Specifically, as reported in Table 5, statistically significant differences between the two groups were found for three factors: (a) psychological image, (b) self-efficacy, and (c) resilience.
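Cronbach's alpha for each sub-scale can be computed directly from item scores; the sketch below shows the standard formula on a small fabricated item matrix (the survey data themselves are not reproduced here).

```python
# Minimal sketch of Cronbach's alpha:
# alpha = (k/(k-1)) * (1 - sum(item variances) / variance(total score)).
# The 5-point Likert responses below are fabricated for illustration only.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

fake_scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
])
print(f"alpha = {cronbach_alpha(fake_scores):.3f}")
```

Values at or above the 0.700 cutoff cited from Nunnally and Bernstein (1994) would indicate satisfactory internal consistency, as reported for all nine sub-factors above.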
However, no statistically significant differences were observed for (a) behavioral image, (b) evaluative image, (c) cognitive attitude, (d) affective attitude, (e) behavioral attitude, or (f) optimism. Table 6 reports the mean scores of all dependent variables between the groups drawn from the survey data.

Discussion

This study examined individuals from the South Korean MZ generation who watched the 2020 Tokyo Olympic Games. Comparative analyses were conducted for sports image, sports attitudes, and positive psychological capital between the two groups, which were divided based on whether the research participants were emotionally moved by events in which the athletes won medals. This research also investigates shifts in psychological responses between the past and present generations. Among the nine comparative factors, psychological image, a sub-factor of sports image, and self-efficacy and resilience, sub-factors of positive psychological capital, exhibited statistically significant results. Furthermore, between the two research groups, psychological image and resilience were greater in Group 2 (no-medal group) than in Group 1 (medal-winning group). However, Group 1 showed higher self-efficacy than did Group 2.

Group 2 (M=3.918) scored higher on psychological image than Group 1 (M=3.652). In the past, winning Olympic medals stimulated the development of a positive image for the relevant countries (Essex & Chalkley, 1998), developing their economies (Berman, Brooks, & Davidson, 2000) and increasing their exports (Rose & Spiegel, 2011). Today's society, however, is led by the MZ generation, characterized by a strong sense of belief and independence rather than yielding to the social environment (Baum, 2020). Youths in their 20s and 30s watch sports to enjoy the game. People make dedicated efforts to improve their quality of life (Costanza et al., 2007); however, while past generations felt joy and triumph in good outcomes, such as winning medals, the present generation is moved and inspired by how the games are played rather than by the final outcome. Ultimately, this study's findings recognize the characteristics of the MZ generation, which is rapidly changing how sports were viewed in the past, when Olympic medals were celebrated as a means of national prosperity.

Self-efficacy was higher in Group 1 (M=3.578) than in Group 2 (M=3.056). According to Fulton, Baranauskas, and Chapman (2021), self-efficacy in athletes increases through their participation in highly competitive games like the Olympics, having a positive effect on winning medals in the future. Self-efficacy in athletes' parents has been shown to increase simply because their children were competing in the Olympics (Arai et al., 2014). This suggests that individuals gain positive psychology regarding their ability to achieve something by watching others' successes, even if they are not actually engaged in the performance. In line with the findings of this study, Dietz-Uhler and Lanter (2008) determined that interest in and passion for sports teams greatly influence viewers' emotions, and Wann and Branscombe (1993) found that viewers obtain an ego boost from the game results of the teams they support. Moreover, Phua (2010) determined that watching sporting events, such as the Olympics, is an excellent measure of viewers' self-esteem.
Therefore, the outcomes of the current study indicate that viewers' interest in various sports may be a response to a high level of success, as in winning Olympic medals, and such analysis or results may not be definitively limited to the MZ generation.

Finally, resilience was greater in Group 2 (M=3.766) than in Group 1 (M=3.377). Resilience in athletes is the ability to handle unfavorable issues that occur during a game, such as errors, point loss, and unexpected situations (Secades et al., 2016). It includes situations where athletes can return to their usual performance level even when they have less time to address these issues (Codonhato, Vissoci, Nascimento, Mizoguchi, & Fiorese, 2018). Additionally, slightly less talented athletes are more likely to experience such issues than are outstanding athletes who win medals. Unlike in the past, when results were strongly emphasized (Lee, 2016), viewers today express, with increasing frequency, their support and praise for athletes who gave their best during difficult games. According to Harker (2019), when individuals focus on another person or team, they feel as if they are in sync with their emotions (e.g., emotional identification). As the study results show, the fact that viewers' self-efficacy increases regardless of the outcome of the game they watch could become important information for the current generation in relieving the stress they experience in their everyday lives (Togo, 2018). It also demonstrates the positive influence of watching sports, which has progressively changed over the years.

Conclusion

The study results show that the positive psychological response to viewing sports has changed over time. The field was afflicted with economic and political instability in the past, and numerous efforts have been made to overcome a lack of elite athletes (Lee, 2016), which led to rapid growth in the sports industry (Gil & Mangan, 2002). However, the public's longing for excellent performances by athletes has persisted. As evidence of this, a Korean who won an Olympic medal in an unstable colonial environment in the early 1900s was considered a national hero (Podoler, 2021). It is true that the country has implicitly hoped for many gold medals and high rankings (Bridges, 2013). This has caused players to feel burdened; the public's excessive expectations can easily turn into strong criticism. The study results show that these expectations can be constructive for athletes and viewers to reflect on their own situations, a characteristic of the MZ generation, and can help them develop positive emotions. In addition, the lack of statistical differences between the two groups with regard to behavioral image, evaluative image, cognitive attitude, affective attitude, behavioral attitude, and optimism seen in the results of this study can be considered meaningful. Six out of nine factors did not show statistical significance biased to one side depending on the presence or absence of medals, and the statistically significant values (psychological image, self-efficacy, resilience) were also higher
in Group 2 for two out of the three sub-factors. Thus, the Olympic medal no longer has a substantial impact on the purpose of watching the games in modern society; our results indicate that the quality of the game itself, popular sports stars, and the fandoms supported by individuals are becoming the factors for enjoying the games.

Limitations

This study compared and analyzed MZ-generation viewers of the 2020 Tokyo Olympics by dividing them into two groups depending on whether they were impressed by a sporting event in which medals were won. The sample population was limited to the MZ generation, and it was thus possible to focus on recent trends. Nonetheless, other previous generations (e.g., baby boomers and Generation X) were excluded; hence, further research is needed for generalization. Additionally, owing to the COVID-19 outbreak, very few people watched the 2020 Tokyo Olympics. There was a limitation in that the proportion of online survey responses was higher than that of offline responses. Furthermore, it is necessary to compare and analyze the public's sentiment, which may have changed since the 2020 Tokyo Olympics. Finally, this study explained the psychological reactions of a comprehensive group of subjects by conducting quantitative research; however, future research that can compare and analyze more in-depth data by simultaneously conducting quantitative and qualitative research would be necessary.
Self-Medication among Adults in Minia, Egypt: A Cross-Sectional Community-Based Study

Self-medication may be associated with side effects, increases the chance of drug interactions, and also affects adherence to treatment and quality of life. This study aims at determining the pattern of self-medication, identifying knowledge, attitudes and self-reported practices concerning the usage of drugs, and identifying demographic factors that could influence self-medication practices among the general population in El-Minia, Egypt. A community-based cross-sectional study was conducted among 422 randomly selected adults using a multi-stage random sampling technique. Data were collected by using a structured interview questionnaire. Respondents who had practiced some sort of self-medication during the past month made up 73% of the sample. The commonest reason for self-medication was that the illness was perceived as minor (59.7%). The most common perceived illness for self-medication was the common cold (90.6%). Older respondents (>40 years) were about twice as likely to practice self-medication as younger ones. Similarly, professionals were 3.4 times more likely to practice self-medication than unemployed individuals. Self-medication is a relatively frequent problem in Minia, and interventions at different levels are required.

Introduction

The World Health Organization defines self-medication as "the use of medicinal products by individuals to treat self-recognized disorders or symptoms. It might also involve the intermittent or continuous use of a medication prescribed by doctors for chronic or recurring diseases or symptoms" [1]. Globally, nearly half of all medicines are unreasonably used, and self-medication with antibiotics constitutes a major public health problem due to irrational medicine use [2].

Serious adverse drug reactions, drug resistance, protracted illnesses and even death are a problem; moreover, the financial costs incurred by individuals and governments are often extremely high, particularly in developing countries where patients often pay for medicines out of their own pockets [3].

Self-medication is widely practiced in many developing countries; the prevalence has been reported to range from 12.7% to 95% [4] [5] [6]. The prevalence of self-medication is higher in low- to middle-income communities and is more common in countries where prescription legislation is not strong enough [7].

Many studies have shown that there is a relationship between increased self-medication activities and many demographic factors such as morbidity, income, education, gender, age and the absence of periodic consultations [8] [9].

However, the use of non-prescription medicinal products is multi-factorial, and there is a paucity of adequate information about this issue, especially in Upper Egypt. Thus the current study aims at determining the pattern of self-medication and identifying the predictors of self-medication among adults in El-Minia, Egypt.

Study Design and Setting

This cross-sectional community-based study was conducted in Minia city, Minia governorate, one of the Upper Egypt governorates, located 240 km south of Cairo, Egypt, during the period from September to November 2016.
Study Population and Sampling

The required sample size was estimated based on the following conditions: assuming that the expected proportion of the population who practiced self-medication in Egypt (P) = 50%; tolerated error/margin of error (d) = 0.05; confidence interval (CI) = 95%. The following formula was used: n = p(1 − p)(Z_α/d)². The value of Z_α is found in statistical tables which contain the area under the normal curve. Accordingly, the sample size was estimated, and an addition of 10% of the sample was made to guard against non-response [10].

Inclusion criteria: Finally, a total of 422 adult respondents were included in the study. Persons who were unable to answer the questions or who gave incomplete responses due to some barriers were excluded from the study. Respondents were randomly selected using a multi-stage random sampling technique. Minia governorate is divided into nine districts, from which Minia district was chosen randomly (first stage); then Minia city, which represents the urban area in the district, was chosen (second stage); the city is divided into four blocks (North, East, South and West), from which the second and third blocks were chosen randomly (third stage).

The households were selected by systematic random sampling, visiting every third apartment building in a randomly selected direction and asking for adults (over 18 years old). Our team met about 460 eligible persons, of whom 422 agreed to be interviewed and participate in the study (response rate 91.7%).

Data were collected using a structured questionnaire, filled in by the researchers while interviewing each individual, consisting of five sections. The first section contained questions regarding socio-demographic information such as sex, age, educational level and employment status; participants were also asked whether they had any chronic illness. In addition, participants were asked whether or not they have health insurance, and whether they had ever practiced self-medication in general, and in the past month in particular.

The second section of the questionnaire consisted of questions related to their perception of medications, knowledge about the use of the purchased drug, and practices related to the product purchased.

In the third section of the questionnaire, the respondents were requested to report on the sources of medication used for self-treatment and the sources of information about such medication; this section also focused on the health conditions that respondents would self-treat and investigated the reasons for self-medication.

The fourth section covered antibiotic usage patterns: the respondents were asked to indicate how often they practiced self-medication with antibiotics, how they decided which types of antibiotics were suitable for their medical conditions, and how they adjusted the proper course of antibiotics. The final section contained questions about respondents' beliefs and attitudes concerning antibiotic self-use. This questionnaire had been tested on a small number of eligible persons as a pilot study to test the reliability of the questions and the time needed to conduct an interview. Then, proper corrections and adjustments were made.
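The sample-size formula above can be evaluated in a few lines. The sketch below reproduces the calculation for p = 0.50 and d = 0.05 at a 95% confidence level, using the conventional Z = 1.96, and then applies the 10% non-response allowance described in the text.

```python
# Minimal sketch of the sample-size formula n = p*(1-p)*(Z/d)^2
# for estimating a proportion, plus a 10% non-response allowance.
import math

def sample_size(p, d, z):
    return p * (1 - p) * (z / d) ** 2

p = 0.50   # expected proportion practicing self-medication
d = 0.05   # tolerated margin of error
z = 1.96   # Z value for a 95% confidence level

n = sample_size(p, d, z)
n_adjusted = math.ceil(n * 1.10)  # add 10% for non-response
print(f"base n = {math.ceil(n)}, with 10% allowance = {n_adjusted}")
```

This gives a base sample of roughly 385 and about 423 after the allowance, consistent (up to rounding) with the 422 respondents included in the study.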
Ethical Consideration

All the procedures of this study were reviewed and approved by the Institutional Review Board of the Faculty of Medicine, Minia University (Approval No.: 17-037). Prior to data collection, informed consent was obtained from all participants after supplying comprehensive information about the nature and the objectives of the study.

Statistical Analysis

The Statistical Package for the Social Sciences (SPSS) for Windows, version 20, was used for data entry and analysis. Quantitative data were presented as mean and standard deviation, while qualitative data were presented as frequency distributions and compared by the Chi-square test. Risk ratios were estimated by calculating odds ratios (OR), and a multivariate logistic regression analysis was performed. The lowest accepted level of significance was ≤0.05.

Results

This study included 422 participants, whose ages ranged from 18 to 72 years with a mean of 34.5 ± 13.4. There were 45.7% (n = 193) males and 54.3% (n = 229) females. Of all the participants, 33.9% (n = 143) were currently using medication at the time the study was conducted. Eighty-eight of those 143 (61.5%) thought that they had enough knowledge about the used medication regarding effectiveness, dose, and side effects.
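The odds ratios mentioned in the statistical analysis above can be illustrated with a standard 2×2 calculation. The sketch below computes an OR and its Woolf 95% confidence interval from a fabricated exposure table; the counts are not the study's data.

```python
# Minimal sketch: odds ratio and Woolf 95% CI from a 2x2 table.
#                  self-medication   no self-medication
# exposed:                a                  b
# unexposed:              c                  d
# The counts below are fabricated for illustration.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

a, b, c, d = 120, 40, 60, 50  # hypothetical counts
or_, lo, hi = odds_ratio_ci(a, b, c, d)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

An OR of about 2 with a CI excluding 1, as in this fabricated example, is the kind of result behind statements such as older respondents being about twice as likely to self-medicate.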
The commonest reason for self-medication was that the illness was perceived as minor (59.7%). More than 40% of self-medicated respondents indicated that previous experience with the treatment was a reason for self-medication. About one-third of self-medicated respondents (29.8%) indicated that they did so because they lacked the time to visit formal health care facilities, and one-quarter indicated that the cost of consulting a doctor was a reason for self-medication (Table 3).

The most common perceived illnesses for self-medication were common cold (90.6%), headache (71.1%), cough (69.5%), sore throat (68.5%) and toothache (38.9%), while 50.3% reported other complaints, mainly gastrointestinal (GIT) problems. About 62% of respondents had used un-prescribed antibiotics for some illness, and 47.3% had used antibiotics more than once during the month prior to the survey. About 44% decided on the type of antibiotic needed for their illness from a previous prescription, and about 42% of participants determined the proper dose of the antibiotic by asking the pharmacist (Table 4).

Thirty-six percent of respondents were aware that inappropriate use of antibiotics leads to antibiotic resistance, and 63.5% knew that antibiotics can cause adverse drug reactions. Furthermore, 59.7% used antibiotics only until the disappearance of symptoms, and 73% of participants thought that antibiotics are effective in treating both bacterial and viral infections (Table 5).

Discussion

The prevalence of self-medication found in this study was 73%. Our estimate is lower than the figure reported in a previous study conducted in Alexandria in 2009, which found that 81.1% of participants practiced self-medication [11]. Similarly, a study conducted in Karachi found that self-medication was prevalent among 76% of the population [12]. On the contrary, another study among the Jordanian population showed a lower prevalence (42.5%) [13]. About 65% of respondents thought that over-the-counter medicines are as effective as those prescribed by a doctor; this finding is close to that reported by Hassali et al. [14] in Malaysia (62.7%). As for personal behavior when experiencing adverse reactions to medicines, 60.4% and 26.5% of participants said that they consult the treating doctor and the pharmacist, respectively; this is in accordance with a study of the Danish population, which revealed that 73% and 35.2% consulted a doctor and a pharmacist, respectively, when experiencing adverse reactions [15].

In the current study, discontinuation of medicine on improvement was reported by 64.2% of respondents. Sallam et al. (2009) reported that 69.3% of participants in their study used medicine until the complaint disappeared [11].

More than half of the participants (56.9%) agreed that some medical complaints could be assessed and solved by a pharmacist, and the most common condition for which they said the pharmacist could prescribe drugs was common cold and flu, reported by 38.8%. A study among the Indian population reported that 69% of respondents agreed to consult the pharmacist for common conditions such as common cold, headache and flu [16].
Sources of Medications and Information on Self-Medication

In line with findings from several studies [13, 17, 18], our study revealed that drugs purchased from private pharmacies were the most commonly used source of self-medication, reported by the majority of self-medicated individuals (86.7%); the use of left-over medicine was also prevalent, reported by about 80% of respondents. Easy access to medicines from pharmacies without a prescription could explain the high percentage of purchases from private pharmacies as a major source of self-medication. Keeping medicine at home is also an important concern, as it increases the possibility of self-medication and of mistakes in proper consumption.

Regarding the source of information about the drugs used for self-medication, the commonest source was the pharmacist, reported by about 92% of respondents, followed by respondents' own experience or knowledge from previous episodes (84.7%). The overall pattern of information sources shows that self-medication practices among adults in this study were not influenced by advertisements or the Internet; these were the least common sources of information, reported by only 6.5% of respondents. This is similar to a study conducted in Saudi Arabia [18], which revealed that 74% of participants had the pharmacist as their source of information, followed by their own experience (50.8%), with 16.4% relying on the Internet and advertisements. It is, however, in contrast to a study conducted by Chui et al. [19], which showed that more than half of the participants never consulted a pharmacist on how to manage minor disorders.

Reasons for Using Self-Medication

The current study revealed that the commonest reason for self-medication was that the illness was perceived as minor (59.7%), followed by previous experience with the treatment (40%). About one-third of self-medicated respondents (29.8%) indicated that they did so because they lacked the time to visit formal health care facilities, and one-quarter indicated that the cost of consulting a doctor was a reason for self-medication. These findings are comparable to the results of a study performed by Sallam et al. (2009) [11] in Alexandria, who found that the commonest cause was the illness being perceived as minor (44.5%), followed by previous experience with the treatment (31%), with 24% indicating other reasons. Another study, conducted by Swetha R and Usha R, revealed that a long wait at clinics (31.6%), the mild nature of the illness (27.56%) and financial problems (17.35%) were the most common reasons for adopting self-medication among the Indian population [20].

The most common perceived illnesses for self-medication were common cold (90.6%), followed by headache (71.1%) and cough (69.5%). This is similar to the finding by Noori [21] that common cold accounted for a high percentage of the symptoms that led to self-medication. However, a study conducted in Sri Siddhartha, India, showed that cough, headache and fever were the common symptoms for which participants practiced self-medication [20].

Antibiotic Self-Medication

Self-medication with antibiotics was reported by 260 of 422 respondents (61.6%). This is in agreement with Khan et al. (2011), who found that 69% of the studied participants used antibiotics without a prescription [22]. Other studies on self-medication with antibiotics have reported prevalence rates of 74% in Sudan [23], 46% in Jordan [24] and 78% in Greece [25].
Among the 260 respondents who took unprescribed antibiotics, 47.3% had used antibiotics more than once during the past month. About 44.2% relied on previous experience in choosing such antibiotics, and 42.3% of participants determined the proper dose of the antibiotic by asking the pharmacist. These findings are comparable to the results of a study performed by Widayati et al. (2011) [26], who found that 54% decided the type of antibiotic based on a previous prescription and 52% obtained their information from the pharmacist.

Unfortunately, only 36% of respondents were aware that inappropriate use of antibiotics leads to antibiotic resistance. In contrast, Noori [21] reported that 59.6% of respondents in Kufa, Iraq, were aware of the antibiotic resistance caused by irrational use of antibiotics. More than half (59.7%) believed that antibiotics should be used only until relief of symptoms. This was high compared with a study carried out in Lithuania [27], where 10% said that antibiotics should be used until relief of symptoms.

The current study revealed that 73% of the sample thought that antibiotics are effective in treating both bacterial and viral infections. Pavydė et al. [27] showed that almost half of their respondents incorrectly identified antibiotics as being effective in treating either viral (26.0%) or mixed (bacterial and viral) infections (21.7%).

About two-thirds of respondents (63.5%) knew that antibiotics can cause adverse drug reactions. This was low compared with a study carried out in Lithuania [27], where the majority of respondents (92.9%) knew this fact.

Factors Associated with Self-Medication

The results highlight the association between the practice of self-medication and increasing age (>40 years; OR = 2.28; 95% CI 1.12-4.67) and professional employment (OR = 3.44; 95% CI 1.40-9.09). This is similar to the finding by Moraes et al. [28] that increasing age (OR = 1.24) and professional employment (OR = 1.21) were significant predictors of self-medication. We also found that the larger the family, the greater the use of self-medication (OR = 2.04; 95% CI 1.84-2.33); a sketch of this type of multivariate analysis follows the Limitations section below. This is in accordance with Widayati et al. (2011) [26], who found that a larger family size (>4 members) is an important risk factor for self-use of medicine (OR = 1.2).

However, we found no significant association between being covered by health insurance and practicing self-medication. This is in line with a study conducted by Sarahroodi et al. (2012), who reported no statistically significant difference in self-medication between those with and without medical insurance [29]. On the contrary, Widayati et al. (2011) [26] found that Indonesians who had perceived access to health care (having insurance) were 1.47 times more likely to self-medicate than those who reported no access to health care (OR = 1.47).

Limitations

The most important limitation is that the answers reported by the respondents could not be validated. Economic conditions were not investigated in the current study. The perceptions and practices of rural residents and the impact of socioeconomic status on self-medication need to be investigated in further studies.
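The adjusted odds ratios in the factors analysis above come from a multivariate logistic regression. The sketch below shows the general shape of such an analysis in Python with statsmodels; the predictor names mirror the study's variables, but the data are simulated for illustration only:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 422

# Simulated binary predictors standing in for the survey variables
df = pd.DataFrame({
    "age_over_40":  rng.integers(0, 2, n),
    "professional": rng.integers(0, 2, n),
    "family_gt4":   rng.integers(0, 2, n),
})
logit_p = -0.5 + 0.8 * df.age_over_40 + 1.2 * df.professional + 0.7 * df.family_gt4
df["self_medication"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

X = sm.add_constant(df[["age_over_40", "professional", "family_gt4"]])
fit = sm.Logit(df["self_medication"], X).fit(disp=False)

# Exponentiated coefficients are the adjusted odds ratios with 95% CIs
print(np.exp(pd.concat([fit.params, fit.conf_int()], axis=1)))
```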
Conclusion

The results of this study confirm that self-medication is a relatively frequent problem in our community, one that could lead to an increase in drug-induced disease and in wasteful public expenditure. This indicates the need for a change in perceptions and practices towards the safe use of medicines. Interventions at different levels are required to reduce the frequency of medication misuse. The community should be educated about the appropriate use of drugs and their adverse effects; this would require massive health education aimed at behavioral change and strict precautions against the irrational use of antibiotics.

Table 1. Perception and practice of self-medication among the study participants in Minia city, September to November, 2016 (extract).
  Have you ever treated yourself (self-medicated) during the last month?
    Yes: 73.0% (n = 308)
    No: 27.0% (n = 114)
  Do you believe that over-the-counter medicines are as effective as those prescribed by the doctor? (responses reported in the text)

Table 2. Socio-demographic characteristics of the study participants in Minia city, September to November, 2016. *Chi-square test was used.

Table 3. Sources of medications and information on self-medication and reasons for using self-medication during the last month, among adults, Minia city, September to November, 2016 (n = 308). ªNumbers do not add to 100% as respondents might have given more than one reason.

Table 6. Logistic regression analysis of factors independently associated with self-medication among participants (n = 422).
Participating in Longitudinal Observational Research on Psychiatric Rehabilitation: Quantitative Results From a Patient Perspective Study

Background: Longitudinal observational studies play an important role in evidence-based research on health services and psychiatric rehabilitation. However, information is missing about the reasons why patients participate in such studies and how they evaluate their participation experience.

Methods: Subsequent to their final assessment in a 2-year follow-up study on supported housing for persons with severe mental illness, n = 182 patients answered a short questionnaire on their study participation experience (prior experiences, participation reasons, burden due to study assessments, intention to participate in studies again). Basic respondent characteristics as well as symptom severity (SCL-K-9) were also included in the descriptive and analytical statistics.

Results: Helping other people and curiosity were cited as the main initial reasons for study participation (>85%). Further motives were significantly associated with demographic and/or clinical variables. For instance, "relief from boredom" was more frequently reported by men and by patients with substance use disorders (compared to mood disorders), and the participants' motive "to talk about illness" was associated with higher symptom severity at study entry. Furthermore, only a small proportion of respondents indicated significant burdens due to study participation, and about 87% would also participate in future studies.

Conclusions: The respondents gave an overall positive evaluation of their participation experience in an observational study on psychiatric rehabilitation. The results additionally suggest that health and social care professionals should be responsive to the expectations and needs of patients with mental illness regarding participation in research.

INTRODUCTION

To improve and develop psychosocial health care and rehabilitation, empirical research is needed. This research relies, to a large extent, on information that comes from the people who use social and medical support services. Overall, people with mental health problems seem to have positive attitudes toward psychiatric research and mostly indicate altruistic reasons for participation (1)(2)(3)(4)(5). However, patients' willingness to participate in research is influenced by different variables from three major domains (6): sociocultural and demographic factors (e.g., age, gender); individual experiences and attitudes (e.g., prior research experience, general attitudes toward research); and clinical factors (e.g., diagnosis, severity of illness). Within this framework, participation is also influenced by specific characteristics of the study (6). For instance, in a study with n = 763 psychiatric patients, Schäfer et al. (4) found a tendency to approve of psychosocial (e.g., rehabilitation, role of the family) rather than biological research topics (e.g., genetics, biological treatments). In addition, "invasive" methods like medication trials achieved the lowest acceptance rate (58%), while the highest willingness to participate was found for studies with questionnaires (91%). Aside from such general attitudes toward research, studies have, of course, also focused on the extent to which individuals with mental disorders are affected when participating in research projects. Here it has become apparent that only a minority of participants in psychiatric research become distressed, and without evidence of longer-term harm (7).
Although often treated with particular caution, this is also true for trauma-focused research in people with mental illness (8, 9). Nevertheless, researchers must of course be mindful that rates of adverse reactions might exceed those of community-based samples (10, 11).

For evidence-based research on health services and psychiatric rehabilitation, observational studies play an important role, because randomized controlled trials, as the scientific gold standard, may not be a feasible option in real-world community settings (12)(13)(14). Thus, the present study aims to address: (a) What reasons motivate patients to participate in a longitudinal observational study on psychiatric rehabilitation? (b) What are the burdens for the study participants? (c) To what extent are reasons and burdens associated with possible moderators such as socio-demographics, prior research experience, symptom severity or diagnostic group?

Study Setting

Given the lack of systematic empirical research on supported housing for non-homeless people with severe mental illness, we conducted an observational follow-up study to compare clinical and functional outcomes of supported housing (SH) vs. residential care (RC) (15). In this prospective study, we included n = 257 outpatients with severe mental illness who were intended to enter SH or RC or had just entered one of these rehabilitation programs. The study project was approved by the local IRB (University of Muenster Ethics Committee, 2017-149-f-S).

Study Procedure

The underlying observational study comprised three assessment time points across 2 years: baseline at study entry (t1), an intermediate assessment one year later (t2), and a final assessment after two years (t3). Aside from detailed documentation of socio-demographic and clinical variables at baseline, all three assessments included numerous questionnaires [e.g., Social Functioning Scale (16, 17); Manchester Short Assessment of Quality of Life (18); Symptom Checklist Short form (19); Client Sociodemographic and Service Receipt Inventory (20, 21); Oxford Capabilities Questionnaire-Mental Health (22); Camberwell Assessment of Need Short Appraisal Schedule (23, 24)], short interview sections on psychosocial care and clinical course parameters, and a short cognitive skills test [Trail-Making-Test (25)]. Many instruments were available for self-completion; however, if patients pointed out any obstacles in doing so, the questionnaires were read aloud by the interviewers. Overall, a wide range of topics was addressed, e.g., daily living skills, mental health symptoms, social networks, life satisfaction, the health care system, experiences with justice and police, drug use, and family relationships. In general, the whole assessment procedure took about 60-90 min, depending on how many breaks were taken and how detailed some answers were. In some cases, the assessment session took up to 2 h or was split into two appointments. Subsequent to the last assessment at the third study time point, participants were given a brief questionnaire to retrospectively evaluate their study participation experience (see below, Measures).

Participants

Apart from age (18-69 years), inclusion criteria for the underlying observational study were (i) a severe mental illness, diagnosed by a psychiatrist as defined by ICD-10, which (ii) had lasted at least 2 years and (iii) was associated with a handicap that constitutes the right to supported housing according to the German social law (SGB IX).
Exclusion criteria were insufficient German language skills and severe physical illness. Prior to the first study assessment, outpatients were provided with detailed verbal and written study information and gave their written informed consent to participate in the study. At the beginning of each study assessment, participants were re-informed about the study background and about the options to skip parts of the study or to take a break at any time. The participants received compensation of 10 Euros for the first and 20 Euros for each of the two following assessments.

N = 257 individuals were initially included in the observational study; n = 56 dropped out before the final assessment, n = 15 declined to answer the study participation questionnaire, and n = 4 individuals had over 50% missing values. Thus, the present study is based on data from n = 182 participants. These enrolled respondents did not significantly differ from the group of dropped-out and excluded patients in terms of gender (p = 0.780), diagnostic group (p = 0.166), psychopathological symptom burden (p = 0.733), or housing type.

Sociodemographic and Clinical Data

Sociodemographic and clinical information (e.g., age, occupational status, clinical diagnosis) was obtained through a structured interview and from additional medical information provided by staff members of the psychiatric rehabilitation institutions during the baseline assessment. The burden of psychopathology was assessed via self-report with the 9-item Symptom Checklist [SCL-K-9 (19)], both at the first and at the last assessment. We report the SCL-K-9 mean score (0-4) as a Global Severity Index [GSI (26)].

Evaluation of Study Participation

Subsequent to the final assessment, 2 years after the start of the index study, participants were asked to provide a retrospective evaluation of their study participation by means of a short questionnaire. To make data collection quick and easy, yes-no questions were favored. The questionnaire was therefore compiled from single questions taken from existing scales [Hamburg Attitudes to Psychiatric Research Questionnaire (4, 27)] or previous studies, as can be seen in Table 1.

Data Analysis

In addition to baseline socio-demographic and clinical sample data, frequencies and percentages were obtained for all items of the study participation questionnaire. Using the response categories (e.g., yes vs. no), appropriate group comparisons were then performed with respect to possible influencing factors. For this, we selected the following variables according to the three domains of the model by Pfeiffer et al. (6): age and gender (sociocultural and demographic factors); prior research experience (individual experiences and attitudes); and diagnosis, symptom severity at study entry and after 2 years, and distress due to study participation (clinical factors). Additionally, the type of supported accommodation was included to control for possible influences of the purpose of the underlying study. Depending on variable scaling and distribution, parametric (t-test) or non-parametric (Mann-Whitney test, Chi-square test) statistical tests were applied. The normal distribution assumption for metric data was checked using the Kolmogorov-Smirnov test. Data were analyzed using SPSS Statistics, version 25. The general significance level was set to 0.05, two-tailed.

Participant Characteristics

Demographic and clinical characteristics of the participants are provided in Table 2.
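The test-selection rule described in the Data Analysis section (parametric test when normality holds, non-parametric otherwise) can be illustrated with a short Python sketch using scipy; the authors worked in SPSS, so this is an illustrative translation of the decision logic, not their code, and the GSI scores below are made up:

```python
from scipy import stats

def compare_groups(x, y, alpha=0.05):
    """Choose a two-group test based on a normality check of each sample."""
    # Kolmogorov-Smirnov test on standardized values against the normal CDF
    normal = all(
        stats.kstest(stats.zscore(g), "norm").pvalue >= alpha for g in (x, y)
    )
    if normal:
        return "t-test", stats.ttest_ind(x, y)
    return "Mann-Whitney", stats.mannwhitneyu(x, y, alternative="two-sided")

# Example with made-up GSI scores for two groups
a = [1.2, 1.8, 2.1, 1.5, 1.9, 2.4]
b = [0.9, 1.1, 1.6, 1.3, 1.0, 1.4]
print(compare_groups(a, b))
```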
The most prevalent primary diagnoses in the sample were substance-related disorders (29.1%), mood disorders (22.5%), and schizophrenic disorders (19.2%). However, about 75% of the participants also had comorbid psychiatric diagnoses, and about two-thirds had additional (chronic) physical impairments or acute somatic conditions. As could be expected, the average burden of psychopathological symptoms in the participants at baseline and after 2 years was about two standard deviations above the values of a representative population sample [Global Severity Index/GSI: M = 0.41, SD = 0.51, N = 2,057; (19)], but within the range of increased values typically found in psychiatric inpatients [e.g., GSI: M = 1.7, SD = 0.83, N = 2,727; (26)].

Total Sample

As can be seen in Table 3, around 71% of all participants indicated that they had never participated in a scientific study before. The main reasons for study participation, in retrospect, were to help other people (86.8%), curiosity (85.2%), and a genuine interest in research (73.6%). The reasons "relief from boredom" (41.8%) and "an opportunity to talk about illness" (48.4%) were each affirmed by almost one in two. The prospect of monetary compensation for participation played a role for more than one-third of individuals (35.7%), and one in ten felt prompted to participate by others (10.4%).

More than half of the whole sample (58.8%) did not feel burdened in any way by participating in the study and its repeated assessment procedures. Only <5% (n = 9) of the participants indicated that they had felt "quite a bit" or "very much" burdened. Of these respondents, n = 7 provided information about what had burdened them the most: too many questions (mentioned twice), too long surveys, dealing with uncomfortable issues, questions about sexuality, questions about family, and reflecting on one's health status using numbers. At the end of the questionnaire, a large proportion of respondents (87%) indicated that they would like to participate in studies again in the future.

Influential Factors

To be consistent with the dichotomous yes-no questions and to balance group sizes, the response categories of the burden item (not at all vs. a little bit/moderately/quite a bit/very much) and of the re-participation question (yes vs. no/don't know) were dichotomized for the following analyses. Moreover, non-parametric Mann-Whitney tests were used because of partially unbalanced group sizes and non-normally distributed metric variables in several subgroups (KS test: p < 0.05).

Age

For all evaluation items, there were no differences in age between response categories (p > 0.05), except for the participation reason "Other people wanted me to do so": the n = 19 participants who said they had initially participated because of others were on average somewhat older than those who denied this item.

Gender

A significant difference with respect to gender occurred only for the participation reason "To get relief from boredom" (Chi² = 6.33, Phi = −0.189, p = 0.014): men (50.5%) were significantly more likely to affirm this reason than women (31.5%).

Supported Accommodation Type

The questionnaire responses did not show any significant differences between persons from residential care and supported housing (p = 0.339-1.00).

Prior Research Participation

There were no significant differences between persons with and without prior research experience, either in the various reasons for participation (p = 0.107-1.00) or in the level of burden experienced (p = 1.00).
Among those persons who had never participated in research before, the proportion who would consider further study participation (90.7%/n = 117) did not significantly differ from the group with prior experience (92.9%/n = 39; p = 0.766).

Clinical Diagnosis

Analyses of possible differences by diagnosis were at first based on the five distinct ICD-10 diagnosis groups (see Table 2). The items of the evaluation questionnaire did not show any significant differences depending on the diagnostic group (p = 0.063-0.931), apart from a tendency regarding the participation reason "To get relief from boredom" (p = 0.063). However, this trend became statistically significant (Chi² = 8.35, Phi = 0.257, p = 0.015) when the analyses were based solely on the three most frequent diagnosis groups (F1-F3, 71% of the total sample). Subsequent post-hoc subgroup analyses with an adjusted significance level (0.05/3 = 0.0167) showed the following results: participation as a relief from boredom was reported significantly more often in the substance-use disorders group than in the mood disorders group (54.9%/n = 28 vs. 25%/n = 10; Chi² = 8.24, Phi = −0.301, p = 0.005). When comparing the mood and schizophrenic disorder groups, there was only a slight trend (p = 0.088), and no difference appeared between the schizophrenic and substance-related disorders groups (45.7%/n = 16 vs. 54.9%; p = 0.511). Since the participation reason "To get relief from boredom" was also associated with gender (see above), possible gender-related effects depending on diagnosis group were exploratively examined, but without significant results (p = 0.120-1.00).

Symptom Severity

The level of psychopathological symptom severity assessed at the last study interview, and thus at the same time point as the evaluation questionnaire, showed no associations with prior research experience, the various reasons for participation, participation burden, or further participation interest (p = 0.199-0.996). However, when the symptom severity from study entry was taken into account, the following significant relationship emerged: those respondents who agreed with the participation reason "To talk about illness" (n = 88/48.4%) had higher baseline symptom severity scores than persons who denied this reason.

Burden Due to Study Participation

No socio-demographic or clinical variable, and none of the different reasons for participation, was significantly associated with the level of burden due to participation (p = 0.078-1.00). However, among those respondents who felt at least somewhat burdened (n = 73), significantly fewer persons were willing to participate in a study again compared with those who felt not burdened at all (80.8 vs. 94.3%; Chi² = 7.96, Phi = 0.206, p = 0.007).

DISCUSSION

Long-term observational studies provide an important basis for empirical research on psychiatric rehabilitation and mental health service care. However, little is known about how people with severe mental illnesses evaluate their actual experience of participating in such observational studies.

General Evaluation and Experienced Burden

The present results from a 2-year observational study reveal an overall positive evaluation, with participants responding favorably with respect to their research experience. For instance, while about 71% of the patients had never participated in a study before, at the end almost 90% indicated that they would consider participating in future research.
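The 2×2 comparisons above can be reproduced from the reported group sizes. The sketch below assumes the cell counts implied by the percentages (28 of 51 substance-use vs. 10 of 40 mood-disorder respondents affirming the boredom item) and recomputes the statistics with scipy rather than SPSS:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: substance-use vs. mood disorders; columns: affirmed vs. denied
# "relief from boredom" (counts inferred from 54.9%/n=28 and 25%/n=10)
table = np.array([[28, 23],
                  [10, 30]])

chi2, p, dof, _ = chi2_contingency(table, correction=False)
phi = np.sqrt(chi2 / table.sum())  # effect size for a 2x2 table

print(f"Chi2 = {chi2:.2f}, |phi| = {phi:.3f}, p = {p:.4f}")
# -> Chi2 = 8.24 and |phi| = 0.301, matching the reported values
#    (the exact p differs slightly depending on the test variant used)
```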
This proportion of 90% roughly corresponds to the 86% from a study on managed care in severe mental illness (10), and it was even somewhat higher than the 75% from an RCT on antidepressant medication (28). Moreover, in our study this proportion did not differ between persons with and without previous research experience, so it can be concluded that even patients participating in research for the first time gained a good impression from their experiences in a longitudinal observational study. In their paper "'That was helpful... no one has talked to me about that before': Research participation as a therapeutic activity," Lakeman et al. (29) argued that psychiatric research participation involves processes that are frequently therapeutic in nature or that often benefit participants, e.g., telling and retelling details about an aspect of one's life to a researcher.

The majority of participants were not affected by the repeated comprehensive study assessments and did not feel burdened by them at all. Only 5% reported feeling quite a bit or very much burdened. Despite different scaling and reversed polarity, these results are largely consistent with data from an earlier study by Boothroyd (10). In that study, n = 523 participants with a severe mental illness were asked to rate their overall research experience after participating in a 12-month managed care study. While 96% of the respondents indicated that it was a (very, somewhat, slightly) positive experience, 4% perceived it as a (slightly, somewhat, very) negative experience. Furthermore, Jorm et al. (7) performed a systematic review on whether participation in studies that involve (questionnaire) assessment of symptoms, prevalence or risk factors of psychiatric disorders causes distress. The review's main conclusion was that only "a minority of participants (generally <10 %) experience distress" (p. 919).

In our study, the participation burden was related neither to socio-demographic nor to clinical variables, such as diagnosis or symptom severity. Although participation-related distress has been found to be associated with poorer mental health and higher symptom severity (7), our results are in line with other studies that found no association between study distress and psychopathology (9, 30). Thus, the present findings may indicate that longitudinal observational research in the field of psychiatric rehabilitation can be conducted with a variety of patients with diverse clinical (and socio-demographic) characteristics. Nonetheless, our results also showed that there remains a risk that persons who felt some form of burden from research participation will not enroll in future studies. Even if this refers to rather few cases, this finding highlights that (negative) previous experiences with research can have an impact on future study participation [see also (31, 32)].

Reasons for Participation and Their Moderators

Consistent with other research, our results showed that the most frequently reported reason for initial study participation was to help other people [e.g., (2, 27, 33)]. Although motivation for study participation is a multi-dimensional construct (6), altruism has been identified as one of the most relevant factors in medical research settings [e.g., (34, 35)]. In the present study, the least frequently indicated reason for research participation was "Others wanted me to do so." It is notable that those few people who cited this reason were slightly older than those who rejected it.
It is possible that older people felt more urged to participate, because older age in psychiatric patients has been found to be associated not only with a poorer understanding of clinical trial proposals (36), but also with more frequent refusal of research participation (37, 38). However, these results are somewhat limited, because the respondents in the present study were also somewhat older overall than those who were not enrolled in the retrospective evaluation (see Participants). With respect to the other reasons for participation, curiosity, helping others, research interest, and monetary compensation showed no correlations with any of the influencing variables examined. However, "relief from boredom" was associated with male gender and with having a substance use disorder (as compared to mood disorders). A higher prevalence of boredom experience among men than women has also been confirmed by large community-based studies (39)(40)(41). Moreover, for in- and outpatients with different mental illnesses, the experience of boredom is common and of particular relevance to their quality of life (42)(43)(44)(45). Thus, distraction from boredom was also cited equally often as a hypothetical reason for research participation by patients with depression and schizophrenia (4). Our results confirm this finding, but additionally point to the relevance of boredom distraction as a salient reason for research participation in patients with substance-use disorders. Distraction from boredom plays a prominent role in managing symptoms of illness for these patients, as boredom has been identified as being among the most common aversive experiences linked to withdrawal symptoms and reasons for relapse (46)(47)(48).

Finally, those persons who agreed with the reason "to talk about my illness" had higher symptom severity ratings at study entry than those who disagreed. Here, one can conclude that the possible benefit of being able to talk about the illness might have prompted more severely ill individuals to participate in the observational study. This could possibly be related to the social isolation and feelings of loneliness that are common among people with severe mental illness (49) and that can be of major concern even in supported housing settings (50). Data from a recent clinical study have shown that more severe psychopathological symptoms in persons with a mental illness are correlated with greater loneliness, even when objective social isolation and socio-demographic and clinical confounders are controlled for (51). Although we did not assess loneliness here, the present results can be interpreted in the sense that the greater the symptom burden, the greater the loneliness, and thus participating in a psychosocial research study opens up an occasion to talk to someone "from outside" about one's own mental health problems. This is supported by an interview study by Woodall et al. (52), in which the prospect of talking to other people about their experiences was found to act as a facilitator of research participation in people with a first-episode psychosis.

Implications for Research Practice

The present results have shown that even when participating in a longitudinal observational study with repeated and comprehensive psychosocial assessments, only a minority of participants become distressed, thus strengthening empirical findings from previous studies (7, 53).
This is of particular importance because, during the initiation of psychiatric studies, the appropriateness of the research is often questioned with respect to the vulnerability of the targeted patients (54). For instance, in their systematic review on the experiences of vulnerable people participating in research on sensitive topics, Alexander et al. (55) concluded that although "there is little evidence of harm to participants . . . researchers frequently experience obstacles and the phenomenon known as 'gatekeeping' when attempting to conduct research amongst vulnerable populations" (p. 1). Howard et al. (56) identified paternalism as one of the major recruitment difficulties during an RCT of supported employment for people with severe mental illness. Through interviews, the authors were able to establish that mental health care coordinators focused more on their own perception of patient needs than on providing patients with the opportunity to decide whether they would like to participate in research. Therefore, the authors concluded that, due to such paternalistic attitudes, patients were often denied access to research trials (56). Hughes-Morley et al. (57) conducted a meta-synthesis on factors affecting recruitment into depression trials and identified a "protecting the vulnerable patient" theme in the literature. That is, from the perspective of professional gatekeepers, patients with depression were typically seen as vulnerable and in need of protection against the additional burden of research participation. However, the present findings clearly show that the majority of participants do not feel burdened by repeated comprehensive study assessments. Moreover, the results suggest that longitudinal observational research can be conducted with psychiatric outpatients with various clinical and socio-demographic characteristics. In addition to general motives such as altruism and curiosity, research participation depends on certain clinical characteristics. For instance, reducing boredom through study participation might be of particular relevance for patients with substance-use disorders, and being able to talk about one's illness appears to be a motivator for patients with more severe symptoms. Thus, however honorable and well-intentioned, paternalistic gatekeeping and overprotection by health care professionals could hinder patients from fulfilling their personal motives for research participation (55). Besides such barriers to individual participation, this may also lead to constraints on scientific outcomes: if patients are withheld from study participation because of overestimated negative effects, this can affect data variance and thus weaken the strength of empirical results.

Limitations

Instead of assessing motives for a hypothetical willingness to participate in psychiatric research (1, 2, 4), the present study asked in retrospect about the reasons for participating in a longitudinal rehabilitation study. However, this comes with the constraint that such retrospective assessments may be imprecise due to recall bias (58). Thus, the reasons mentioned for participation might not necessarily have been the initial motives, but rather may have been influenced by the study experience. If questions on participation reasons were already implemented at study baseline, this would not only allow comparison with later retrospective questions on participation reasons, but might also offer interesting insights for a detailed analysis of drop-outs.
Another limitation of the present study is that the results were gathered as part of a research project on supported housing for people with mental disorders. Although the housing conditions of the index study did not appear to have affected the target parameters, the results should nevertheless be validated in other areas of social psychiatric rehabilitation, such as work, leisure time, or social participation. Moreover, future studies should be more differentiated in assessing the burden of study participation, instead of just asking about "burden" in general, as was done in the present questionnaire. For instance, Wenemark et al. (59) identified five categories of respondent burden in health-related surveys: cognitive burden (e.g., difficult to understand), unnecessary work (e.g., repetitive questions), distrust (e.g., manipulation), offending questions (e.g., too personal), and distress (e.g., worry, sadness). In addition, a parallel rating by care coordinators regarding the potential burden of study participation on their patients would allow further interesting analyses of self- and proxy perceptions of this issue. Last but not least, future studies should also ask in more detail about potential benefits of study participation.

CONCLUSIONS

Outpatients with a severe mental illness gave an overall positive evaluation of their participation in a 2-year observational rehabilitation study. The majority reported no burden associated with the repeated comprehensive psychosocial assessments, and there was a high willingness to participate in studies again. Altruism and curiosity were cited as the most important reasons for participation in retrospect, and some of the participation motives were associated with socio-demographic and clinical variables. All in all, the burdens of research participation for patients with mental illness should not be overestimated by health and social care professionals. This is important not only in terms of adequately addressing patients' needs, but also for generating valid scientific results.

DATA AVAILABILITY STATEMENT

The dataset presented in this article is not readily available due to formal data protection reasons; the anonymized dataset of the present study is only available from the corresponding author on reasonable request. Requests to access the dataset should be directed to Lorenz B. Dehn, lorenz.dehn@evkb.de.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the University of Muenster Ethics Committee (2017-149-f-S). The patients/participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS

LD conceived and planned the research project, was involved in data collection, performed the statistical analyses, and drafted the original manuscript. MD supervised the study project and participated in manuscript preparation. IS was the project leader of the supported housing study and reviewed the article for publication. TB had oversight of the research activity and critically reviewed the manuscript draft. All authors read and approved the final manuscript.

FUNDING

This study on supported housing was supported by a grant from the Stiftung Wohlfahrtspflege NRW (Welfare Foundation North Rhine-Westphalia). The authors confirm that this foundation had no influence on study design, data interpretation or the writing of the manuscript.
We acknowledge the financial support of the German Research Foundation (DFG) and the Open Access Publication Fund of Bielefeld University for the article processing charge.
Ship-to-Shore Wireless Communication for Asynchronous Data Delivery to the Remote Islands

Nowadays, many people who live in the remote islands of Indonesia still face difficulties in accessing information. In locations where end-to-end communication is not available, an asynchronous approach can be used to send information in the form of digital data. In some areas, passenger ships or ferries can be used as physical carriers to deliver digital data to people on remote islands located within a particular range of the ships' passing routes. This paper reports the channel performance of a long-range WiFi connection over sea at 5 GHz, using a real ship route in the waters of Indonesia's North Sulawesi province as a sample scenario. The measurement results showed that the most stable ship-to-shore communication can be achieved for ±15 minutes at a maximum ship-to-shore distance of about 4 km. The maximum channel capacity was 120 Mbps for upload (from ship to shore) and 53 Mbps for download (from shore to ship), which is enough to deliver gigabytes of information to the people on the islands every time the ship passes by.

Introduction

Although, according to recent data, two-thirds of the world population now enjoys mobile phone connectivity [1], many people, especially on small and remote islands, remain completely cut off. These islands generally have small populations and low incomes, making it economically challenging to build out telecommunication infrastructure [2]. This situation makes it difficult for the inhabitants to communicate and to access information from the outside world. In such locations, where end-to-end communication is not available, an asynchronous approach can be used to send information in the form of digital data. This approach usually uses vehicles that physically carry a computer with a storage device and a limited telecommunication module (usually WiFi) between remote areas in order to effectively create a data communication link [3, 4]. In some areas, such as the North Sulawesi province of Indonesia, passenger ships or ferries can be used as physical carriers to deliver digital data to people on remote islands located within a particular range of the ships' passing routes. As the ships usually have regular routes and schedules, the data can also be expected to be delivered in a timely manner. This paper reports the channel performance of a long-range WiFi connection over sea at 5 GHz, using a real ship route in the waters of Indonesia's North Sulawesi province as a sample scenario.

Sample scenario & experiment

As the solution proposed in this paper uses passenger ships as physical carriers to deliver digital data to remote islands, it was necessary to determine a sample scenario in which the solution could be tested. To do this, continuous GPS tracking was conducted from the deck of a passenger ferry traveling its typical route from Manado to Tahuna. The data showed that the ship cruised at an average speed of 13.32 knots and passed the small island of Makalehi (coordinates 2°43'38.2"N 125°10'38.4") at a minimum distance of about 2.17 km from its shore. This island has a population of 1,287 in an area of 4.2 km² and is situated at the border of Indonesia's sea territory.
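The duration of the communication window in this scenario follows from simple pass-by geometry: the ship travels along an approximately straight track, and the link is usable while the ship stays inside the coverage circle of the shore station. The sketch below assumes a straight track, the 13.32-knot cruising speed and 2.17 km closest approach above, and the ~4 km usable range reported later in the results:

```python
import math

speed_kmh = 13.32 * 1.852   # ship speed: 13.32 knots in km/h
d_min_km = 2.17             # closest approach of the track to the shore station
range_km = 4.0              # usable radio range (from the measurement results)

# Length of the track segment inside the coverage circle (a chord)
chord_km = 2 * math.sqrt(range_km**2 - d_min_km**2)
window_min = chord_km / speed_kmh * 60

print(f"in-range track: {chord_km:.1f} km, window: {window_min:.1f} min")
# -> about 6.7 km and ~16 min, consistent with the ±15-minute stable window
```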
Makalehi island was chosen as the sample scenario because it is categorized as one of the small and outer islands by the Government of the Republic of Indonesia and currently has no reliable telecommunication infrastructure to provide data connectivity.

Figure 1: Configuration of equipment

To simulate the sample scenario, the experiment focused on measuring the channel capacity of a long-range WiFi connection between a moving station on a chartered boat, cruising at the same speed as the passenger ship mentioned earlier (±13.32 knots) and passing perpendicular to a base station on the shore at a distance of at least 2.17 km. The experiment was conducted in the Wori waters near Siladen island, with the base station situated at a fishing dock of Bawoho village, north of Manado city (coordinates 1°34'58.1"N 124°49'03.1"E).

Long-range WiFi antennas

The equipment of the long-range WiFi antenna system was divided into two subsystems: a) the ship side and b) the shore side. The ship side used a dual-polarity omnidirectional antenna (airMAX® Omni® AMO-5G13 from Ubiquiti Networks) with 13 dBi gain at 5 GHz, while the shore side used a sectoral antenna (airMAX® AM-5G19-120 from Ubiquiti Networks) with 19 dBi gain at 5 GHz. Both antennas were elevated 2 meters from the floor on iron tripods. Each antenna was connected to a wireless access point (Rocket®M airMAX® Base Station from Ubiquiti), monitored remotely from a laptop connected to the access point via UTP cable. Due to the lack of a power source on the boat and at the shore, all equipment on each side was powered by a deep-cycle 12 V 100 Ah battery (Luminous®) with an inverter. Figure 1 shows the configuration of the equipment.

Figure 2 shows the GPS tracking and timing of the ship's location while the channel performance was measured. The shore base station, located on the mainland, is shown as a white asterisk. The boat cruised from north to southwest toward the channel between Tongkaina Cape and Bunaken Island.

Figure 2: Ship's location GPS tracking

The measurement lasted 52 minutes, and the ship's distance to the base station is shown in Figure 3. As can be seen from this figure, the ship's closest position occurred 39 minutes after the measurement began (Figure 4). The signal levels changed in proportion to the distance between the ship and the base station, as shown in Figure 5. For comparison, Figure 5 also includes signal-level data from a directional antenna that was included in the experiment. Focusing on the results from the omnidirectional antenna, the wireless channel capacity, i.e., the maximum amount of traffic or signal that can move over the communication channel [5], is shown in Figures 6 and 7. Tx channel capacity denotes the capacity of data transmission from ship to shore, while Rx channel capacity denotes the capacity from shore to ship. As these data show, the channel capacity of both Tx and Rx increased significantly between 11:00 and 11:15 of the measurement time. The maximum Tx channel capacity was 120 Mbps and the maximum Rx channel capacity was 53 Mbps.

Conclusions

The measurement results showed that the most stable ship-to-shore communication can be achieved for a duration of ±15 minutes at a maximum ship-to-shore distance of about 4 km.
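As a rough plausibility check of the link at the maximum working distance, free-space path loss can be compared against the antenna gains reported above. The sketch below is a simplified budget that ignores sea-surface multipath, antenna patterns and cable losses; the transmit power and receiver sensitivity figures are assumed values typical of this class of 5 GHz equipment, not numbers from the paper:

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss: 20*log10(d_km) + 20*log10(f_MHz) + 32.44"""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

tx_power_dbm = 27          # assumed transmit power for this class of radio
gains_dbi = 13 + 19        # omni (ship) + sector (shore) antenna gains
loss = fspl_db(4.0, 5000)  # 4 km at 5 GHz

rx_dbm = tx_power_dbm + gains_dbi - loss
print(f"FSPL = {loss:.1f} dB, estimated RX level = {rx_dbm:.1f} dBm")
# -> FSPL ~ 118.5 dB, RX ~ -59 dBm, comfortably above typical receiver
#    sensitivities (~ -75 to -96 dBm depending on the modulation rate)
```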
The maximum channel capacity was 120 Mbps for upload (from ship to shore) and 53 Mbps for download (from shore to ship), which is enough to deliver gigabytes of information to the people on the islands every time the ship passes by. This shows that the asynchronous approach using ship-to-shore wireless communication can be used effectively for data delivery to remote islands.
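The "gigabytes per pass" claim can be made concrete: the deliverable volume is simply the stable window multiplied by the sustained throughput. The arithmetic below assumes the measured peak rates are sustained for the whole ±15-minute window, which is optimistic; real transfers would also lose some capacity to protocol overhead:

```python
window_s = 15 * 60  # stable communication window in seconds

for direction, mbps in [("upload (ship->shore)", 120), ("download (shore->ship)", 53)]:
    gigabytes = mbps * 1e6 * window_s / 8 / 1e9  # bits/s * s -> bytes -> GB
    print(f"{direction}: ~{gigabytes:.1f} GB per pass")
# -> ~13.5 GB up and ~6.0 GB down per ship pass under these assumptions
```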
Population genetic structure of the Asian bush mosquito, Aedes japonicus (Diptera, Culicidae), in Belgium suggests multiple introductions

Background: Aedes japonicus japonicus has expanded beyond its native range and has established in multiple European countries, including Belgium. In addition to the population located at Natoye, Belgium, locally established since 2002, specimens were recently collected along the Belgian border. The first objective of this study was therefore to investigate the origin of these new introductions, which were assumed to be related to the expansion of the nearby population in western Germany. Also, an intensive elimination campaign was undertaken at Natoye between 2012 and 2015, after which the species was declared eradicated. The species was re-detected in 2017, and thus the second objective was to investigate whether these specimens resulted from a new introduction event and/or from a few undetected specimens that escaped the elimination campaign.

Methods: Population genetic variation at nad4 and seven microsatellite loci was surveyed in 224 and 68 specimens collected in Belgium and Germany, respectively. German samples were included as a reference to investigate putative introduction source(s). At Natoye, 52 and 135 specimens were collected before and after the elimination campaign, respectively, to investigate temporal changes in genetic composition and diversity.

Results: At Natoye, the genotypic microsatellite make-up showed a clear difference before and after the elimination campaign. Also, the population after 2017 displayed increased allelic richness and a higher number of private alleles, indicative of new introduction(s). However, the Natoye population present before the elimination programme is believed to have survived at low density. At the Belgian border, clustering results suggest a relation with the western German population. Whether the introduction(s) occurred via passive human-mediated ground transport or, alternatively, by natural spread cannot yet be determined from the dataset.

Conclusion: Further introductions within Belgium are expected to occur in the near future, especially along the eastern Belgian border, which is at the front of the invasion of Ae. japonicus towards the west. Our results also point to the complexity of controlling invasive species, since 4 years of intense control measures were not completely successful at eliminating this exotic species at Natoye.

Keywords: Aedes japonicus japonicus, Introduction, Invasive mosquito, Population genetics, Temporal changes, Microsatellites, Nad4 haplotypes

Supplementary Information: The online version contains supplementary material available at 10.1186/s13071-021-04676-8.

Background

As a result of globalisation and international trade, non-native species are being introduced into Europe, where they may eventually establish reproducing and overwintering populations in new territories. The introduction of potential disease vectors is of major concern, since these constitute a threat to human and animal health. Mosquitoes (Diptera: Culicidae), such as Aedes species, are regularly introduced with the worldwide transport of used tyres, ornamental plants and water-holding machinery [1]. The Asian bush mosquito, Aedes japonicus japonicus (Theobald, 1901) (generic name following [2]), is a competent laboratory vector for a number of arboviruses [3], including the West Nile [4], Japanese encephalitis [5], chikungunya [6], dengue [6] and Zika viruses [7, 8]. Originally restricted to East Asia, the species is well adapted to the temperate climates of Europe, where it is now well established [9, 10].
The arrival and spread of this species in central and western Europe has been attributed to its broad ecological tolerance, adaptability and low degree of specialisation in the choice of breeding sites, and to its eggs withstanding desiccation and low temperatures [11-14]. The expansion and colonisation of new territories by the species is primarily passive and associated with human activities [1, 15].

Since the first detection of Ae. japonicus in Belgium in 2002 (at Natoye, municipality of Hamois, Namur province) [11], successive monitoring projects have surveyed the introduction and spread of this and other exotic mosquito species [16-18]. Natoye was the first place in western Europe where the species was found to be established [11]. Subsequently, it was found in Switzerland, Germany and France in 2008 [19-21], in Slovenia and Austria in 2011 [22], in Hungary in 2012 [23], in Croatia and the Netherlands in 2013 [24-26], in Italy and Liechtenstein in 2015 [27] and in Spain and Luxembourg in 2018 [28, 29]. The population at Natoye is the only one in Europe with a well-documented introduction pathway: Aedes japonicus was most likely introduced through the second-hand tyre trade located at this site [11]. The exact origin, however, is unknown, since imports arrived from various locations, including countries already colonised by the species, such as the USA [30]. Its presence at Natoye was confirmed in 2003, 2004, 2007-2009 and 2012-2014, but the species was never caught outside a radius of 3.5 km around the premises of the tyre trading company [16-18, 30, 31]. Therefore, the population was considered established but not expanding. From 2012 to 2015, an intensive control campaign aimed at eliminating the species from Natoye (mainly through mechanical source reduction and the use of larvicide), and since the species was not detected in 2015 and 2016, it was assumed to have been eliminated [32]. However, in 2017-2019 Ae. japonicus re-appeared [Deblauwe et al., Monitoring of exotic mosquitoes in Belgium (MEMO): Final Report Phase 7 Part 1: MEMO results. Antwerp: NEHAP, unpublished report; 33, 34], raising the questions of whether new specimens had been introduced and from where they originated, i.e., whether they represented undetected survivors of the elimination campaign and/or new colonisers from other source populations.

In contrast to the situation at Natoye, Ae. japonicus has rapidly spread throughout the southwest region of Germany following its first observation there in 2008 in the federal state of Baden-Wuerttemberg [35-37]. Its introduction pathway, however, is not clear. As the species has been monitored in Germany since 2010, its continuous spread and increasing population densities could be tracked [38, 39]. Aedes japonicus was subsequently detected in 2012 in the western region of the country (southern North Rhine-Westphalia and northern Rhineland-Palatinate) [40], in 2013 in the northern part (southern Lower Saxony and northeastern North Rhine-Westphalia) [41], and finally in 2015 in the southeastern region (Upper Bavaria) [15]. It is now considered well-established and no longer eradicable [38]. The western German population has been spreading since 2012, and it was predicted that the species would cross the border with Belgium in the near future, possibly as early as 2016 [39].
japonicus was monitored in Belgium between 2017 and 2019 at the Belgian-German parking lot of Lichtenbusch and along the road and highway between two cemeteries (Raeren and Rocherath) along the German border. Specimens were collected in an allotment garden in Eupen (province of Liège, Belgium) (Deblauwe et al., unpublished report). Aedes japonicus was also detected in 2018 during the monitoring of Aedes koreicus, another non-native mosquito species that has established in Belgium, in that same period at an industrial area in Maasmechelen (province of Limburg, Belgium) (Deblauwe et al., unpublished report). Hence, elucidating the relationships between these Belgian specimens and the western German population is of great interest to understand the introduction events in Belgium, and it might help customising surveillance and control efforts in Belgium. To uncover the relationships between the geographically separated European populations of Ae. japonicus, several population genetic investigations have been conducted in the past [15,24,38,42,43]. Highly polymorphic DNA regions were used in these studies, such as those associated with microsatellites and the mitochondrial NADH dehydrogenase subunit 4 (nad4) locus. These DNA markers enabled researchers to study the population genetic structure of Ae. japonicus [15,24,38,42] and the changes in allelic frequencies through space and time [43,44], and revealed several independent long-distance introductions into Europe [42]. Only a few Belgian specimens (N = 18) collected at Natoye in 2008 and 2010 were included in these population genetic analyses, revealing that the Natoye population had the lowest genetic diversity of all populations examined [24,42]. In Germany, the most recent study included specimens from the four above-mentioned geographically isolated populations (i.e. the southwestern, western, northern and southeastern populations), and identified two population clusters based on microsatellite data [43]. The specimens sampled in the west and southwest of Germany had high probabilities of belonging to each identified genotype group, respectively; those sampled in the north and southeast of Germany had mixed assignment probabilities. This latter study suggested that the western German cluster still had a uniform make-up, while admixture has occurred over time between the three other German populations, compared to previous results [15], with a human-mediated carry-over of individuals between regions [43]. The objectives of the present study were to determine: (i) if the mosquito specimens collected along the Belgian border were introduced from the nearby existing western German population, and (ii) if the population at Natoye resulted from a new introduction event and/or from a few undetected specimens that escaped elimination.
To answer these questions, population genetic variation at nad4 and seven microsatellite loci was surveyed in two ways: (i) a comparison of allelic frequencies and haplotypic diversities between populations from Eupen, Maasmechelen, and reference material from Germany, to assess if the Eupen and Maasmechelen populations are linked to those from the western part of Germany, and (ii) a comparison of the genetic composition and diversity of the population at Natoye between 2012-2013 and 2017-2019, to assess potential effects of the elimination campaign. Sampling In total, 292 Aedes j. japonicus specimens from Belgium and Germany were incorporated in the present study (Table 1). Of these, 224 specimens were collected in the framework of successive projects undertaken to monitor the introduction and establishment of exotic mosquito species in Belgium [17,18,45]. Among these 224 specimens, a subset (N = 52) collected in 2012 and during a survey from 2013 to 2016 at Natoye was incorporated in this study to investigate the temporal fine-scale genetic structure changes at that location. During the latest monitoring project (Monitoring of Exotic Mosquito Species in Belgium [MEMO], 2017-2019), Ae. japonicus eggs, larvae and adults were collected at Natoye (location: used tyre-trade company, coordinates: 50°20′20.2″N, 5°02′43.7″E), Eupen (allotment garden) and Maasmechelen (industrial area) (Table 1) [Deblauwe et al., unpublished report, 33, 34]. Eggs collected in 2012 (N = 9) were reared in the laboratory to adults for morphological identification, while eggs collected in 2017 (N = 6) were identified by mitochondrial cytochrome oxidase I gene (COI) DNA barcoding [46], following [47] (GenBank accession numbers: MT418505-MT418508, MT418510, MT418511; 100% Barcode of Life Data System [BOLD] similarity percentages). Before species identification, eggs and larvae were transferred to absolute ethanol and stored at room temperature, while adults were stored dry at −20 °C. Larvae and adults were morphologically identified following keys and species descriptions [48,49]. Further, Ae. j. japonicus reference specimens (N = 68) from well-identified German population clusters based on microsatellite data [43] were included. These specimens were collected by visiting cemeteries in 2016 and 2017 (Table 1). They comprised larvae that were reared to adults in the laboratory and subsequently morphologically identified using a standard key [50]. Specimens from the southeastern German population were not available for the present study. DNA extraction and PCR amplification DNA was extracted from legs, abdomens or eggs using either the NucleoSpin® Tissue DNA extraction kit (Macherey-Nagel, Düren, Germany) or the QIAamp DNA Micro kit (Qiagen, Hilden, Germany), following the manufacturers' protocols, except that the elution volume was set to 70 µl. A fragment of the nad4 locus was sequenced using published primers and PCR cycling conditions [52]. The PCR reaction was carried out in a final volume of 20 µl, with each reaction mixture containing 2 µl of DNA template, 2 µl of 10× buffer, 1.5 mM MgCl2, 0.2 mM dNTP, 0.4 µM of each primer and 0.03 U/µl of Platinum™ Taq DNA Polymerase (Invitrogen™ [Thermo Fisher Scientific], Carlsbad, CA, USA). PCR products and negative controls were run in a 1.5% agarose gel and visualised with MidoriGreen™ Direct (NIPPON Genetics Europe GmbH, Düren, Germany) under a UV transilluminator.
Positive PCR amplicons were subsequently purified using the ExoSAP-IT™ protocol, following the manufacturer's instructions, and sequenced in both directions on an ABI 3230xl capillary DNA sequencer using BigDye Terminator v3.1 chemistry (Thermo Fisher Scientific, Waltham, MA, USA). The quality of the sequencing output was checked with Geneious® R11 (Biomatters Ltd., Auckland, New Zealand), following which strands were trimmed, corrected, translated into amino acids and assembled using the same software. Consensus sequences were extracted and aligned using ClustalW in Geneious® R11 (https://www.geneious.com). Specimens were genotyped for seven microsatellite loci developed for Ae. japonicus [53], using the two multiplexes presented in [53], except for the OJ5F primer, which was redesigned according to [44,54]. The PCR reactions were carried out in a final volume of 10 μl, containing between 0.08 and 0.20 μl of each 10 μM diluted primer, 5 μl Multiplex Taq PCR Master Mix (Qiagen) and 2 μl of DNA. PCR conditions started with an initial activation step at 94 °C/15 min; followed by denaturation (94 °C/30 s), annealing (54 °C/30 s) and extension (72 °C/30 s) for 30 cycles; and a final extension step at 60 °C for 30 min. PCR products were sized on a 3130XL Genetic Analyzer (Applied Biosystems, Foster City, CA, USA) using 2 μl of PCR product, 12 μl of Hi-Di™ formamide (Applied Biosystems) and 0.3 μl of GeneScan™ 500 LIZ size standard (Applied Biosystems). Length variation visualisation and determination were performed using Geneious® R11. Nad4 data analysis Available nad4 sequences (N = 48) were downloaded from GenBank and then aligned with the nad4 consensus sequences generated in this study, as well as with one outgroup sequence of Aedes aegypti, using Geneious® R11. A rooted Neighbour-Joining (NJ) tree was constructed based on the HKY distance model implemented in Geneious® R11, with branch support assessed by 1000 bootstrap replicates. We performed a pairwise comparison of nucleotide frequencies between populations using Wright's F-statistics, as implemented in Arlequin v3.5 [55] (1000 random permutations for significance, with subsequent standard Bonferroni correction). The haplotype frequencies, the mean number of pairwise nucleotide differences (k) and average gene diversity over nucleotide positions (H) were calculated. A haplotype network was constructed using the minimum spanning network method (Minspnet in Arlequin v3.5), with default settings. Microsatellite data analysis A multilocus Bayesian cluster analysis was performed using Structure v2.3.4, without prior information on geographic origin [56,57]. A burn-in of 100,000 iterations and 1,000,000 Markov chain Monte Carlo (MCMC) iterations were applied. Each potential number of genotypic clusters (K; ranging from 1 to 10) was run ten times. Markov chain convergence was checked across the ten runs for each K. The results and visual output of the ten iterations for each K value were summarised using the web application CLUMPAK [58] (http://clumpak.tau.ac.il/index.html) and the software DISTRUCT v1.1 [59]. The optimal number of clusters was assessed following [60]. The presence of null alleles was tested with Micro-Checker v2.2.3 [61]. Heterozygosities (He, Ho) and the inbreeding coefficient (FIS) per population were estimated using Genetix v4.05 [62], with 1000 permutations to calculate P values.
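The optimal-K criterion cited above as [60] is commonly implemented as the ΔK statistic of Evanno and colleagues; assuming that is the criterion meant here, a minimal sketch of the computation is shown below. The per-run ln P(X|K) values are hypothetical placeholders, not values from this study, and the real analysis was performed through CLUMPAK.

```python
import numpy as np

def evanno_delta_k(ln_pk):
    """ΔK(K) = |L(K+1) - 2*L(K) + L(K-1)| / sd(L(K)), from per-K run means and SDs."""
    ks = sorted(ln_pk)
    mean = {k: float(np.mean(ln_pk[k])) for k in ks}
    sd = {k: float(np.std(ln_pk[k], ddof=1)) for k in ks}
    # ΔK is undefined at the smallest and largest K, and when sd is zero
    return {k: abs(mean[k + 1] - 2 * mean[k] + mean[k - 1]) / sd[k]
            for k in ks[1:-1] if sd[k] > 0}

# Hypothetical mean ln P(X|K) for ten replicate Structure runs per K (placeholders only):
rng = np.random.default_rng(0)
centres = {1: -9000, 2: -8500, 3: -8400, 4: -8370, 5: -8355, 6: -8350}
runs = {k: rng.normal(c, 5, 10).tolist() for k, c in centres.items()}
delta = evanno_delta_k(runs)
print("K maximising ΔK:", max(delta, key=delta.get))
```

With these placeholder likelihoods the largest second-order change falls at K = 2, mirroring how a dominant split (such as Natoye versus all other samples) shows up under this criterion.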
The number of alleles (N), mean number of alleles per locus (NA) and number of private alleles (PA) per population were estimated using GenAlEx v6.51b2 [63]. Allelic richness (AR), as a standardised measure of the number of alleles per locus independent of the sample size, was calculated using FSTAT v2.9.4 [64]. Pairwise FST values between populations across all loci were estimated in Arlequin v3.5 (1000 permutations for significance, and subsequent standard Bonferroni correction). To further investigate the putative origin of the specimens collected along the Belgian border, a principal coordinates analysis (PCoA) was performed with GenAlEx v6.51b2 [63], based on Nei's genetic distance and pairwise population FST values. Results The nad4 fragment was scored in 278 specimens (Table 1). The sequences were deposited in GenBank (accession numbers: MT462702-MT462979). The nad4 sequence alignment showed 15 transitions, all of which were silent. One new haplotype was discovered at Natoye (2019), and was named H47 (GenBank accession number: MT462840), in continuation of the numbering of nad4 haplotypes within the species [24,43]. Heteroplasmy was identified based on the observation of double peaks in the sequence chromatograms. Because of these double peaks at specific nucleotide locations, as observed in previous studies within Ae. japonicus [24,43], 103 individuals could not be assigned to single haplotypes (Table 2). The amplification of nuclear insertions of mitochondrial origin (NUMTs) is considered to be unlikely because the detected polymorphic sites are located in the third codon position and are synonymous. Contamination during laboratory procedures is also excluded since particular attention was given to avoid cross-contamination, with repeated DNA extractions and PCR reactions performed under appropriate laboratory conditions. However, since sequencing is not the best way to reveal heteroplasmy in the mitochondrial genome, further investigations would be required. Additionally, seven polymorphic microsatellite loci were scored in 292 specimens of Ae. japonicus (Table 1; 224 from Belgium and 68 from Germany). The number of alleles per locus and per population varied from 4 to 11, and from 15 to 37, respectively. The mean He ranged from 0.381 to 0.678, and the mean Ho from 0.384 to 0.609 (Table 3). Micro-Checker v2.2.3 did not detect null alleles. The microsatellite database is available from the Dryad Digital Repository (https://doi.org/10.5061/dryad.p5hqbzkmw). Geographic analysis: introduction source The NJ tree based on nad4 displayed an unresolved topology. Likewise, the minimum spanning network revealed no association between haplotypes and geography. The number of haplotypes per location varied from one (Eupen and Maasmechelen) to five (southwestern Germany). Haplotype H1 was encountered at almost all locations, and usually in higher frequencies (except at Natoye and in northern Germany), as elsewhere in the world [43,52]. Bayesian cluster analysis of the microsatellite data identified two (highest posterior probability for K = 2) and six (second highest posterior probability for K = 6) genotypic clusters (Fig. 1, Additional file 1: Fig. S1). At K = 2, the specimens from Natoye are separated from all others (Additional file 2: Fig. S2), and the pairwise significant nad4 and microsatellite FST values between Natoye and the other populations were 0.339 and 0.116 (P < 0.0005), respectively.
At K = 6, four genotype groups corresponded with geographical populations, with different degrees of admixture: (i) Maasmechelen; (ii) northern and southwestern Germany; (iii) western Germany and Eupen; and (iv) Natoye (Figs. 1, 2). While for nad4 the FST values between Eupen and western Germany were not significantly different from zero, those for the microsatellites were almost all significant (Table 4). Three nad4 haplotypes were found in Eupen (H1, H5, H6), which also occurred in the western and southwestern German populations (Table 2). Eupen did not show a heterozygote excess (P > 0.05) using Bottleneck, nor did the German populations. Based on the microsatellite loci, Maasmechelen displayed significant pairwise FST values with all other populations (FST of 0.237; Table 4), except with the population at Eupen in 2019. On the PCoA (Fig. 3), Maasmechelen stands apart from all other populations; it also had the lowest allelic richness (2.143; Table 3) and only one nad4 haplotype (H1; Table 2). Temporal analysis at Natoye Bayesian cluster analysis based on the microsatellite data identified two admixing genotypic clusters at Natoye (highest posterior probability for K = 2) (Fig. 4, Additional file 3: Fig. S3): the first one including the individuals collected in 2012-2013 and the second one including the individuals collected in 2017-2019. The first cluster has a predominant genotypic signal "red", whereas the second cluster has a predominant genotypic signal "green" (Fig. 4). The genetic differentiation between the two sampling periods was significant (Table 5). The Bottleneck results indicated that the population of Natoye showed a significant heterozygosity excess in 2012-2013, but not in 2017-2019; however, FIS estimates were significant in both cases (Table 3). In 2012-2013, the allelic richness was also lower, and there were fewer private alleles than in 2017-2019 (Table 3). Considering all nad4 data at Natoye, the most common haplotype was H9 over all years, detected 82 times, followed by H1 (N = 15). Three haplotypes were each only detected in 1 year at Natoye, namely H23 in 2012, H5 in 2017 and H47 in 2019 (Table 2). Discussion The present results indicate that the Natoye population is significantly differentiated from all other populations considered in this study, both for nad4 and for the microsatellite data, with a high prevalence of nad4 haplotype H9 (80.4%, excluding individuals displaying potential mtDNA heteroplasmy). This haplotype also occurs in the USA, Germany, Austria, the Netherlands and Slovenia [15,24,42,44], but has never been found in such high frequencies, except in the population of Pennsylvania [44]. [Table 2: Number of specimens assigned to each nad4 haplotype at the different collection locations in Belgium and Germany. (a) Unless specified otherwise, locations are in Belgium. (b) Naming of haplotypes according to [24,43].] However, as the nad4 data did not show any geographical relationships, the source area(s) of the original introduction at Natoye remain elusive. The lack of structure also observed in previous studies is likely linked to the randomness of international introduction events [42][43][44], with specimens possibly originating from diverse populations. The Natoye population also showed a clear difference between its genotypic microsatellite make-up in 2012-2013 and 2017-2019, i.e. before and after the elimination campaign which started in 2012 and ran till 2015 (no specimen was caught during routine surveillance in 2015-2016), as suggested by the significant FST values and the Bayesian clustering in Fig. 4.
This was, however, not accompanied by a difference in the nad4 data (Table 2). In 2017-2019, the population had an increased allelic richness and number of private alleles (Table 3). [Table 3: Descriptive statistics of the genetic diversity within each population, and between sampling periods at Natoye, Belgium.] Between 2017 and 2019, 59 specimens displayed one or more private alleles, while only three specimens were recorded with a private allele in the time period 2012-2013. These latter results would indicate that there may have been one or multiple additional new introduction(s) from external source(s) at Natoye, which occurred after the elimination campaign. Multiple introductions seem to be common to pests associated with human-mediated transport [44,67], which has an impact on the genetic composition of populations. While the present genetic study cannot provide further insights on the possible origin(s) of the new introduction event(s), the investigation of the trading history at the Natoye company indicates that tyres are regularly imported from an area in Germany colonised by Ae. japonicus only in 2017 (Elz, in the federal state of Hesse) ([39,51]; personal comment H.). The Natoye population present in 2012-2013 is, however, believed to have survived, since a shift in the genetic signature before and after the elimination campaign was identified based on microsatellite data, but without complete replacement (Fig. 4; shift in the frequency of individuals from predominantly red in 2012-2013 to predominantly green in 2017-2019). The forest next to the premises of the tyre-trading company, where Ae. japonicus was collected during different monitoring projects, might have acted as a refuge [30]. Indeed, in its natural distribution range in East Asia, the species is usually found in forested areas [68], with breeding sites mainly distributed in urban and suburban areas, while adults are more common in the forest [69]. Even if tree holes and other breeding sites had been neutralised during the elimination campaign (2012-2015) within the surrounding forests (by mechanical removal and larviciding of breeding sites, or filling tree holes with sand), a residual population could have survived at a low density, below the detection limit. The field monitoring results at Natoye also indicated a strong increase in species abundance in 2019 (N collected = 1725, whole season) compared to 2017 (N collected = 31, collected over half a season) and 2018 (N collected = 251, whole season), with evidence of a spread in the southwest direction in 2019 using the forest as a 'shrub-corridor' (Deblauwe et al., unpublished report). Several studies indicate that Ae. japonicus uses forest edges to spread [9,70,71]. This southwest spreading pattern was also observed in 2012 [17], but the current spread seems to be faster than in the past (Deblauwe et al., unpublished report). A new control campaign was started at Natoye in 2020 to keep the population density under control. A few individuals collected in Natoye in 2008 and 2010 (NTOT = 18) were previously analysed based on the same set of microsatellite loci in [42] and showed the lowest genetic diversity of all populations examined in the latter study, which included samples from Germany, Switzerland, Austria and Slovenia [42]. Although this low genetic diversity may be biased by the limited sampling, the genetic diversity estimates of Natoye in the present study covering the period 2012-2013 are in line with this previous finding when compared to expanding German Ae. japonicus populations (AR = 3.286; Table 3).
The individuals from Natoye (N 2012-2013 = 52) were collected over the whole activity season of the mosquito, and also from a 2-km-wide perimeter around the tyre company site, which minimises the risk of biases due to relatedness. Considering that both the sample sizes and the number of DNA markers used to investigate the genetic diversity of the Belgian populations along the border between Belgium and Germany were limited, the results should be interpreted cautiously and should consider information collected in the field during the monitoring campaign. Additionally, the possible relatedness of the observed specimens cannot be dismissed without some reflection. For example, despite intensive monitoring efforts during the whole season, adult Ae. japonicus were only trapped twice at Maasmechelen, i.e. on 19 June and 3 July 2018, using a Frommer updraft gravid trap (John W. Hock Co., Gainesville, FL, USA) (Deblauwe et al., unpublished report). Since these two trapping dates are close to each other, it is possible that the specimens derived from the same single introduction and eventually reproduced on site. The observed population genetic structure might therefore result from strong genetic drift (Fig. 1). This assumption is supported by the presence of only one nad4 haplotype at Maasmechelen (H1). It is therefore not possible to make any further inferences about the potential origin of these specimens. Despite the extensive sampling efforts in the allotment garden at Eupen (Deblauwe et al., unpublished report), only a few Ae. japonicus specimens were collected: once in 2017 (September), seven times in 2018 (June, July, August and September) and three times again in 2019 (May, June and July). The larvae collected in 2017 were most likely siblings as they were collected on the same date and at the same spot. In 2018, all life stages were collected in and around the allotment garden, while only larvae were found in 2019. Considering the monitoring efforts, these results indicate summer reproduction, but the species is not believed to have established and overwintered yet, which rather points to multiple introductions at Eupen. Population clustering results based on microsatellite data at K = 6 and the PCoA suggest a relation between Eupen and the population of western Germany (Figs. 1-3), which is in agreement with the prediction that the species might cross the border with Belgium [39]. Whether this occurs via passive human-mediated ground transport or, alternatively, by natural spread cannot be determined as yet from the current dataset. To further investigate the population genetic relationships, gain insight into the introduction pathways and investigate changes in the allelic frequencies over time in the frame of surveillance and elimination programmes, thorough sampling of all Ae. japonicus populations, including representatives of its native and invasive ranges, in addition to the use of genome-wide genetic data, would be required. Conclusion Considering the international movement of goods and people, the colonising behaviour of Ae. japonicus in Germany, its recent establishment in Luxembourg, the increasing population densities in Germany and Belgium [13,14] and the relatedness of the population in Eupen with the one across the border in the western part of Germany, it is to be expected that further introductions will occur into Belgium. The eastern border of Belgium is at the front of the invasion of Ae.
japonicus toward the west, while the present results also show that the elimination campaign undertaken over 4 years at Natoye was not completely successful, which underlines the complexity of controlling invasive species. Sensitisation along the German and Luxembourg borders and control through larviciding and mechanical removal of breeding sites at the tyre-trading company in Natoye could help keep densities and spread as low as possible.
2021-03-27T06:16:34.885Z
2021-03-25T00:00:00.000
{ "year": 2021, "sha1": "fb3b0fcf84f92812c6a45d21f0f0d4ed3dd567ca", "oa_license": "CCBY", "oa_url": "https://parasitesandvectors.biomedcentral.com/track/pdf/10.1186/s13071-021-04676-8", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3f32b9d53c84909e2a4f7d6bb3c88d3519aef786", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
248100164
pes2o/s2orc
v3-fos-license
Evaluation of Therapeutic Vancomycin Monitoring in Taiwan ABSTRACT This study aimed to evaluate whether trough level-guided monitoring can be replaced by area under the concentration–time curve (AUC) and MIC ratio-guided monitoring (AUC/MIC ratio = 400) in patients infected with methicillin-resistant Staphylococcus aureus (MRSA) with a vancomycin MIC = 1 mg/L in Taiwan. In this retrospective study, patients treated with vancomycin for MRSA infection were recruited from a teaching hospital in Taiwan from January 2016 to December 2017. Average trough concentrations were adjusted based on the average daily vancomycin dose, and the AUC/MIC ratio was calculated using the AUC/MIC conversion formula to analyze the correlation between trough or AUC/MIC ratio, nephrotoxicity, and clinical efficacy. As the primary outcome, the overall mean adjusted vancomycin AUC/MIC ratio was 526.87 for a total of 102 patients. A total of 76% and 67% of the patients attained an AUC/MIC of ≥400 when the adjusted vancomycin trough concentrations were 10 to 15 mg/L and 15 to 20 mg/L, respectively. Additionally, 81.37% of the total study population had MRSA isolates with a vancomycin MIC of ≤1 mg/L. Moreover, in the subgroup, 92% of the patients attained an AUC/MIC of ≥400 on receiving vancomycin in the 10 to 15 mg/L trough range. An AUC/MIC of ≥400 was attained in patients infected with MRSA strains who were treated by maintaining the vancomycin trough concentrations at 10 to 15 mg/L. Moreover, these patients demonstrated a lower incidence of nephrotoxicity. These findings support the use of the AUC/MIC ratio as a useful marker for the therapeutic monitoring of vancomycin owing to the clinical efficacy and safety of vancomycin in Taiwan. IMPORTANCE In 2020, the Infectious Diseases Society of America (IDSA) updated its vancomycin guidelines, and therapeutic drug monitoring of vancomycin in the United States moved from trough-based targets to the AUC/MIC ratio. However, the acceptance rate among infectious disease physicians in Taiwan was low, which is why this evaluation was carried out in Taiwan. infections (5). Although the clinical efficacy and safety of vancomycin with an AUC/MIC of ≥400 are yet to be proved, some studies proposed that maintaining a high AUC/MIC ratio can significantly lower the incidence of death and treatment failure (6). In addition, in 2014, Neely suggested that for patients infected with bacterial strains having a vancomycin MIC = 1 mg/L, approximately 50% attained an AUC/MIC of ≥400 without the need for the vancomycin trough concentration to reach 15 to 20 mg/L (7). It has been acknowledged that if the vancomycin MIC is ≥1 mg/L, higher trough concentrations of vancomycin (>15 mg/L) are needed to attain an AUC/MIC of 400, which may increase the risk of acute kidney injury (AKI) (4,5). Other risk factors include the concomitant use of other antibiotics or nephrotoxic medications (e.g., nonsteroidal anti-inflammatory drugs), prolonged hospitalization duration, higher costs of medical care, and increased mortality rate (4,8). According to IDSA's latest guidelines released in 2020, the United States has already adopted the AUC/MIC ratio as an indicator for the therapeutic monitoring of vancomycin; however, studies on the relevant pharmacodynamics and AUC/MIC conversion are yet to be published in Taiwan. In this retrospective study, we aimed to evaluate whether vancomycin trough concentrations of 10 to 15 mg/L were associated with increased attainment of an AUC/MIC of ≥400 in patients with suspected MRSA infection in Taiwan.
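For orientation before the results, the arithmetic behind the AUC/MIC target can be sketched with the standard first-order steady-state approximation AUC24 ≈ daily dose / clearance. This is a generic illustration with hypothetical numbers; it is not the conversion formula actually used in this study, which is given in its Fig. 5.

```python
def auc24_over_mic(daily_dose_mg: float, clearance_l_per_h: float, mic_mg_per_l: float) -> float:
    """First-order steady-state approximation: AUC24 (mg*h/L) = daily dose / vancomycin clearance."""
    auc24 = daily_dose_mg / clearance_l_per_h
    return auc24 / mic_mg_per_l

# Hypothetical patient: 1,000 mg every 12 h (2,000 mg/day), clearance 4 L/h, MIC 1 mg/L
ratio = auc24_over_mic(2000, 4.0, 1.0)
print(f"AUC24/MIC = {ratio:.0f}")  # 500, above the >=400 target
```

The example also shows why, at an MIC of 1 mg/L, the AUC/MIC ratio numerically equals the AUC24 itself, which is what makes trough-to-AUC conversion tractable in this MIC = 1 mg/L subgroup.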
RESULTS Study population and vancomycin dosage characteristics. Between January 2016 and December 2017, a total of 329 patients receiving vancomycin were identified. Of these, 227 patients were excluded due to one of the following reasons: unknown MIC, unstable renal function, unknown height, no MRSA infection, inappropriate time of blood sample collection, and/or undergoing hemodialysis. A total of 102 patients were included in the study (Fig. 1). Patient features, ward type, and analyzed vancomycin dosing characteristics, such as the drug dose, MIC value, and derived AUC/MIC results, are provided in Table 1. The included patients had an average age of 66.5 years, with 25.49% admitted to the intensive care unit. A total of 81.37% of the patients were infected with MRSA strains with a vancomycin MIC of ≤1 mg/L. The top four sources of MRSA isolates were blood samples in 50% of the patients, followed by sputum (13.73%), wound (12.75%), and pus (12.75%) (Table 2). An AUC/MIC of 1,015 mg/L was considered as the reference value in the control group for comparison as well as the comparative endpoint of this study. Primary outcome. The overall mean AUC/MIC was 530.72, with an AUC/MIC ratio of 526.87 adjusted using the average daily dose of vancomycin. A total of 67 of 102 patients (65.69%) attained an AUC/MIC of ≥400, as shown in Table 1. After the trough concentrations were classified into four trough ranges based on the adjusted vancomycin troughs, the study found that the percentages of patients in the 10 to 15 mg/L and 15 to 20 mg/L trough ranges that attained an AUC/MIC of ≥400 were 76% and 67%, respectively. As represented in Figure 2a, a trough of <10 mg/L was significantly associated with a decreased likelihood of attaining an AUC/MIC of ≥400, whereas this likelihood was much higher at trough concentrations of 10 to 15 mg/L and above (Kruskal-Wallis test). In the study population of 102 patients, 76% of patients in the 10 to 15 mg/L trough range attained an AUC/MIC of ≥400, which was 9% higher than the percentage of those in the 15 to 20 mg/L trough range. The difference between the percentage of patients in the 10 to 15 mg/L trough range attaining an AUC/MIC of ≥400 and that in the <10 mg/L trough range was statistically significant (Mann-Whitney U test [two-tailed] for nonparametric data, P = 0.002) (Fig. 2b). Secondary analysis based on subgroups. Of the 102 patients, the MRSA strains isolated from 82 patients had a vancomycin MIC = 1 mg/L. The demographic and vancomycin dosage characteristics of these patients, such as the dosage, MIC value, and derived AUC/MIC, were analyzed and are presented in Table 3. As represented in Figure 3a, a trough concentration of <10 mg/L was associated with a decreased likelihood of attaining an AUC/MIC of ≥400. Of the 82 patients (80.39% of the total population), 64 attained an AUC/MIC of ≥400, and the adjusted vancomycin average AUC/MIC was found to be 576.89. Approximately 92% of patients attained an AUC/MIC of ≥400 in the 10 to 15 mg/L trough range, which was 22% higher than the percentage of those in the 15 to 20 mg/L trough range that attained the same AUC/MIC. Moreover, as shown in Figure 3b, the difference between the 10 to 15 mg/L and <10 mg/L trough groups was statistically significant (P < 0.05). Secondary outcome. While analyzing safety and clinical efficacy, no statistically significant difference (Tables 4 and 5) was observed.
Patients with an AUC/MIC of ≥400 were more prone to AKI than patients with an AUC/MIC of <400, and the incidence was higher by 11% to 12% (11.94% vs. 5.71% and 12.5% vs. 0%; Tables 4 and 5). Based on analyses in the different trough ranges, the incidence of AKI in the >20 mg/L vancomycin trough range was the highest (Fig. 4). DISCUSSION The relationship between the trough concentration and AUC/MIC, as well as the correlation of trough concentration or AUC/MIC with renal toxicity and clinical efficacy, were analyzed in this retrospective study. Our study results indicated that the percentage of patients in the 15 to 20 mg/L vancomycin trough range attaining an AUC/MIC of ≥400 was not higher than that of patients in the 10 to 15 mg/L trough range. In terms of side effects, a recent retrospective comparison of trough-guided versus AUC-guided vancomycin dosing in approximately 1,300 adults across four hospitals within the Detroit Medical Center was conducted. The study found that vancomycin therapy using AUCs can both preserve efficacy and reduce nephrotoxicity (11). In our study, the incidence of AKI was higher when the vancomycin trough concentration was higher than 15 mg/L, especially when the trough was >20 mg/L with concomitant use of nephrotoxic medications. In terms of efficacy, there are insufficient data for a positive correlation with bacterial eradication. In the clinical use of vancomycin therapy, routine cultures to confirm bacterial eradication may not be performed, except in cases of bacteremia or infective endocarditis (12). Our study did not find any significant difference in the time required for bacterial eradication, and we speculate that differences in time for bacterial eradication cannot be precisely analyzed because there is no fixed time for specimen collection. Moreover, specimens are not collected in some clinical settings. Therefore, further studies in this field are warranted. We provided key insights to clinicians that an AUC/MIC of ≥400 was attained when the vancomycin trough concentration was maintained at 10 to 15 mg/L, which significantly lowered the incidence of vancomycin-associated renal toxicity. In the future, guidelines on AUC/MIC-guided treatment with vancomycin can be revised for patients infected with MRSA strains with a vancomycin MIC = 1 mg/L so that better efficacy and safety evaluations can be performed in clinical practice in Taiwan. This study has certain limitations. For example, the sample size obtained during 2 years is small, and the interval between blood collections for trough measurement and the interval between specimen collections were inconsistent. Moreover, the currently used AUC formula may have underestimated the real AUC value, and the concomitant use of nephrotoxic medications was not discussed in the analysis of AKI. We hope that relevant future studies can overcome these limitations and offer optimized analyses. MATERIALS AND METHODS Study design and population. In this pharmacokinetic study, data were collected by retrospectively reviewing the medical records of patients admitted to the Taipei Medical University Hospital (TMUH) who were treated with vancomycin for MRSA infection between January 2016 and December 2017. Data collection. Tables represent the relevant collected patient data, including basic patient data, ward type, serum creatinine levels, site of bacterial culture, bacterial strains, initial and adjusted doses of vancomycin, number of days for which vancomycin therapy was administered, trough concentrations, MIC value, and the concomitant use of nephrotoxic medications. Calculations and definitions.
The Cockcroft-Gault equation was adopted to evaluate renal function: ClCr = ((140 − age) × weight)/(72 × Scr) (× 0.85 if female) (Scr = 0.8 if age >65) (14). AKI is defined as a variation of over 0.5 mg/dL between consecutive serum creatinine measurements or an increase of 50% in the serum creatinine level compared with the pre-dose level. If the actual body weight (ABW) is less than the ideal body weight (IBW), then the ABW measurement is inserted into the equation; if ABW is greater than IBW but ABW is <130% of IBW, then IBW is entered into the equation; if ABW is >130% of IBW, then the adjusted body weight is entered into the equation. Considering additional fluctuations in drug concentrations in the human body, if the concentration measurements at each instance of blood collection are considered for calculations of individual AUCs in the same patient, the actual drug concentration in the same patient might not be accurately represented. Hence, the average vancomycin daily dose (Fig. 5a) was used as the adjusted vancomycin daily dose to calculate the AUC. Further, the adjusted (average) vancomycin trough was calculated (Fig. 5b). The calibration equation is shown in Fig. 5c. Outcome analysis. The primary outcome was to determine whether there is an association between attaining a vancomycin trough within a specified range and reaching a calculated vancomycin AUC/MIC of ≥400. Patients were classified based on their troughs in the following ranges: <10 mg/L, 10 to 14.9 mg/L, 15 to 20 mg/L, and >20 mg/L. Trough ranges were based on the adjusted vancomycin doses and trough concentrations. Attainment of the calculated target AUC/MIC of ≥400 was compared among the groups to analyze the percentage of patients attaining an AUC/MIC of ≥400 in each trough range. The secondary outcome was to determine the corrected average vancomycin trough associated with the development of AKI, compare and correlate the mean predicted AUC/MIC and trough concentrations between patients who attained different trough concentrations in each trough range, and assess the time required to attain the first negative culture after treatment. Statistical analysis. Descriptive statistics were used to address the primary objective, and the two-way Kruskal-Wallis test, followed by multiple comparison tests, was performed for AUC comparisons among the different trough concentration groups. Only the statistically significant values are presented in the figures. Apart from the Kruskal-Wallis test, the Mann-Whitney U test was performed to compare the recommended 10 to 15 mg/L group with the <10 mg/L group. Significant differences between the groups, indicated by P < 0.005, were determined. The analyses were performed with GraphPad Prism (version 9.3.1 for Windows; GraphPad Software, San Diego, CA, USA). ACKNOWLEDGMENT This study was supported by the Department of Pharmacy, Taipei Medical University and the Department of Pharmacy, Taipei Medical University Hospital, Taiwan. The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication. There are no conflicting interests relevant to the research presented in this study.
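As a worked sketch of the renal-function rules given under Calculations and definitions above: the body-weight branches follow the text, but two details are assumptions rather than statements from the study — the 0.4 correction factor in the adjusted-body-weight branch (a common convention, not specified here) and the reading of "Scr = 0.8 if age >65" as a floor on low creatinine values.

```python
def dosing_weight(abw_kg: float, ibw_kg: float) -> float:
    """Weight entered into the Cockcroft-Gault equation, per the rules in the text."""
    if abw_kg < ibw_kg:
        return abw_kg                      # ABW below IBW: use ABW
    if abw_kg < 1.3 * ibw_kg:
        return ibw_kg                      # ABW between IBW and 130% IBW: use IBW
    return ibw_kg + 0.4 * (abw_kg - ibw_kg)  # >130% IBW: adjusted BW; 0.4 factor assumed

def cockcroft_gault(age: int, weight_kg: float, scr_mg_dl: float, female: bool) -> float:
    """ClCr = ((140 - age) * weight) / (72 * Scr), multiplied by 0.85 if female."""
    if age > 65:
        scr_mg_dl = max(scr_mg_dl, 0.8)    # assumption: "Scr = 0.8 if age > 65" read as a floor
    clcr = ((140 - age) * weight_kg) / (72 * scr_mg_dl)
    return clcr * 0.85 if female else clcr

def is_aki(scr_pre_dose: float, scr_now: float) -> bool:
    """AKI per the text: rise > 0.5 mg/dL, or >= 50% increase over the pre-dose level."""
    return (scr_now - scr_pre_dose) > 0.5 or scr_now >= 1.5 * scr_pre_dose

# Hypothetical patient: 70-year-old woman, ABW 80 kg, IBW 65 kg, Scr 1.0 mg/dL
print(round(cockcroft_gault(70, dosing_weight(80, 65), 1.0, female=True), 1))  # ~53.7 mL/min
```

The same helper structure also makes the AKI flag reusable against each follow-up creatinine during therapy, which mirrors how the trough-range analyses above relate AKI incidence to exposure.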
2022-04-13T06:24:02.709Z
2022-04-12T00:00:00.000
{ "year": 2022, "sha1": "18f0300a6fc7205a8f67fc6d07fa460bc908f857", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1128/spectrum.01562-21", "oa_status": "GOLD", "pdf_src": "ASMUSA", "pdf_hash": "e8e316a18883a8ec02854837e7cda8b0271ebe0b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
3784273
pes2o/s2orc
v3-fos-license
A different immunologic profile characterizes patients with HER-2-overexpressing and HER-2-negative locally advanced breast cancer: implications for immune-based therapies Introduction The clinical efficacy of trastuzumab and taxanes is at least partly related to their ability to mediate or promote antitumor immune responses. On these grounds, a careful analysis of the basal immune profile may be crucial to dissect the heterogeneity of clinical responses to these drugs in patients with locally advanced breast cancer undergoing neoadjuvant chemotherapy. Methods Blood samples were collected from 61 locally advanced breast cancers (36 HER2- and 25 HER2+) at diagnosis and from 23 healthy women. Immunophenotypic profiling of circulating and intratumor immune cells, including regulatory T (Treg) cells, was assessed by flow cytometry and immunohistochemistry, respectively. Serum levels of 10 different cytokines were assessed by multiplex immunoassays. CD8+ T cell responses to multiple tumor-associated antigens (TAA) were evaluated by IFN-γ-enzyme-linked immunosorbent spot (ELISPOT). The Student's t test for two-tailed distributions and the Wilcoxon two-sample test were used for the statistical analysis of the data. Results The proportion of circulating immune effectors was similar in HER2+ patients and healthy donors, whereas higher percentages of natural killer and Treg cells and a lower CD4+/CD8+ T cell ratio (with a prevalence of naïve and central memory CD8+ T cells) were observed in HER2- cases. Higher numbers of circulating CD8+ T cells specific for several HLA-A*0201-restricted TAA-derived peptides were observed in HER2+ cases, together with a higher prevalence of intratumor CD8+ T cells. The serum cytokine profile of HER2+ patients was similar to that of controls, whereas HER2- cases showed significantly lower cytokine amounts compared to healthy women (IL-2, IL-8, IL-6) and HER2+ cases (IL-2, IL-1β, IL-8, IL-6, IL-10). Conclusions Compared to HER2- cases, patients with HER2-overexpressing locally advanced breast cancer show a more limited tumor-related immune suppression. This may account for the clinical benefit achieved in this subset of patients with the use of drugs acting through, but also promoting, immune-mediated effects. Introduction Preoperative or neoadjuvant chemotherapy (NC) is currently considered to be the standard of care for locally advanced and inoperable breast cancer. One of the main advantages of this approach is the reduction of tumor size, which increases the possibility of performing smaller resections of operable tumors with better cosmetic outcomes [1,2]. Other potential benefits of NC include an early assessment of response to chemotherapy and the possibility of obtaining prognostic/predictive information, based on the pathologic response to therapy [3]. Although breast cancers that overexpress human epidermal growth factor receptor-2 (HER2) are characterized by a poor prognosis [4,5], higher rates of complete responses are currently achieved in HER2+ patients by standard chemotherapy, mainly in association with trastuzumab [6,7], in comparison with HER2- patients. Like other monoclonal antibodies used in anticancer therapy, the activity of trastuzumab is largely dependent on immuno-mediated mechanisms.
In fact, besides triggering antibody-dependent cytotoxicity (ADCC), trastuzumab also enhances HLA class I-restricted presentation of endogenous HER2 antigen via the proteasome pathway, and sensitizes HER2-overexpressing tumors to killing by MHC class I-restricted HER2-specific cytotoxic T lymphocytes (CTLs) [7]. Intriguingly, other drugs used in NC regimens have also been shown to enhance antigen-specific immune responses in both in vitro and animal models. In particular, taxanes have immunostimulatory effects against tumor cells and suppress cancer not only through inhibition of cell division [8,9]. Indeed, host immune functions are highly enhanced after docetaxel treatment [10], and paclitaxel plays a positive role in controlling tumor growth, probably through the induction of IL-8 [8]. Furthermore, taxanes induce macrophage-mediated tumor death, stimulate the production of pro-inflammatory cytokines (TNF-α, IL-12, and IL-1), and increase lymphokine-activated killer (LAK) cell and natural killer (NK) cell antitumor activity [10,11]. Given the evidence that tumor cells may be immunogenic, more than 60 TAAs have been identified and, as observed for other tumors, breast cancer cells were also shown to express TAAs [12,13]. Moreover, convincing data demonstrate that spontaneous antitumor responses to TAAs may harness the host's immune system to fight against cancer, underscoring the need for a retained or only minimally compromised immunological proficiency, particularly in patients treated with chemotherapeutic regimens including immunomodulating drugs. Nevertheless, only limited information is available on the extent of spontaneous T cell responses to breast cancer-associated antigens in patients with locally advanced tumors. Considering that breast cancer patients may show different types and extents of tumor-related immune dysfunction [11,14,15], we reasoned that the efficiency of the host immune system could influence the responses to current NC regimens. Therefore, in the present study, we have carried out an extensive immunologic profiling of patients with locally advanced breast cancer at the time of diagnosis, as a first step towards a better understanding of the possible role of antitumor immune responses in mediating the clinical outcome of NC. The results presented herein demonstrate that patients with HER2+ and HER2- breast cancer have a different basal immunologic profile. In particular, our data are consistent with a more limited tumor-related immune suppression in patients with HER2-overexpressing tumors, an observation that may at least in part account for the clinical benefit achieved in this subset of patients by drugs acting through immune-mediated effects. Patients and healthy donors Our analysis included 61 patients with histologically confirmed locally advanced breast carcinoma (defined as not amenable to conservative surgery at diagnosis; UICC, International Union Against Cancer, stage II to III; Table 1). HER2 status was assessed by immunohistochemistry and chromogenic in situ hybridization (CISH) or fluorescence in situ hybridization (FISH) in the case of IHC 2+.
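The HER2 assessment workflow just described (IHC scoring, with reflex in situ hybridization for equivocal 2+ cases) can be sketched as a small decision helper. This is a minimal illustration of the triage logic only; the actual scoring thresholds and assay interpretation follow the pathology guidelines, which are not encoded here.

```python
from typing import Optional

def her2_status(ihc_score: int, ish_amplified: Optional[bool] = None) -> str:
    """Triage HER2 status: IHC 0/1+ -> negative, 3+ -> positive, 2+ -> reflex CISH/FISH."""
    if ihc_score in (0, 1):
        return "HER2-negative"
    if ihc_score == 3:
        return "HER2-positive"
    if ihc_score == 2:
        if ish_amplified is None:
            return "equivocal: reflex CISH/FISH required"
        return "HER2-positive" if ish_amplified else "HER2-negative"
    raise ValueError("IHC score must be 0-3")

print(her2_status(2, ish_amplified=True))  # HER2-positive after ISH confirmation
```

A helper like this makes explicit that only the equivocal IHC 2+ branch depends on the CISH/FISH result, which is exactly how the two patient cohorts of this study were defined.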
All patients had the following clinical features: Eastern Cooperative Oncology Group performance status of 0 or 1; baseline left ventricular ejection fraction measured by ultrasonography greater than 50%; adequate organ function (bone marrow function: neutrophils ≥2.0 × 10^9/L, platelets ≥120 × 10^9/L; liver function: serum bilirubin <1.5 times the upper limit of normal (ULN), transaminases <2.5 times ULN, alkaline phosphatase ≤2.5 times ULN, serum creatinine <1.5 times ULN). This study was carried out according to the ethical principles of the Declaration of Helsinki and approved by the local ethics committee. All patients gave written informed consent. Heparinised blood and sera were also collected from 23 age-matched healthy women as controls. Patients' and donors' HLA genotyping was performed by PCR sequencing-based typing with primers specific for both locus A and B [16]. Peripheral blood mononuclear cells (PBMCs) were freshly isolated from heparinised blood of patients or healthy donors by Ficoll-Hypaque gradient (Lymphoprep, Fresenius Kabi Norge, Halden, Norway) using standard procedures and viably frozen at -180°C until use. Serum samples were obtained with blood centrifugation at 2,100 rpm and maintained at -80°C. Peptide selection and synthesis A total of 13 immunogenic HLA-A*0201 nonamer (9-mer) peptides, derived from different breast cancer-associated antigens (survivin, mammaglobin-A, HER2, mucin-1, taxol-resistance associated gene 3, and bcl-XL) were selected for the study. The HLA-A*0201-restricted Flu matrix 1 (M158-66) peptide (GILGFVFTL) was used as the positive control. All peptides were produced by fluorenylmethoxycarbonil synthesis (Primm, Milan, Italy) and purity (>95%) was determined by reverse-phase high-performance liquid chromatography and verified by mass spectral MALDI-TOF analysis. Peptides were dissolved in DMSO at a concentration of 2.5 mg/ml and stored at -70°C until use. Work stocks for each peptide were prepared in PBS at a final concentration of 500 μg/ml and stored frozen. IFN-γ ELISPOT assay The interferon (IFN)-γ release enzyme-linked immunosorbent spot (ELISPOT) assay was performed using a commercial kit (Human IFN gamma ELISPOT; Thermo Scientific, Rockford, IL, USA), according to the manufacturer's instructions. The assay was carried out using autologous peptide-pulsed monocytes as antigen presenting cells (APCs) and isolated CD8+ T lymphocytes as responders. Monocytes, isolated by a two-hour plastic adherence step from patients' PBMCs, were loaded with 10 μg/ml of each 9-mer peptide in complete medium, supplemented with 5 μg/ml of human β2-microglobulin, and incubated for two hours at 37°C with 5% CO2. Purified effectors were obtained by immunomagnetic enrichment protocols using the human CD8+ T cell isolation kit II (Miltenyi Biotec, Bergisch Gladbach, Germany), and then cultured with peptide-loaded monocytes (50,000 cells/well) at a 1:1 effector:target ratio. FLU M158-66-loaded and unstimulated monocytes were used as positive and negative controls, respectively. [Table 1 footnotes: (1) Hormone receptor status is significantly different between HER2+ and HER2- patients, with HER2-overexpressing cancers being mostly ER- and PgR- (P = 0.01). (2) Grading is significantly different between the two cohorts of patients: HER2+ patients are mainly G3, while HER2- patients are mostly G2 (P = 0.01). Abbreviations: CISH, chromogenic in situ hybridization; ER, estrogen receptor; FISH, fluorescence in situ hybridization; HER-2, human epidermal growth factor receptor-2; PgR, progesterone receptor.]
Cells were seeded onto ELISPOT capture plates in triplicate and incubated for 48 hours at 37°C with 5% CO2. All plates were evaluated by a computer-assisted ELISPOT reader (Eli.Expert, A.EL.VIS GmbH, Hannover, Germany). The number of spots in negative control wells (range of 0 to 5 spots) was subtracted from the number of spots in stimulated wells. Responses were considered significant if a minimum of five IFN-γ-producing cells were detected in the wells. Cytokine detection Levels of IL-1α, IL-1β, IL-2, IL-6, IL-8, IL-10, IL-12p70, TNF-α, and granulocyte macrophage colony-stimulating factor (GM-CSF) were evaluated using the SearchLight® multiplex arrays (Food and Drug Administration approved, Aushon Biosystems, TEMA Ricerca, Bologna, Italy) according to the manufacturer's instructions. Briefly, a custom human 8-plex array and a human 1-plex array (for GM-CSF detection) with pre-spotted cytokine-specific antibodies were used. Standards or pre-diluted samples were added in duplicate and, after one hour of incubation at room temperature and three washes, biotinylated antibody reagent was added to each well. After 30 minutes of incubation at room temperature and three washes, block solution was added to stabilize the signal. The addition of Streptavidin-HRP Reagent and SuperSignal® Substrate, and the acquisition of the luminescent signal with a cooled CCD (charge-coupled device) camera, together with data analysis and processing, were performed by the TEMA Ricerca laboratories' customer service (Bologna, Italy). Transforming growth factor (TGF)-β1 serum levels were assessed with the DRG TGF-β1 ELISA (DRG Instruments GmbH, Marburg, Germany) according to the manufacturer's instructions. Prediluted samples and standards underwent appropriate acidification and neutralization before testing. Briefly, pretreated standards, controls and samples were dispensed into wells in duplicate and plates were incubated overnight at 4°C. After three washes, antiserum was added to the wells and incubated for 120 minutes at room temperature; the plate was rinsed three times and anti-mouse biotin (enzyme conjugate) was dispensed and incubated for 45 minutes. After three washes, enzyme complex was added to the wells, then plates were incubated for 45 minutes and washed three times. After the addition of substrate solution for 15 minutes, the reaction was stopped and the absorbance at 450 ± 10 nm was determined with a microtiter plate reader (Bio-Tek Instruments, Winooski, VT, USA). As there is an extremely variable range of normal values reported in the literature, serum levels of healthy women were taken as reference. Immunohistochemistry Considering that the diagnostic biopsy is not fully representative of the whole tumor mass, the lymphoid infiltrate was investigated in 40 primary advanced breast carcinomas from a separate series of patients who underwent surgical resection during the past decade at our institution. Twenty cases were HER-2+ (3+) and 20 HER-2- (0 or 1+). All specimens were routinely fixed in 10% buffered formalin, embedded in paraffin and then stained with H&E for histological examination. For immunohistochemical analyses, 2 to 3 µm serial sections of primary tumors were processed with the automated immunostainer Benchmark XT (Ventana, Tucson, AZ, USA), and staining was carried out with the following antibodies: CD8 (clone SP57, Ventana Medical System, Tucson, AZ, USA); FoxP3 (clone 259D/C7, BD Pharmingen, Franklin Lakes, NJ, USA) diluted 1:100; and TiA-1 (clone TiA-1, Bioreagents, Golden, CO, USA) diluted 1:100.
Nuclear counterstaining was accomplished with Harris' hematoxylin. Omission of the primary antibody was used as a negative control. The results of staining were evaluated with reference to the number of unequivocally stained lymphoid cells. Ten randomly chosen representative microscopic fields were counted at 40x original magnification. Statistical analysis The Chi-square test was used to compare hormone receptor (HR) expression and grading within the HER2- and HER2+ populations. Data obtained from multiple independent experiments were expressed as mean and standard deviation for immunophenotypic analysis and ELISPOT assays; cytokine box plots were obtained with SigmaPlot. The Student's t test for two-tailed distributions and the Wilcoxon two-sample test were used for the statistical analysis of the data. Odds ratios and 95% confidence intervals were computed in multivariate analysis to assess the possible influence of clinico-pathological variables on the immunological correlations observed: immunological variables (cytokine levels) were divided into quartiles according to their concentrations and then stratified for HR expression (estrogen receptor (ER)-/progesterone receptor (PgR)- and ER+ and/or PgR+) and tumor grading (G2 or G3). The Wilcoxon rank test was used to compare the distribution of intratumor CD8+ and FoxP3+ cells between HER2+ and HER2- cases. Results were considered to be statistically significant when P ≤ 0.05 (two-sided). HER2+ and HER2- patients exhibit a different distribution of circulating and intratumor immune cells The distribution of different circulating immune populations was investigated at diagnosis by multiparametric flow cytometry comparing 17 women with HER2-overexpressing cancers, 20 women with HER2- tumors, and 17 healthy women, who were considered as controls. A significantly lower percentage of CD3+ T cells (Figure 1c) was observed in HER2- patients with respect to both HER2+ cases (P = 0.028) and controls (P = 0.0003). In parallel, among the CD3- cell populations, higher numbers of CD16+CD56+ NK cells were detected in HER2- cases compared with HER2+ patients (P = 0.049) and healthy donors (P = 0.025; Figure 1a). Interestingly, no major difference in the distribution of circulating CD3- cells was observed between HER2-overexpressing patients and controls. The percentage of B cells (Figure 1b) was not significantly different among the three groups investigated, even if HER2- patients (n = 14) showed slightly higher numbers of CD3-CD19+ cells than HER2+ patients (n = 15) and donors (n = 13; P = 0.07). When the CD3+ population was considered separately, HER2- patients showed significantly higher percentages of CD8+ T cells (P = 0.028; data not shown) and a lower CD4+/CD8+ ratio (Figure 1d; P = 0.046) when compared with HER2+ cases. Because of the different contribution of memory subsets in mediating antitumor immune responses [17][18][19], the differentiation state of T cells was investigated through the combined analysis of the chemokine receptor CCR7 and the CD45RA isoform, to distinguish CCR7+CD45RA+ naïve, CCR7+CD45RA- central memory (CM), CCR7-CD45RA- effector memory (EM), and CCR7-CD45RA+ terminally differentiated (Temra) cells [20]. Although the three studied groups showed a similar distribution of memory CD3+ T cell subsets (not shown), separate analysis of the CD4+ and CD8+ compartments disclosed remarkable differences.
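The CCR7/CD45RA scheme just described maps directly onto a two-marker lookup; a minimal sketch follows, assuming marker positivity has already been gated into +/- calls by the flow-cytometry analysis.

```python
def memory_subset(ccr7_pos: bool, cd45ra_pos: bool) -> str:
    """Classify T cells by the combined CCR7/CD45RA phenotype, as in [20]."""
    if ccr7_pos and cd45ra_pos:
        return "naive"                                # CCR7+ CD45RA+
    if ccr7_pos:
        return "central memory (CM)"                  # CCR7+ CD45RA-
    if not cd45ra_pos:
        return "effector memory (EM)"                 # CCR7- CD45RA-
    return "terminally differentiated (Temra)"        # CCR7- CD45RA+

# Example: a CCR7- CD45RA- event is called effector memory
print(memory_subset(ccr7_pos=False, cd45ra_pos=False))
```

Applied separately within the CD4+ and CD8+ gates, this lookup yields the four subset frequencies whose group differences are reported next.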
Compared with controls, HER2+ patients carried a higher percentage of CM CD4+ T cells (P = 0.003), whereas HER2- cases showed significantly higher numbers of CM CD8+ T lymphocytes (P = 0.022). Moreover, a higher percentage of EM CD4+ cells (P = 0.023), at the expense of the CM subset (P = 0.023), was found in HER2- cases compared with HER2+ patients (Figure 1f). Finally, the two groups of breast cancer patients showed a completely different memory subset distribution among CD8+ cells, with a prevalence of naïve (P = 0.002) and CM cells (P = 0.005) in HER2- cases and higher percentages of EM (P = 0.005) and Temra cells (P = 0.012) in HER2+ patients. Several studies argued for the unfavorable involvement of circulating regulatory T cells (Tregs) in cancer progression, demonstrating the presence of increased numbers of CD4+CD25high FoxP3+ cells especially in metastatic cancers [21]. Considering that this immunophenotypic characterization is unsuitable for uniquely defining this specialized T cell subset, we used IL-7 receptor (CD127) down-regulation as a further feature indicative of suppressive functions [22] and identified Tregs as CD4+CD25high CD127low FoxP3+ cells. The analysis showed that the total numbers of so-determined Tregs were not significantly different between patients and controls. Nevertheless, considering the two groups of patients separately, HER2- patients exhibited a significantly higher percentage of circulating Treg cells (P = 0.02) when compared with healthy donors (Figure 1e). Characterization of the lymphoid infiltrate in an unrelated series of locally advanced breast cancers disclosed a significantly higher prevalence of intratumor CD8+ T cells in HER2+ cases (median 1000, range: 730 to 1880) as compared with the HER2- subgroup (median 234, range: 117 to 890, P = 0.04). TiA-1+ cells were also more abundant in HER2+ tumors and only rarely detected in the HER2- subgroup (not shown). Conversely, the median number of FoxP3+ cells was higher in HER2+ cases (170, range: 50 to 508) than in HER2- tumors (25, range: 10 to 108, P = 0.04; Figure 2). HER2+ patients display enhanced CD8+ T cell responses to different TAA-derived epitopes compared with both HER2- patients and healthy donors Spontaneous CD8+ T cell responses to 13 TAA-derived peptides (Her2, muc-1, mam-A, trag-3, survivin, bcl-xL; Table 2) were evaluated by IFN-γ ELISPOT assay in six HER2+ and seven HER2- HLA-A*0201+ patients and five HLA-A*0201+ age-matched healthy women. IFN-γ-secreting CD8+ T cells were detected in all samples (Figure 3), although higher numbers of CD8+ T cells specific for all epitope peptides investigated were observed in both HER2+ and HER2- patients compared with healthy donors (both P < 0.002). Notably, the number of circulating TAA epitope-specific CD8+ T cells was higher in HER2+ cases compared with HER2- (P < 0.005), particularly against peptides derived from trag-3, muc-1, and bcl-xL (Figure 3). Empty monocytes were considered as negative controls and the number of spots was usually at the background level (<10 SFC/50,000 CD8+ cells). No significant differences were found between patients and donors against the Flu M1 GIL 58-66 peptide, used as positive control (49 < SFC/50,000 CD8+ cells < 76), and similar levels of responses were also observed against PHA (167 < SFC/50,000 CD8+ cells < 221), confirming a retained T cell responsiveness.
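A minimal sketch of the ELISPOT response call described in the Methods: triplicate spot counts are averaged, the negative-control background is subtracted, and wells with at least five IFN-γ spots are scored as responses. The counts below are hypothetical.

```python
def elispot_response(stim_counts, neg_counts, min_spots=5):
    """Background-subtracted spot-forming cells (SFC) and the significance call."""
    sfc = sum(stim_counts) / len(stim_counts) - sum(neg_counts) / len(neg_counts)
    sfc = max(sfc, 0.0)  # clamp: subtraction should not yield negative SFC
    return sfc, sfc >= min_spots

# Hypothetical triplicates for one peptide, counts per 50,000 CD8+ T cells:
sfc, positive = elispot_response([22, 18, 20], [3, 2, 4])
print(f"{sfc:.1f} SFC -> {'response' if positive else 'no response'}")
```

Running this per peptide and per subject reproduces the kind of per-epitope SFC values compared between the HER2+, HER2- and control groups above.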
HER2+ patients show a similar serum cytokine profile with respect to donors
The serum levels of 10 different cytokines were evaluated in all 61 patients and 23 healthy women, considered as internal reference values. No significant difference was observed when comparing the global cohort of patients with controls. However, when breast cancer patients were considered as two different groups based on HER2 expression, the comparison revealed that HER2− patients carried significantly lower amounts of IL-2 (P = 0.0222), IL-8 (P = 0.009), and IL-6 (P = 0.016) with respect to donors (Figure 4), whereas the cytokine profile of HER2+ cases was almost superimposable on that of healthy women (Figure 4). Notably, the most evident differences emerged from the comparison between the two groups of patients, with HER2− cases showing significantly reduced levels of IL-2 (P = 0.0229), IL-1β (P = 0.0207), IL-8 (P = 0.007), IL-6 (P = 0.0001), and IL-10 (P = 0.0247; Figure 4), independently of other clinical-pathological parameters such as HR expression and tumor grading (P-trend in multivariate analysis: IL-1β P = 0.002, IL-2 P = 0.004, IL-6 P = 0.004, IL-8 P = 0.02, IL-10 P = 0.02; Table 3). No differences were found in IL-1α, TNF-α, IL-12p70, GM-CSF, and TGF-β serum concentrations.

Discussion
Evidence accumulated so far indicates that the immune system can influence the initiation and development of cancer, and it is widely believed that T lymphocytes represent the most potent antitumor effector cells. In this respect, there is an urgent need to develop therapeutic approaches able to preserve or only minimally impair immune functions, since ADCC-promoting therapeutic antibodies, such as trastuzumab, and cancer vaccines are being increasingly used as adjuvant and neoadjuvant treatment modalities [23]. Furthermore, the therapeutic efficacy of some "conventional" drugs, such as doxorubicin and paclitaxel, also involves immunomediated mechanisms [11,24]. On these grounds, we considered it clinically relevant to extensively investigate at diagnosis the immunological profile of locally advanced breast cancer patients who are candidates for NC including immunomodulating drugs. The present study provides baseline immunological data that may constitute a reference for an informative monitoring of immune responses during NC (ongoing study). Given the feasibility of monitoring antitumor responses during therapy, and considering that sera from breast cancer patients should represent a valuable discovery tool to identify potential targets involved in breast cancer progression [25], we focused mainly on peripheral blood as the most easily accessible way to detect and measure immune changes. It is worth considering that most studies reporting analyses of systemic immunologic parameters in breast cancer patients included extremely variable series of cases, thus obtaining wide ranges of values and often conflicting results. Our study does not suffer from this limitation, being based on a relatively homogeneous group of patients including only locally advanced cancers and excluding metastatic patients, who are the main contributors to outliers [26,27]. Previous data reported a significant increase in circulating B lymphocytes and NK cells in breast cancer patients in comparison with control groups [14], with a lower total number of T lymphocytes [15].
Interestingly, in our study, HER2+ patients retained a normal distribution of NK cells and T and B lymphocytes compared with healthy donors, whereas HER2− cases displayed lower percentages of T cells and higher numbers of NK and, to a lesser extent, of B cells. The increased NK cell numbers and activity reported in pre-treated patients were related to time to treatment failure [14] and were proposed to be the result of activation of innate immunity by the tumor or to depend on a defective regulation of NK cells in these patients [28]. Conversely, Dewan and colleagues observed significantly lower NK cell activity in PBMC from breast cancer patients as compared with that of healthy individuals. Intriguingly, this defect was more pronounced in HER2− breast cancer patients [15], suggesting an underlying NK dysfunction in this subgroup. Further, we noticed that HER2+ patients showed an increased CD4/CD8 ratio with respect to HER2− cases, a feature previously associated with a better chance of responding to NC [14]. However, opinions regarding which T-cell subset provides the best tumor protection, especially among memory sub-populations, are still controversial. Indeed, experimental evidence suggests that central (TCM) and effector (TEM) memory T cells can each confer a protective advantage [17-19], with TCM providing a reservoir of antigen-specific T cells, ready to expand and replenish the periphery upon secondary challenge, and TEM displaying a more activated phenotype capable of granzyme B and perforin expression, IFN-γ secretion, and tumor-specific killing in vitro [17]. Compared with healthy donors, our data disclosed a higher percentage of CD4+ TCM (CCR7+CD45RA−) in HER2+ patients and higher numbers of CD8+ TCM in HER2− cases. This observation may suggest the existence, in both groups, of an active, though predominantly memory T-cell driven, antitumor response which may benefit from and respond to recall antigens from a cancer vaccine. Interestingly, in our study the main variations in memory subset distribution were found between HER2− and HER2+ patients. Although the CD4+ T cell population of HER2+ cases disclosed a favorable shift to the TCM phenotype, in these same patients CD8+ T lymphocytes revealed a predominance of TEM and terminally differentiated cells. In contrast, HER2− patients exhibited mainly CCR7+ CD8+ T cells (naïve and TCM). It should be considered that peripheral CD8+ T cells expressing effector functions against viral [29,30] or tumor antigens [31] are almost uniformly CCR7− and are endowed with full effector capacities, far greater than those of naïve or TCM cells.

Figure 3. CD8+ T cell responses to multiple breast cancer-associated antigenic epitopes assessed by IFN-γ ELISPOT (interferon-γ enzyme-linked immunosorbent spot). All tests were performed using CD8+ purified T cells as effectors and autologous peptide-loaded monocytes as antigen-presenting cells (APCs; effector:target ratio of 1:1). The number (enumerated as SFC, spot-forming cells) of TAA-specific (or FluM1-specific, flu matrix protein 1-derived epitope) circulating CD8+ T cells was investigated in HER2− (neg = 7) and HER2+ (pos = 6) breast cancer patients, whereas antigen-specific responses of healthy women were used as controls (ctrl = 5). PHA-loaded and empty monocytes (EMPTY MONO) were used as positive and negative controls, respectively. For peptide amino acid sequences, refer to Table 2. HER-2, human epidermal growth factor receptor-2; TAA, tumor-associated antigens.

Moreover, as the differentiation to terminal effector cells is related to increased
cytolytic potential of CD8+ T cells, we hypothesize that the extent of maturation might be due to an effective antitumor response [18]. Pre-existing T cell responses to TAA have been reported in patients with solid tumors [32]; however, these responses usually involve a low frequency of antigen-specific T cells, not detectable in the majority of patients [18]. In this regard, literature data on circulating tumor antigen-specific T cells in breast cancer patients are still conflicting, probably because of the predominant focus on single epitopes [18]. Circulating T cells able to recognize CD8+ epitopes of HER2 [12], MUC-1 [33], mammaglobin-A [34], Trag-3 [35], survivin [36], or bcl-xL [13] have been described in distinct papers, but the evaluation of multiepitopic antitumor responses is still lacking. We therefore assessed the amount of IFN-γ-secreting CD8+ T cells specific for a broad spectrum of HLA-A*0201 peptides derived from Her2, muc-1, mam-A, trag-3, survivin, and bcl-xL. Notably, we found increased IFN-γ release against all screened epitopes in the global cohort of patients compared with healthy donors, demonstrating the existence of spontaneous T cell responses against multiple TAA in locally advanced breast cancer patients. The ability to stimulate the generation of antitumor CD8+ T cells seemed to be more pronounced in HER2+ cancers, especially towards Her2-, trag-3-, muc-1-, and bcl-xL-derived epitopes. This peculiarity may be useful in the design and optimization of vaccine strategies, which could take advantage of the host's pre-existing antitumor immune response. Moreover, the increased numbers of TAA-specific circulating CD8+ T cells characterizing HER2+ patients may positively contribute to the clinical efficacy of trastuzumab, which is able to sensitize HER2-overexpressing tumors to killing by HER2-specific CTLs [7], and may enhance the antigen-specific immune responses promoted by doxorubicin and paclitaxel [37]. Our findings at the systemic level are also consistent with the demonstration of a significantly higher prevalence of CD8+ T lymphocytes infiltrating HER2+ tumors, which could contribute to a better clinical outcome [38]. This suggests that HER2 overexpression may be associated with enhanced immunogenicity of tumor cells and/or with a less immunosuppressive microenvironment. Further characterization of the activation state of lymphocytes infiltrating these tumors is, however, required to draw definitive conclusions in this respect. It is well recognized that tumors may down-regulate the immune response to tumor antigens by inducing several immune suppressor mechanisms, including Treg recruitment. Increased numbers of Tregs have been correlated with greater disease burden and poorer overall survival [39]. In particular, Treg cells are augmented in the peripheral blood and within the tumor microenvironment in patients with breast carcinomas [40]. Our analysis of FoxP3 expression in intratumor lymphocytes disclosed a significantly higher prevalence of FoxP3+ cells in HER2+ tumors.

Table 3. Odds ratio (OR) and 95% confidence interval (CI) adjusted for hormone receptor expression and tumor grading according to cytokine levels in HER2+ and HER2− patients. HER-2, human epidermal growth factor receptor-2; IL, interleukin.

This may be a homeostatic
consequence of the higher content of infiltrating CD8+ T cells detected in these cancers, or may reflect an active local recruitment of Tregs. It is worth considering in this respect that conflicting results have been reported with regard to Treg frequencies in HER2− versus HER2+ breast cancers [27,41], discrepancies that could be due in part to tumor staging, but also to the different and often partial phenotypic markers used to identify regulatory T cells. This suppressor cell subset is often identified as CD4+CD25high cells [27] or as CD4+FoxP3+ lymphocytes [41], even if these markers are unable to uniquely define a regulatory T cell phenotype. Therefore, the use of FoxP3 as a single immunohistochemical marker to identify Tregs may have overestimated the number of Tregs in the HER2+ subgroup. In fact, this approach cannot discriminate true FoxP3+ Treg cells from T lymphocytes activated by local stimuli and therefore transiently expressing FoxP3 without being endowed with immunosuppressive functions. To overcome these limitations, we have bona fide considered as circulating Tregs only cells expressing the CD4+CD25highCD127lowFoxP3+ phenotype, as the down-regulation of the IL-7 receptor (CD127) is associated with suppressive functions [22]. This extended phenotypic definition did not disclose significant differences in Treg distribution between the whole group of breast cancer patients and controls, but it revealed higher numbers of circulating Tregs in HER2− patients with respect to donors. Tregs exert their suppressor activity by inhibiting T cell proliferation, NK cell-mediated cytotoxicity [42], and TAA-specific immunity [43]. On these grounds, the nearly physiological number of circulating Tregs displayed by stage II and III HER2+ breast cancer patients may imply a favorable background for NK-involving therapy such as monoclonal antibodies (trastuzumab), and may further benefit from the spontaneous enhanced antitumor T cell responses. The likely retained immune proficiency of HER2+ patients is supported by an apparently unchanged cytokine profile, as highlighted by the comparison with serum cytokine levels of healthy women, which displayed no significant differences. It is widely accepted that solid tumors are associated with a pathologic shift toward the T-helper type 2 cytokine pattern, whereas T-helper 1-induced inflammation inhibits tumor growth. In breast cancer patients, depressed serum levels of IL-2, GM-CSF, and IFN-γ and enhanced TNF-α and IL-6 amounts were reported in comparison with controls [11], and some of these immune dysfunctions are also present in early-stage tumors. In our study, we noticed significantly lower levels of IL-2 and IL-8, but also of IL-6, in HER2− patients with respect to healthy women. Increased serum levels of IL-6 were previously reported in progressive recurrent breast cancer patients [44], especially in the presence of metastasis [26], whereas no metastatic cases were enrolled in our cohort. Interestingly, the comparison of cytokine layouts between HER2− and HER2+ patients disclosed pathogenically relevant differences between the two groups.
In particular, HER2− patients showed lower levels of IL-2, previously associated with relapse of the disease [45,46], reduced amounts of the pro-inflammatory cytokines IL-1β and IL-6 (repressors of in vitro cell cycle progression) [47], and reduced levels of IL-8, recently reported also in the in vitro comparison of HER2− and HER2+ breast cancer cell lines and in serum samples from metastatic breast cancer patients [25]. Finally, the pleiotropic cytokine IL-10, which may exert tumor-promoting activity or considerable antitumor effects, at low and high concentrations, respectively [46], was detected at lower amounts in HER2− patients. On the other hand, the higher levels of IL-2 in HER2+ patients may be consistent with an activation of T cells by TAA-derived peptides [48]. Notably, the differences in cytokine levels observed between HER2− and HER2+ cases were independent of the clinical-pathological features shown by the two cohorts of patients (Table 3). The different immunologic profile of patients with HER2− and HER2+ tumors highlights the importance of considering them as two distinct populations, not only with regard to tumor characteristics, but also concerning their immune status. Our analysis, however, may have some limitations, mainly due to the quite large number of immunological factors considered. This multiparametric approach considerably restricted the global case series, thus limiting the possibility of making comparisons of adequate statistical power between subpopulations of HER2− and HER2+ patients. Larger series should therefore be investigated to conclusively rule out the possible influence of distinct clinico-pathological variables on the immunological correlations observed. Moreover, the analysis of TAA memory T-cell responses was confined to the peripheral immune compartment; an in situ comparative survey is needed to confirm our data. An interesting issue that needs further investigation is the assessment of whether the higher percentage of effector memory CD8+ T cells observed in HER2+ patients correlates with the enhanced response against TAA noticed in this population. In perspective, therefore, the characterization of relevant immune parameters may also have a prognostic value, as recently emphasized by the finding that a decreased expression of immune response-associated genes was associated with poor prognosis, particularly in HER2+ cases [49]. Accordingly, the apparently preserved immunological capacity of HER2+ patients may constitute a favorable milieu for several immune system-based therapies. In this respect, a detailed characterization of the immunological profile at diagnosis may be useful to individualize the most promising therapeutic choice or to further contribute to the design of therapeutic schedules. Furthermore, since some chemotherapeutic drugs display beneficial effects on host immune functions [11,50], our results suggest that a careful immune monitoring of breast cancer patients during NC may be useful to predict the response to therapy and to obtain a better prognostic definition.

Conclusions
In conclusion, our data indicate that, compared with HER2− cases, patients with HER2-overexpressing, locally advanced breast cancer show a more limited tumor-related immune suppression. This may account for the clinical benefit achieved in this subset of patients with the use of drugs acting through immune-mediated mechanisms.
These findings also provide the rationale for further studies aimed at assessing the possible predictive and/or prognostic role of immune markers in locally advanced breast cancer patients undergoing NC.
Combinational inhibition of EGFR and YAP reverses 5-Fu resistance in colorectal cancer

Yes-associated protein (YAP) is a transcriptional coactivator that promotes cell proliferation, migration, and tissue homeostasis in colorectal cancer (CRC). Here, we established a 5-Fu resistant CRC cell line (SW620R) and examined the role of YAP in chemotherapy resistance. We showed that YAP promoted cell proliferation, migration, and chemotherapy resistance in CRC. To increase the efficacy of CRC treatment, we employed another therapeutic target, EGFR, which interacts with the upstream signaling molecules of YAP in the Hippo pathway. Verteporfin (VP), a YAP-specific inhibitor, inhibits YAP activity by blocking the YAP-TEAD complex in the cell nucleus, and AG1478, an inhibitor of EGFR/ErbB1, induces the phosphorylation and degradation of YAP. We found that combinational inhibition of YAP by VP and AG1478 synergistically suppressed CRC development and reversed chemotherapy resistance in vitro and in vivo. Therefore, our results demonstrate a novel therapeutic strategy, the combination of inhibitors targeting EGFR and YAP, to suppress and reverse chemotherapy resistance in colorectal cancer.

Introduction
Colorectal cancer (CRC) is the third most common malignant tumor (by incidence and mortality) in men and the second in women worldwide [1]. In 2018, over 1.8 million new colorectal cancer cases and 881,000 deaths were estimated to occur [2]. CRC patients at a progressive stage can often benefit from neoadjuvant chemotherapy based on a combination of fluorouracil (5-Fu) and platinum-based drugs in the perioperative period of radical colorectal surgery. More than 80% of advanced CRC patients eventually develop relapsed disease despite their initial response to chemotherapy [3]. Thus, 5-Fu resistance is the key barrier to improving therapy efficacy in CRC patients. The Hippo pathway, highly conserved in mammals, regulates intrinsic organ size by controlling apoptosis and cell proliferation. Yes-associated protein (YAP) is a transcriptional effector of the Hippo pathway [4]. The key event in Hippo pathway-mediated cancer cell proliferation and migration is the translocation of YAP into the cell nucleus, where it exerts its transcriptional role [5,6]. YAP is maintained in a highly active form in human malignancies, which suggests that YAP can be an attractive therapeutic target for cancer treatment. Verteporfin (VP), a benzoporphyrin derivative, is clinically used as a photosensitizer and was recently shown to suppress Hippo pathway signaling by blocking the interaction between YAP and the TEA domain transcription factor [7]. Therefore, conventional chemotherapeutics combined with VP could potentiate chemotherapy effectiveness, overcoming acquired resistance to initial chemotherapy [8]. EGFR-targeting agents, including the monoclonal antibody cetuximab and the tyrosine kinase inhibitors (EGFR-TKIs) gefitinib, AG1478, and erlotinib, have been commonly used in metastatic gastrointestinal cancer and have been proved to improve progression-free survival [9,10]. In colorectal cancer cells, VP can reverse primary resistance to EGFR inhibition [11]. Since dual inhibition targeting EGFR and YAP could provide a better therapeutic effect than single inhibition of either [12], we set out to explore whether the combination of YAP and EGFR inhibition reverses chemotherapy resistance in colorectal cancer. In the present study, we investigated the effect of combinational inhibition of YAP and EGFR on 5-Fu resistance in CRC.
We showed that 5-Fu resistance upregulated YAP protein levels in 5-Fu resistant CRC cells, which could serve as a prognostic marker for 5-FU-based treatment. In addition, we found that combinational inhibition of YAP and EGFR reversed 5-Fu resistance in CRC in vivo and in vitro. Our study provides the underlying mechanisms of 5-Fu resistance in CRC and a combinational therapeutic strategy for reversing chemotherapy resistance in CRC.

Patient selection and tissue microarray preparation
Patients enrolled into the current study met the following criteria: 1. Patients were admitted to the Department of Gastrointestinal Surgery of Xiangya Hospital of Central South University (Changsha, China) from July 2017 to December 2018. 2. Patients were diagnosed by pathological review of tumor biopsies along with enhanced abdominal CT or MRI scan. 3. Radical resection of colorectal cancer was performed on the patients. 4. Post-operative adjuvant chemotherapy based on fluorouracil (5-FU) was administered for 6-12 cycles over six months. Patients who received neoadjuvant therapy before surgery were excluded. The 84 enrolled patients were called each month and followed up at regular intervals. Cancer relapse in these patients was diagnosed by rising serum tumor markers and imaging examination. The study was approved by the Research Ethics Committee of Xiangya Hospital, Central South University. Cancer tissues were excised and fixed in 10% neutral-buffered formalin and then embedded in paraffin blocks. Each paraffin-embedded section was cut 4 μm thick, deparaffinized, and rehydrated. HE staining was performed to detect and mark typical adenocarcinoma areas in CRC tissues, and the obviously normal mucosa in CRC-adjacent tissues was evaluated by a professional pathologist.

Immunohistochemistry
Immunohistochemical staining for YAP (1:200) and EGFR (1:400) was performed on the tissue slides. Negative controls were prepared by substituting the primary antibody with non-immune goat serum. Four areas on each slide were randomly chosen for IHC scoring. The staining results were evaluated by two independent pathologists (double-blinded) at the same time. Means were taken for the final analysis. Samples in which the staining intensity was absent or weak and less than half of the cells were stained were considered negative (-), while samples with moderate or strong staining in more than half of the cells were considered positive (+).

Establishing the 5-Fu resistant cell line
Four-week-old male athymic NOD/SCID mice were used to establish the chemotherapy-resistance model. The mice were subcutaneously injected with SW620 cells (2×10⁶ cells in a 200 μL volume). After 14 days of inoculation, 5-FU was applied by IP injection at 30 mg/kg per mouse thrice a week for four weeks. After four weeks, the mice were sacrificed, and the tumors were collected and digested into primary cells, SW620R. To obtain the 5-Fu resistant cell line, tumor cells were isolated and purified using ACCUMAX™ (Innovative Cell Technologies) according to the manufacturer's instructions. In brief, the tumor tissues were rinsed with sterile DPBS twice and transferred to a petri dish containing sterile DPBS. The tissues were cut with surgical scissors into small pieces approximately 1 mm in size, then transferred to a 50 mL sterile centrifuge tube. The pieces were settled by centrifugation and the supernatant carefully removed, twice.
The pieces were transferred to a new 50 mL sterile centrifuge tube, treated with ACCUMAX™, and then incubated on an agitator at RT for 30 minutes. After the incubation, the cells were isolated with cell strainers, and the supernatant was removed by centrifugation at 900 rpm for 5 minutes. The cells were resuspended in DPBS and centrifuged for washing twice. After washing, the primary cells (SW620R) were cultured in RPMI 1640 medium containing 5-FU (1 μM).

Cell proliferation assay
The cells were incubated in 96-well plates at a density of 4×10³ cells per well overnight. At different time points, 10 μL of MTT dye was added and incubated for 4 hr at 37 °C. Then, the original media was removed, 100 μL of DMSO was added to each well, and the plate was shaken for 10 min. The spectrometric absorbance at wavelengths of 570 and 630 nm was determined with a microplate reader (Tecan, USA).

Western blot analysis
Anti-YAP antibody (#14074) and anti-EGFR antibody (#4267) were purchased from Cell Signaling Technology (Beverly, MA, USA). The transferred membranes were incubated overnight (more than 16 hr) at 4 °C with the primary antibody (1:1000) and then with the secondary antibody (1:3000) for 1 hr. Chemiluminescence detection was performed using the Pierce ECL Western Blotting Substrate (Thermo Scientific).

Transduction
When SW620 cells reached 80%-90% confluence on the day of transduction, the three lentiviral stocks (YAP-shRNA, EGFR-shRNA, and Control-shRNA, purchased from Sigma) were respectively transduced into the cells with PEI (polyethylenimine). After 24 hr, these cells were replated in culture plates to obtain transiently transduced cells.

Colony formation assay
A total of 500 stably transfected CRC cells were seeded into each well of a six-well plate and incubated with 10% FBS media for 15 days, with the media replaced every 3-5 days. After 15 days, the colonies were fixed with formalin and stained with 0.1% crystal violet (Sigma Aldrich).

Wound healing assay
The cell monolayer was wounded by dragging a 10 μL pipette tip across it. The cells were washed to remove cellular debris and then allowed to migrate for 12 hr. Images were captured under an inverted microscope.

The expression of YAP and EGFR is manifested in recurrent human CRC
First, we examined the expression of YAP and EGFR in paraffin-embedded sections of 84 human CRC tissue samples (36 samples of non-recurrent CRC, 48 samples of recurrent CRC) by immunohistochemistry analysis. We found that the staining intensity of both YAP and EGFR in recurrent CRC was much stronger than that in non-recurrent CRC (Figure 1A and Table 1). In addition, YAP expression was positively correlated with EGFR expression in recurrent CRC (Figure 1A, Table 2, and Table 3). Notably, patients with high expression of YAP1 or EGFR had a lower survival rate than patients with low expression of YAP1 or EGFR in Kaplan-Meier survival analysis (Figure 1B and 1C). These results suggest that the expression of YAP and EGFR is manifested in human CRC recurrence and correlated with the survival rate of CRC patients.

5-Fu resistance increases YAP protein levels in CRC cells
To establish the 5-Fu resistant colorectal cancer (CRC) cell line, we employed a mouse chemotherapy resistance model. We injected SW620 cells into the flanks of NOD/SCID mice with 5-Fu (50 mg/kg, 3 times/week) and monitored tumor growth. After four weeks, we collected the tumors and isolated the primary cells (named SW620R) (Figure 1B).
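The MTT readout described in the methods above reduces to a background-corrected absorbance ratio between treated and untreated wells. The following is a minimal Python sketch of that calculation with hypothetical absorbance values; the dual-wavelength correction (570 nm signal minus 630 nm reference) follows the reader settings reported, while the exact normalization used by the authors is not specified.

```python
# Minimal sketch of MTT viability from dual-wavelength readings (hypothetical
# numbers). The 630 nm reference absorbance is subtracted from the 570 nm
# signal to correct for well-to-well background before normalizing to control.

def viability_percent(a570, a630, a570_ctrl, a630_ctrl):
    """Percent viability of treated wells relative to untreated control wells."""
    return 100.0 * (a570 - a630) / (a570_ctrl - a630_ctrl)

# Hypothetical example: treated well 0.62/0.08, control well 0.95/0.07.
print(round(viability_percent(0.62, 0.08, 0.95, 0.07), 1))  # 61.4
```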
To examine the 5-Fu resistance of SW620R cells, we tested and compared cell viability upon 5-Fu treatment in both SW620 and SW620R cells. We found that the viability of SW620R cells was much higher than that of SW620 cells, suggesting SW620R had acquired resistance to 5-Fu (Figure 1C). We next examined the YAP protein levels in various CRC cells, including SW620, Colo205, HCT15, and HCT116 cells, upon 5-Fu treatment and found that 5-Fu decreased YAP protein levels in a dose-dependent manner (Figure 1D). In addition, we examined YAP protein levels in both SW620 and SW620R cells and, interestingly, found that YAP protein levels in SW620R cells were much higher than those in SW620 cells (Figure 1E). These results suggested that 5-Fu decreased YAP protein levels, while 5-Fu resistance increased YAP protein levels in CRC cells.

YAP promotes tumorigenicity of CRC cells
To investigate the role of YAP in CRC cells, we established stable YAP knockdown (KD) cell lines (Figure 2A). We first tested viability upon 5-Fu treatment under YAP depletion conditions. The viability of YAP-KD cells was lower than that of control cells upon 5-Fu treatment (Figure 2B). We next evaluated tumorigenicity under YAP depletion conditions using the MTT assay, colony formation assay, and scratch wound-healing assay. We found that cell proliferation (Figure 2C), colony formation (Figure 2D), and migration (Figure 2E) were significantly decreased in YAP-KD cells compared to control. These results indicated that YAP provided resistance to 5-Fu treatment in CRC cells and promoted the in vitro tumorigenicity of CRC cells.

EGFR/YAP signaling drives 5-Fu resistance in CRC cells
Since YAP increased the tumorigenicity of CRC cells (Figure 2D and 2E), we next investigated whether YAP regulated 5-Fu resistance in CRC cells. We examined YAP protein levels upon treatment with verteporfin (VP), an inhibitor targeting the YAP interaction with TEAD, in CRC cells. We found that VP significantly decreased YAP protein levels in SW620R cells, but not in SW620 cells (Figure 3A). Since the EGFR signaling pathway has crosstalk with the Hippo/YAP signaling pathway in various cancers including CRC [13], we further investigated whether EGFR regulated YAP protein levels in CRC cells. We tested YAP protein levels upon either treatment with AG1478, one of the EGFR inhibitors, or stable knockdown of EGFR. EGFR inhibition or knockdown decreased YAP protein levels in SW620 cells (Figure 3B), suggesting EGFR positively regulated YAP protein levels. To assess the susceptibility to EGFR inhibition in 5-Fu resistant cells, we examined YAP protein levels upon treatment with AG1478. EGFR inhibition decreased YAP protein levels more significantly in SW620R cells than in SW620 cells (Figure 3C). To further confirm the role of YAP and EGFR in 5-Fu resistant cells, we examined cell viability using VP and AG1478. The viability of SW620R cells was more significantly decreased than that of SW620 cells by treatment with VP and AG1478 in the presence of 5-Fu (Figure 3D). These results suggested that EGFR/YAP signaling drove the 5-Fu resistance of CRC cells.

Combinational inhibition of YAP and EGFR suppressed 5-Fu resistance in vitro and in vivo in CRC
Since YAP and EGFR regulated 5-Fu resistance in CRC cells (Figure 3D), we next investigated whether combinational inhibition of YAP and EGFR could synergistically reduce chemotherapy resistance. We first examined the YAP protein levels upon combinational treatment with VP and AG1478.
The combinational treatment synergistically reduced YAP protein levels compared with single treatment with VP or AG1478 in SW620R cells (Figure 4A). We next examined in vitro tumorigenicity upon combinational treatment with VP and AG1478 in SW620R cells using the MTT, scratch wound-healing, and colony formation assays. The combinational treatment significantly decreased cell viability (Figure 4B), migration (Figure 4C), and colony formation (Figure 4D and 4E) in SW620R cells compared to the control. To further confirm the effect of combinational treatment, we tested in vivo tumorigenicity using a mouse xenograft model. We compared the therapeutic effects of single treatment with 5-Fu and combinational treatment with 5-Fu, VP, and AG1478 on SW620R xenograft tumors. The combinational treatment of 5-Fu, VP, and AG1478 more effectively suppressed tumor growth, whereas single treatment with 5-Fu had no significant effect on tumor growth compared to the control (Figure 4F and 4G). These results suggested that the combinational treatment of VP and AG1478 significantly reduced 5-Fu resistance in vivo and in vitro in CRC.

Discussion
CRC is characterized by poor prognosis and a high death rate, being the third most common cancer in worldwide pan-cancer statistics [14]. It has been reported that the expression of YAP is associated with cancer cell proliferation and chemotherapy resistance in solid tumors [11,15-17]. YAP, a transcriptional coactivator in the Hippo signaling pathway, binds TEAD after translocating into the cell nucleus [18]. In this study, we found that YAP expression in CRC recurrence was correlated with EGFR expression, both being manifested in human CRC patient samples (Figure 1A, Tables 1 and 2). In addition, initial treatment with 5-Fu led to a reduction of YAP protein levels, while 5-Fu resistance highly increased YAP protein levels in CRC (Figure 1). We also found that YAP played an important role in cell proliferation, migration, and 5-Fu resistance in CRC cells (Figure 2). These results indicated that YAP is also associated with cancer development and chemotherapy resistance in CRC recurrence. VP (an inhibitor of YAP) reduces the translocation and expression of YAP by specifically blocking the YAP-TEAD complex [19]. Since single inhibition of YAP by VP could not efficiently inhibit the cell viability of 5-Fu resistant CRC cells, we considered another therapeutic target related to YAP signaling. The EGFR signaling pathway is related to the YAP signaling pathway [20,21], and AG1478 (an inhibitor of EGFR/ErbB1) induces the phosphorylation and degradation of YAP in the cytoplasm [11,21]. We showed that YAP expression in CRC recurrence was correlated with EGFR expression in human CRC patient samples (Figure 1A, Tables 1 and 2), suggesting EGFR is related to YAP in CRC recurrence. We next examined whether inhibition of EGFR could suppress YAP protein levels by AG1478 treatment or EGFR knockdown in CRC. Interestingly, we found that inhibition of EGFR induced a decrease of YAP protein levels by either AG1478 treatment or EGFR knockdown. We next examined the synergistic effect on the inhibition of YAP protein levels by combinational treatment with VP and AG1478 in CRC. We found that the combinational inhibition of YAP and EGFR synergistically suppressed cell viability, colony formation, and migration in SW620R cells (Figure 4A-4E). In addition, the combinational treatment of 5-Fu with VP and AG1478 reversed the chemotherapy resistance in a mouse xenograft model of CRC (Figure 4F and 4G).
These results indicated that EGFR is related to chemotherapy resistance and could be a therapeutic target for chemotherapy-resistant CRC.

Figure 4 (panels F and G). Xenograft tumor development in NOD/SCID mice inoculated with SW620R cells treated with vehicle (control), 5-FU (30 mg/kg, 3 times/week, i.p.), or the combination of VP + AG1478 with 5-FU (VP 50 mg/kg, AG1478 50 mg/kg, and 5-FU 30 mg/kg, 3 times/week, i.p.) for 2 weeks. After 2 weeks, mice were sacrificed, and tumors were collected (F) and weighed (G) (n = 6). The results in B, E, and G represent the mean ± SD. **p < 0.01.

5-Fu is one of the first-line chemotherapeutic drugs for colorectal cancer patients [22,23]. Because primary or secondary resistance to 5-Fu causes treatment failure, metastatic CRC patients maintain a low 5-year survival rate. Therefore, resistance to 5-FU has been a major obstacle in chemotherapy for advanced CRC patients. Several targets that counteract 5-Fu resistance in CRC have been reported. For instance, 5-Fu resistance in CRC was more affected by cytoplasmic localization of expressed Nrf2 (cNrf2) than by nuclear localization (nNrf2) [24,25]. RV-59, a nitrogen-substituted anthra[1,2-c][1,2,5]thiadiazole-6,11-dione derivative, was suggested as an anti-tumor agent that effectively suppressed cNrf2 and thereby reversed chemotherapy resistance in CRC [26]. Andrographolide synergizes the cytotoxic effects of 5-FU in CRC by targeting BAX, which provides combinational treatment strategies for chemotherapy resistance in CRC patients expressing low levels of BAX protein [27]. Here, we suggested EGFR as a therapeutic target for CRC treatment, whose inhibition synergized with the effect of YAP inhibition by VP in 5-Fu resistant CRC. In the present study, we confirmed the role of YAP in CRC tumorigenicity in vitro and in vivo. We also found that the regulation of YAP by 5-Fu is one of the major mechanisms underlying YAP-driven CRC development and chemotherapy resistance. Furthermore, our results suggested that the combinational inhibition of EGFR and YAP provides synergistic efficacy in the treatment of chemotherapy-resistant CRC. In conclusion, our study will not only enhance our understanding of the EGFR/YAP signaling pathway in chemotherapy-resistant CRC development and progression, but will also provide a new strategy for CRC treatment in chemotherapy resistance.
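The synergy between VP and AG1478 is reported qualitatively above. One common way to quantify such drug-combination effects, not used by the authors and shown here only as an illustrative sketch with hypothetical inhibition fractions, is the Bliss independence model.

```python
# Bliss independence: if two drugs act independently, the expected combined
# fractional inhibition is fa + fb - fa*fb. Observed inhibition above this
# expectation suggests synergy. All numbers below are hypothetical.

def bliss_expected(fa: float, fb: float) -> float:
    """Expected fractional inhibition of an independent two-drug combination."""
    return fa + fb - fa * fb

fa, fb = 0.30, 0.25   # single-agent inhibition fractions (hypothetical)
observed = 0.65       # observed combination inhibition (hypothetical)
excess = observed - bliss_expected(fa, fb)
print(f"Bliss excess = {excess:.3f} (> 0 suggests synergy)")
```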
Optimization of the Decolorization of the Reactive Black 5 by a Laccase-like Active Cell-Free Supernatant from Coriolopsis gallica

The textile industry generates huge volumes of colored wastewater that require multiple treatments to remove persistent toxic and carcinogenic dyes. Here we studied the decolorization of a recalcitrant azo dye, Reactive Black 5, using a laccase-like active cell-free supernatant from Coriolopsis gallica. Decolorization was optimized in a 1 mL reaction mixture using response surface methodology (RSM) to test the influence of five variables, i.e., laccase-like activity, dye concentration, redox mediator (HBT) concentration, pH, and temperature, on dye decolorization. Statistical tests were used to determine regression coefficients and the quality of the models used, as well as significant factors and/or factor interactions. Maximum decolorization was achieved at 120 min (82 ± 0.6%) with the optimized protocol, i.e., laccase-like activity at 0.5 U mL−1, dye at 25 mg L−1, HBT at 4.5 mM, pH at 4.2, and temperature at 55 °C. The model proved significant (ANOVA test with p < 0.001): the coefficient of determination (R²) was 89.78%, the adjusted coefficient of determination (R²adj) was 87.85%, and the root mean square error (RMSE) was 10.48%. The reaction conditions yielding maximum decolorization were tested in a larger volume of 500 mL reaction mixture. Under these conditions, the decolorization rate reached 77.6 ± 0.4%, which was in good agreement with the value found at the 1 mL scale. RB5 decolorization was further evaluated using the UV-visible spectra of the treated and untreated dyes.

Introduction
Water is essential for all forms of life, making it the most important resource on Earth. However, water decline and depletion is an escalating problem driven by the increasing global population [1,2]. This increase in population also increases pollution, regardless of whether wastewater streams come from domestic, industrial, agricultural, or other origins [3]. A telling example is the use of pesticide-contaminated water for plant irrigation, which leads to the spread of carcinogenic chemicals in soil and, subsequently, all along the food chain [4]. This study aims to evaluate the potential of a crude Coriolopsis gallica laccase for the decolorization of the recalcitrant azo dye RB5. The fungal strain used in this study had been isolated from decayed acacia wood in northwest Tunisia [37]. Statistical optimization was performed using response surface methodology (RSM) (coupled screening and Box-Behnken designs) to determine optimized conditions for multifactorial experimentation and the interactions between variables. The optimized conditions for decolorization were first verified at a small scale (1 mL) and then tested in a larger volume of 500 mL. In addition, the UV-visible spectra of treated and untreated dye were analyzed.

Fungal Strain and Culture Conditions
This study used Coriolopsis gallica strain CLBE55 [ON340792] for laccase production. Coriolopsis gallica strain CLBE55 was deposited at the culture collection "Centre International de Ressources Microbiennes" (CIRM) under accession number BRFM 3473. Solid cultures of C. gallica were performed on PDA medium that contained 39 g of dehydrated medium (Accumix) suspended in 1000 mL of distilled water, sterilized by autoclaving at 120 °C for 30 min. Liquid pre-cultures were performed in 25 mL of Malt Extract medium (Sigma-Aldrich, St.
Louis, MO, USA) containing 30 g malt extract per L at pH 5.5, sterilized by autoclaving at 120 °C for 30 min. The pre-cultures were inoculated with 3 agar plugs (6 mm in diameter) cut from the growing edge of a plate stock culture and incubated at 30 °C for 3 days at 160 rpm. Mycelia from these three-day pre-cultures were then partially ground down using glass beads (0.6 mm). The mycelial mixture obtained was used to inoculate 500 mL Erlenmeyer flasks containing 100 mL of M7 medium [27,63]. The basal medium contained (in g L−1) glucose 10, peptone 5, yeast extract 1, ammonium tartrate 2, KH2PO4, and (NH4)6Mo7O24·4H2O 0.01. The pH was adjusted to 5.5, and cultures were incubated in a rotary shaker for 7 days at 30 °C and 160 rpm. On day 3 of incubation, 300 µM CuSO4 was added as a laccase inducer.

Laccase-like Activity Assays of the Cell-Free Supernatant from Coriolopsis gallica
The supernatant of a 7-day culture of C. gallica was filtered on a Miracloth membrane (Merck, Fontenay-sous-Bois, France) and centrifuged at 30 °C for 5 min at 10,000× g prior to use. The laccase-like activity of the cell-free supernatant was assayed by monitoring the oxidation of 5 mM 2,6-dimethoxyphenol (DMP) in 50 mM citrate buffer, pH 5 (469 nm, ε469 = 27,500 M−1 cm−1), in the presence of 50 µL of supernatant. The assay was carried out at 30 °C for 1 min. One unit of DMP-oxidizing activity was defined as the amount of enzyme oxidizing 1 µmol of substrate per minute.

Experimental Design and Data Analysis
An experimental design was used to optimize the enzymatic decolorization of the recalcitrant azo dye Reactive Black 5 (RB5) (Aldrich Chemical Co., St. Louis, MO, USA) using the culture supernatant of C. gallica. The chemical properties of the RB5 dye are reported in Table 1.

Plackett-Burman Design
Plackett-Burman experimental designs are useful first-step screening designs for identifying the most significant factors and weeding out uninfluential factors before further experimentation [64]. Here we applied a 15-run Plackett-Burman design, with 3 replicates including the center points, on five independent factors to determine their influence on RB5 decolorization. The five factors were laccase-like activity (x1), initial dye concentration (x2), HBT concentration (x3), pH (x4), and temperature (x5). The center point tested the linearity of the experimental points (external versus center points). The experimental design adopted required three levels, i.e., low (coded −1), medium (coded 0), and high (coded +1) (Table 2). Table 3 reports the Plackett-Burman design used and the percentage decolorizations achieved at 120 min. The reaction was performed in a total volume of 1 mL in 50 mM citrate buffer. All experiments were performed in triplicate. Results are presented as means ± standard deviation (Table 3). The first-order form of the equation adopted in this part of the study is [65] (Equation (1)):

ŷ = β0 + Σ(i=1..k) βi·xi    (1)

where ŷ is the fitted response (% decolorization at 120 min), β0 and βi are the intercept and linear coefficients of the model, respectively, xi is the level-coded factor variable, and k is the number of factors.
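Fitting the first-order screening model of Equation (1) amounts to ordinary least squares on the coded design matrix. The sketch below illustrates this in Python with a hypothetical two-level design and made-up responses; it is not the 15-run design of Table 3 nor the Minitab analysis used in this study.

```python
import numpy as np

# Minimal sketch of an ordinary-least-squares fit of Equation (1).
# Design matrix and responses are hypothetical placeholders (Plackett-Burman
# style orthogonal columns at coded levels -1/+1, columns x1..x5).

X_coded = np.array([
    [+1, +1, +1, -1, +1],
    [-1, +1, +1, +1, -1],
    [-1, -1, +1, +1, +1],
    [+1, -1, -1, +1, +1],
    [-1, +1, -1, -1, +1],
    [+1, -1, +1, -1, -1],
    [+1, +1, -1, +1, -1],
    [-1, -1, -1, -1, -1],
])
y = np.array([44.0, 12.0, 20.0, 15.0, 9.0, 51.0, 22.0, 10.0])  # % decolorization

X = np.column_stack([np.ones(len(y)), X_coded])  # prepend intercept column
beta, *_ = np.linalg.lstsq(X, y, rcond=None)     # beta = [b0, b1..b5]
print(np.round(beta, 2))
```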
Table 3. Plackett-Burman screening-plan runs and response as percent decolorization at 120 min. x1, x2, x3, x4, x5: coded variables for laccase-like activity, initial dye concentration, HBT concentration, pH, and temperature, respectively.

Decolorization was measured every 30 min over a 2-h incubation period by tracking the decrease in absorbance of the RB5 dye (598 nm). Percentage decolorization was calculated using the following formula (Equation (2)):

Decolorization (%) = ((Ai − At)/Ai) × 100    (2)

where Ai is the initial absorbance of the dye at the maximum wavelength before incubation with the enzyme, and At is the dye absorbance after 2 h of incubation. All experiments were done in a total volume of 1 mL reaction mixture in 50 mM citrate buffer.

Response Surface Methodology Using a Box-Behnken Design
The best conditions for RB5 decolorization were determined using RSM based on a Box-Behnken design with the influential variables obtained previously via the Plackett-Burman design. A total of 46 runs (Table 4) were performed in triplicate with the same 5 variables and factor levels as described above. Decolorization was fitted using a second-order polynomial equation, and multiple regression was performed on the data to obtain an empirical model associated with the factors. The general form of the second-order polynomial equation (Equation (3)) is:

ŷ = β0 + Σ βi·xi + Σ βii·xi² + ΣΣ βij·xi·xj    (3)

where ŷ is the response (percent decolorization at 120 min), β0, βi, βij, and βii are the model's intercept, linear, interaction, and quadratic coefficients, respectively, xi is the level-coded factor variable, and k is 5 in our experimental setup.

Design and Statistical Analysis
Minitab 16 Statistical Software (Minitab Inc., State College, PA, USA) was used for the experimental designs and statistical analysis. Factor coefficients were determined using the least-squares method. Analysis of variance (ANOVA) and a Student's t-test were used to identify the level of significance of both the fitted model and the factors and their interactions. The probability (p < 0.05) and high F-values (Fisher's test) demonstrated that the model and factors were significant. The quality of the model was analyzed using the coefficient of determination (R²), the adjusted coefficient of determination (R²adj), and the root mean square error (RMSE).

Decolorization Assay in 500 mL Volume
The optimal conditions determined for RB5 decolorization in a reaction volume of 1 mL were then tested on a larger scale (500 mL).
Experiments were performed in duplicate in 1 L Erlenmeyer flasks containing 500 mL of the following reaction mixture: 50 mM citrate buffer pH 4.2, 25 mg L−1 RB5 dye, 4.5 mM HBT, and 0.5 U mL−1 crude C. gallica laccase. The reaction mixture was incubated at 55 °C for two hours, and 1 mL samples were taken every 30 min over the 2-h incubation period. Percentage rates of decolorization were calculated following Equation (2).

RB5 Spectrum Analysis
The UV-visible spectrum of RB5 was analyzed to visualize its color and absorbance peak shift. Decolorization was carried out for 24 h under the reference conditions described in Table S1. After 24 h, UV-visible spectra were measured on a Genesys 50 UV-VIS spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA) running in the 200-800 nm wavelength range at 2-nm increment steps.

Screening Design
Preliminary experiments were performed to fix the levels of the factors. With a view to potential scale-up for textile industry applications, the reaction time was set at 2 h. Note that we ran tests at several reaction times (>2 h) and found no change in the final decolorization of the dye. In addition, a previous study had demonstrated that the main laccase of C. gallica was active and stable at acidic pH (4-7), retaining 40 to 80% of its initial activity over 24 h of incubation, and at the same high temperatures as tested here (up to 50-55 °C) [37], and that HBT was a powerful mediator for dye removal at up to 5 mM [37]. Based on these results, we thus performed the experimental design under these conditions. Table 3 shows the experimental percentages of RB5 decolorization for all 15 runs tested (Plackett-Burman design). All response values (percentage decolorization) presented very low standard deviations (from 0.1 to 3.2% units of decolorization) (Table 3), thus demonstrating good repeatability of all the experiments done under the different conditions. The Pareto chart of the standardized effects (Figure S1) shows that 4 of the 5 variables had a significant effect on RB5 decolorization (t factor > t Student (p = 0.05) = 2.024; p < 0.05) and that enzyme concentration was the most effective variable (p < 0.001) compared to pH (p < 0.001), initial dye concentration (p < 0.001), and HBT concentration, which was the least effective variable (p < 0.01). Temperature was found to be a non-significant variable but was kept for the second step of the RSM-based optimization protocol for two reasons: (1) its p-value (0.073) was very close to the 0.05 limit of significance; (2) temperature appeared insignificant because the responses at the extreme points (−1; +1) were very close to each other, whereas the center point differed very significantly from these values (Figure 1; p < 0.001). Statistical results obtained from Minitab 16 gave the model coefficients, and the fitted form of the equation was then represented as follows (Equation (4)):

ŷ = 18.36 + 15.61·x1 − 8.03·x2 + 5.71·x3 − 15.39·x4 + 3.84·x5    (4)

A positive-signed main effect of a factor indicates that using higher levels gave the most effective response, whereas a negative-signed main effect indicates that using lower levels was more effective. Thus, according to Equation (4), laccase-like activity (x1), HBT concentration (x3), and temperature (x5) were positive-signed, whereas dye concentration (x2) and pH (x4) were negative-signed.
Moreover, the coefficient of determination (R²), the adjusted coefficient of determination (R²adj), and the root mean square error (RMSE) were 85.62%, 83.35%, and 12.5% (units of RB5 decolorization), respectively, thus demonstrating the goodness-of-fit of the adopted model (Equation (4)). Figure 1 illustrates the main effects plot of the five tested variables, which shows that no linear correlation could be found between the center points and the extreme points. This nonlinearity, which was very significant (p < 0.001), indicated that a second-order model was required to fit decolorization as a function of all factors. We consequently applied a second, response-surface experimental design for the same variables in our protocol optimization step.

Box-Behnken Design
Response surface methodology based on a Box-Behnken design was performed to optimize (via a second-order polynomial model) the conditions of RB5 decolorization by the crude laccase obtained from C. gallica. The Box-Behnken design also statistically tested the same 5 variables as previously examined by the Plackett-Burman design. Table 4 reports the percentage decolorization with standard deviations for all tested conditions. The standard deviations were relatively low (from 0.1 to 7.7% in percentage units of decolorization). R² was 89.78%, R²adj was 87.85%, and RMSE was 10.48% (in percentage units of decolorization), thus indicating good agreement between experimental and predicted values using the proposed model. The second-order polynomial equation, indicating the main effects, the interactions among variables, and the quadratic effects, with coefficients obtained from the statistical analysis, is represented as Equation (5). According to Equation (5), laccase-like activity (x1), initial dye concentration (x2), and pH (x4) were the most effective variables, showing the highest coefficients in the linear regression, with positive (x1) and negative (x2, x4) effects.
Moreover, as illustrated in Table 5, ANOVA on RB5 decolorization confirmed that p-values were less than 0.001 and 0.01 for the linear regression terms (factors without their interactions), thus making all factors highly significant. Regarding the quadratic effect of the factors, some of them were statistically non-significant, with p-values over 0.05, such as initial dye concentration (p = 0.465) and temperature (p = 0.089), whereas the rest of the quadratic-term factors (laccase-like activity, HBT concentration, and pH) had a significant influence (p < 0.001). Regarding the interactions between the five variables, 5 of the 10 were significant. The highest p-values found were for pH × temperature (p = 0.592) and dye × HBT concentration (p = 0.561). pH × laccase-like activity and pH × HBT concentration were the most significant and valuable interactions (with the highest negative coefficients; Equation (5)) and had a significant influence on RB5 decolorization (p < 0.001). Moreover, the initial dye concentration × pH interaction was found to be significant, with a positive-signed effect and a p-value < 0.01. Residual plots (Figure 2) produced using the Box-Behnken design showed that all points were aligned on the normal line, confirming that the model used (Equation (5)) fits the experimental values (Figure 2A). This is also confirmed by the residual vs. fit and residual vs. order plots, which demonstrated that all experimental points were randomly distributed within the whole domain (Figure 2B,C). Figure 2D, presenting the frequency of residuals, clearly shows that the highest frequency (40%) was for zero residuals. The second-highest frequency (about 25%) was for a residual equal to 5 (in percentage units of decolorization), and all the remaining residual values had a frequency of less than 10%. Based on these findings, the adopted model showed a good fit.
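The goodness-of-fit metrics quoted throughout (R², R²adj, and RMSE) can be computed directly from observed and fitted responses. A minimal Python sketch follows, using hypothetical placeholder values rather than the actual Box-Behnken data.

```python
import numpy as np

# Minimal sketch of the R2, adjusted R2, and RMSE computations used to judge
# model quality. Observed/fitted values below are hypothetical placeholders.

def fit_metrics(y_obs, y_fit, n_params):
    """Return (R2, adjusted R2, RMSE); n_params counts coefficients incl. intercept."""
    y_obs, y_fit = np.asarray(y_obs, float), np.asarray(y_fit, float)
    resid = y_obs - y_fit
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y_obs - y_obs.mean()) ** 2)
    n = len(y_obs)
    r2 = 1.0 - ss_res / ss_tot
    r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - n_params)
    rmse = np.sqrt(ss_res / n)
    return r2, r2_adj, rmse

y_obs = [80.1, 30.2, 55.4, 18.9, 62.3, 41.0]  # % decolorization (hypothetical)
y_fit = [78.0, 33.5, 52.1, 21.0, 60.0, 44.2]  # model predictions (hypothetical)
print(fit_metrics(y_obs, y_fit, n_params=3))
```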
Iso-response plots (contour curves) were analyzed to visualize the interactions between factors. Figure 3A, showing RB5 decolorization computed as a function of dye concentration and HBT concentration (all other factors held at the center level), indicates that the increase in dye concentration from 25 mg L−1 (level −1) to 125 mg L−1 (level +1) decreased RB5 decolorization from 80% to 30%. Increasing HBT concentration to 2.5 mM (center of the experimental domain, level 0) improved RB5 decolorization compared to the condition with 0.5 mM of HBT. Contour plots for the laccase-like activity × temperature interaction (Figure 3B) showed that increasing both factors improved decolorization percentages from 20% to more than 70%. Contour plots for the laccase-like activity × pH interaction (Figure 3C) showed that the maximum percentage decolorization, at more than 75%, occurred at levels +0.75 and ~−0.5, respectively. Moreover, percentage decolorization as a function of pH × temperature (Figure 3D) and pH × HBT concentration (Figure 3E) reached more than 68% in both cases. However, the lowest levels of both the pH and dye concentration variables together are likely to exceed 85% RB5 decolorization (Figure 3F). The optimized conditions obtained here were adopted and repeated four times experimentally, reaching 82 ± 0.6% decolorization.

Decolorization Process in 500 mL Volume

The optimized condition-set (laccase-like activity 0.5 U mL−1, dye concentration 25 mg L−1, HBT concentration 4.5 mM, pH 4.2, and temperature 55 °C) obtained via the Box-Behnken design was tested in a total reaction volume of 500 mL to verify the conditions at larger volume. Figure 5 shows the kinetics of RB5 decolorization as a function of incubation time, reaching 77.6 ± 0.4% decolorization within 2 h of incubation (73.3 ± 0.4% in 30-min time-steps), followed by a gradual slight increase in decolorization to reach 84.7 ± 0.6% at 6 h. At 24 h of incubation, decolorization reached 86.4 ± 0.4%. The RB5 color changed from dark blue to intense yellow (Figure S2) during the decolorization process.
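Decolorization percentages like those tracked in Figure 5 are conventionally derived from the drop in absorbance at the dye's absorption maximum. The sketch below shows this standard calculation; the 598 nm maximum comes from the spectrum analysis reported in the next section, while the absorbance readings themselves are hypothetical placeholders chosen to reproduce the reported kinetics.

```python
# Standard decolorization calculation from absorbance at the RB5 maximum
# (598 nm). The readings below are hypothetical illustrations only.

def decolorization_percent(a_initial: float, a_time_t: float) -> float:
    """Decolorization (%) = (A0 - At) / A0 * 100, measured at 598 nm."""
    return (a_initial - a_time_t) / a_initial * 100.0

a0 = 1.250                       # hypothetical absorbance of untreated RB5
readings = {0.5: 0.334, 2: 0.280, 6: 0.191, 24: 0.170}  # hours -> A(598 nm)

for hours, at in readings.items():
    print(f"{hours:>4} h: {decolorization_percent(a0, at):.1f} % decolorized")
```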
UV-VIS Spectrum

Spectrum analysis showed that the maximum absorbance peak of RB5 was at 598 nm, in the visible-light region (Figure S3). Moreover, the absorbance of the naphthalene and benzene rings was observed in the UV region at 254 and 310 nm, respectively. In test conditions containing HBT, there was a significant peak with a high absorbance value in the 280-310 nm range that was almost absent in the other conditions, which may correspond to the maximum absorbance of the HBT mediator. Compared to RB5 incubated solely with the crude C. gallica laccase, the presence of HBT led to a 14% increase in decolorization (82% vs. 67%) and, accordingly, a decrease in the peak absorbance of RB5 at 598 nm. Note that there was no decrease in absorbance when RB5 was incubated with HBT but without enzyme.

Discussion

Textile manufacturing encompasses a complex cluster of different technologies and machinery [7] that ultimately produce vast amounts of colored clothing but consume vast amounts of water [66,67]. Textile-industry wastewater is discharged with various contaminants, such as dyes, that are associated with toxicity and hazardous effects on both health and the environment. The Reactive Black 5 studied here is a well-known azo dye, a dye class responsible for 70% of global demand [68]. RB5 has a well-defined chemical structure and has been widely used in studies directed toward fungi-driven decolorization of dyestuffs in textile manufacturing effluents [69,70]. Using a laccase-like active cell-free supernatant of the basidiomycete C. gallica to decolorize RB5 could therefore be an efficient solution to minimize RB5 concentrations in wastewater. The experimental design successfully performed here used a screening plan with five factors and indicated that 4 of the 5 variables tested were significant (laccase-like activity, dye concentration, HBT concentration, and pH). For the fifth variable, temperature, we chose to treat it as significant, as it was very close to the p = 0.05 limit (p = 0.073). Daâssi et al. (2012) [71] found that the same five factors as described here had significant effects on RB5 decolorization using a crude laccase from T. trogii. However, whereas the Pareto chart (Figure S1) indicated that laccase-like activity was the most effective factor for RB5 decolorization here, Daâssi et al. (2012) [71] found that with T. trogii, enzyme concentration had a minor effect on RB5 removal. Likewise, the decolorization of two reactive dyes, i.e., Reactive Blue 114 (RB114) and Reactive Red 239 (RR239), was unaffected by the enzyme concentration of a commercial laccase from Aspergillus sp. [72]. Furthermore, our results showed that pH was the second most important factor after laccase-like activity.
pH has been considered a relevant factor that strongly affects the decolorization reaction, as it is decisive for the optimum activity and stability of the enzyme [73,74]. For instance, the optimum pH of the laccase of C. gallica is 3.0 (see Songulashvili et al. (2016) [75]), but the enzyme has only demonstrated stability in the range of pH 4.0 to 7.0 [37]. Our analysis of the pH effect found a negative slope that was very different from that found for the laccase-like activity factor. Here, the increase in pH (from 3 to 6) decreased decolorization efficiency. Dye removal by crude laccase from Trametes sp. was found to be sensitive to small pH changes [71]. When using the whole fungus culture for the decolorization of RB5 by P. eryngii F032, the optimum pH occurred at 3, affording a maximum of 94.56% dye removal, whereas an increase in pH up to 10 decreased RB5 decolorization down to just 25.9% [76]. However, for free and immobilized cells of T. versicolor and Yarrowia lipolytica NBRC 1658, increasing pH values increased the RB5 decolorization rates [77,78]. The Box-Behnken experimental design and RSM then served as a suitable approach to facilitate process modeling and determine the optimized conditions for improving the decolorization rate. The contour plots indicated that increasing the initial dye concentration has a negative effect on RB5 decolorization. Our results are consistent with data from El Bouraie and El Din (2016) [79] and Khan et al. (2021) [80] demonstrating that a high concentration of dye progressively decreases the efficiency of color removal. In a similar way, Bonugli-Santos et al. (2016) [81] showed that increasing dye concentration to over 200 mg L−1 negatively affected RB5 decolorization by Peniophora sp. CBMAI 1063, possibly due to the inhibitory effect of the dye on laccase-like enzymes at high concentrations [82,83]. To increase the efficiency of the laccase treatment, redox mediators are often used [84]. Analysis of the contour curvature showed that a high concentration of HBT increased the percentage rate of decolorization; similar results have been observed elsewhere [86]. Despite the role it plays in improving decolorization rates, HBT may inhibit the enzyme at concentrations over 5 mM [85]. HBT has an inhibitory effect on the crude laccase from T. trogii and the laccase from Trametes sp. strain CLBE55 at concentrations as low as 1 mM [27,87]. Increasing the temperature parameter improved dye decolorization up to 55 °C. This result is in good agreement with the optimum temperature profile of the C. gallica laccase, which showed a correlation between the increase in temperature and laccase activity in the range of 20-60 °C [75]. Using the commercial laccase of Aspergillus oryzae, Wang et al. (2011) [88] reported that changing the temperature from 20 °C to 50 °C had no effect on the decolorization rate, but no explanation was provided for these results. Fernandez et al. (2020) [74] reached a similar conclusion with a crude secretome of Pleurotus sajor-caju, where increasing the temperature led to high levels of dye decolorization, with a maximum at 35-40 °C. However, a study using cauliflower (Brassica oleracea) bud peroxidase to remove dyes (Reactive Red 2, Reactive Black 5, Reactive Blue 4, Disperse Orange 25, and Disperse Black 9) found that the decolorization rate was maximal at 40 °C and decreased at temperatures above 70 °C, probably due to thermal denaturation of the enzyme [89].
The optimized conditions obtained here using RSM for laccase-like activity, dye concentration, HBT concentration, pH, and temperature were 0.5 U mL−1, 25 mg L−1, 4.5 mM, 4.2, and 55 °C, respectively, yielding 82 ± 0.6% color removal after 2 h of incubation. This result is in the same range as that reported for the decolorization of a Tubantin direct dye bath at a similar pH (4) and enzyme concentration (0.6 U mL−1 of laccase-active cell-free supernatant from Trametes sp.), which led to 88% decolorization [87]. However, the reaction parameters there were held at different values, i.e., 75 mg L−1 of Tubantin bath dye, 1 mM HBT, 30 °C, and 20 h [87]. The laccase-active cell-free supernatant of Trichoderma asperellum was also reported to decolorize 90% of 50 mg L−1 RB5 within 24 h in the presence of an HBT mediator [90]. Complete decolorization was achieved on 50 mg L−1 of RB5 within 24 h using purified NADH-dichlorophenol indophenol (NADH-DCIP) reductase and lignin peroxidase (LiP) obtained from Sterigmatomyces halophilus SSA-1575 [91]. In conclusion, the result obtained in this work can be considered very competitive compared to others, especially given that the process used is short (2 h) and does not require any purification steps. Tests were run at a larger scale (500 mL reaction volume) to increase the total quantity of dye treated with the best condition-set and achieved 77.6 ± 0.4% decolorization within 2 h, which compares well with controlled bioreactor processes. Mohorčič et al. (2004) [12] reported that the immobilized mycelium of Bjerkandera adusta removed 95% of the azo dye RB5 (0.2 g L−1) in a 5 L aerated stirred tank bioreactor within 20 days. Pavko (2011) [92] reported that the white-rot fungus T. versicolor was capable of decolorizing 97% of the synthetic dye Orange G in a 1.5 L bioreactor within 20 h. In a 3 L bioreactor, the Pseudomonas putida SKG-1 strain was able to decolorize 98% of the Reactive Orange 4 dye within 60 h [93]. This shows the value of our results, obtained with a 2-h process in non-controlled flask conditions, and the reproducibility of our results between the 1-mL lab scale and the larger 500-mL scale-up. The UV-visible spectrum of RB5, in the range of 200 to 800 nm, indicated a peak absorbance of RB5 at 598 nm together with the presence of benzene and naphthalene ring signatures at 254 and 310 nm, respectively [94,95]. Spectra analysis showed that the addition of HBT decreased the absorbance peak of the RB5 dye only in the presence of the laccase-like active cell-free supernatant of C. gallica, demonstrating that the laccase mediator system efficiently decolorizes RB5. Our results confirmed those of Tavares et al. (2008) [96], who showed that the presence of the 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) mediator reduced the maximum absorbance peak of the dye and that decolorization occurred when both the mediator and the enzyme were added to the reaction mixture.

Conclusions

The statistical design-of-experiments approach employed here showed that 4 of the 5 factors studied were key factors for the degradation of the RB5 dye. Among all these parameters, the main factor was the concentration of laccase activity used in the process. This is a critical point, as the price of the enzyme dictates the economic viability of the process. The second most critical factor was pH, probably due to the biochemical properties of the enzymes (pH optimum and pH stability). Overall, the present study confirmed that the laccase cell-free supernatant of C.
gallica was active on RB5 dye using the mediator HBT, and that an enzymatic treatment of 2 h was sufficient to achieve up to 82 ± 0.6% decolorization in a lab-scale reaction volume of 500 mL. Further scale-up experiments are required to demonstrate that this treatment can be made amenable to pilot scale and to future industrial applications. This work provides proof-of-concept for the biodegradation of textile-industry dyes as a promising alternative route to physicochemical treatments. Further experiments could usefully measure the biochemical oxygen demand (BOD) and chemical oxygen demand (COD) of the treated dye solution to evaluate the toxicity of the residual dye. Processes using cheaper and eco-friendly alternative plant-origin mediators should be tested in an effort to engineer a more sustainable system.

Supplementary Materials: The following are available at https://www.mdpi.com/article/10.3390/microorganisms10061137/s1. Figure S1: Pareto chart of the standardized effects for RB5 decolorization (after 120 min of reaction) as a function of the coded values of factors. Figure S2: Shift of RB5 color from dark blue to intense yellow after 2 h of incubation. Untreated dye: 25 mg L−1 RB5 dye, 4.5 mM HBT, pH 4.2, and temperature 55 °C. Treated dye: 0.5 U mL−1 laccase-like activity, 25 mg L−1 RB5 dye, 4.5 mM HBT, pH 4.2, and temperature 55 °C. Figure S3: UV-VIS spectrum of RB5 showing the original dye (25 mg L−1) (dark blue), RB5 with 4.5 mM HBT (red), RB5 with the laccase-like cell-free supernatant from C. gallica (green), RB5 with both 4.5 mM HBT and the laccase-like cell-free supernatant from C. gallica (purple), HBT with the laccase-like cell-free supernatant from C. gallica (light blue), and 4.5 mM HBT (orange) after a 24 h incubation period. Table S1: Decolorization conditions.
Forest Compositional Changes after a Decade of Emerald Ash Borer: Emerald ash borer is an invasive pest in North American forests. Ecological impacts of ash mortality from emerald ash borer are wide-ranging, including shifts in insect communities and wildlife behavior. Additionally, the loss of ash from forests may have important implications regarding plant succession. Surveys of overstory, midstory, and understory trees within forests in northeastern Indiana, the Lower Peninsula of Michigan, and northwestern Ohio were conducted to quantify the change in forest composition over a 10 year period. Interpolation of ash dominance illustrated an inversion of live and dead ash values between 2007 and 2017. Even though more than 83% of overstory live ash basal area was lost across the study area, green ash was the most abundant midstory and understory species representing regeneration. Additionally, the loss of ash from many of the sites resulted in compositional changes that were greater than merely the subtraction of ash. Due to the relatively large number of forest types with which ash species are associated, the loss of ash will have broad ecological consequences, including on community composition.

Introduction

Introduced in the 1990s from Asia, emerald ash borer (Agrilus planipennis Fairmaire) has become a well-established insect pest in North American forests [1,2]. Since that introduction and its subsequent discovery in 2002, emerald ash borer has spread throughout much of eastern Canada and the United States, causing widespread decline and mortality in ash (Fraxinus spp.) [3]. Efforts have been made to improve biological and silvicultural control methods, as well as urban-focused chemical control techniques, e.g., [4][5][6]. However, those improvements have had little impact on emerald ash borer, as evidenced by its continued spread and infestation, as well as continued ash mortality. Ecological impacts of emerald ash borer in infested forests are potentially broad, ranging from changes in wildlife behavior and insect abundances to successional trajectories and nutrient cycling [7][8][9]. The breadth of these impacts is an indication of the relative importance of ash species within North American forests. Ash (mostly black [F. nigra Marshall], green [F. pennsylvanica Marshall], and white [F. americana L.]) are defining species for eight different forest types and are commonly associated with 33 other forest types in North America [10]. Advance regeneration (i.e., seedlings in place before a disturbance) in forests infested with emerald ash borer will be key to the subsequent recruitment of individuals. Densities of ash and other species within forests are variable and will likely relate to overstory composition [11,12]. Similar losses of a single genus or species have occurred with major consequence in eastern North American forests. The introduction of chestnut blight resulted in the removal of American chestnut (Castanea dentata [Marshall] Borkh.) from much of North America, and Dutch elm disease led to the loss of elms (Ulmus spp.) as a genus from many areas [13]. Forest ecosystems are dynamic, with the scale and intensity of genus or species loss variable by forest location, composition, and other external factors. Additionally, gap formation and structural changes as a result of group or individual tree losses are essential to defining the vegetation mosaic within a forest [14]. However, genus removal from a forest may extend the impact beyond forest mosaic maintenance.
Since ash as a genus is an important or associated component in many different forest types, the density of ash in those forests will vary. The density of emerald ash borer in any given forest is variable as well [15]. The objectives of this study were to (1) survey and characterize forest composition at sites in Indiana, Michigan, and Ohio, (2) quantify forest changes over a 10 year period of emerald ash borer infestation, and (3) test the hypothesis that forests with greater densities of both ash and emerald ash borer in 2007 would exhibit greater compositional changes over that decade.

Materials and Methods

Forty-four sites were originally selected and established in 2007 based on collaboration with state agencies (state parks, forests, and game areas), accessibility (permission to use and physically access), visual dominance of ash (rapid visual surveys conducted at numerous sites), and presence of emerald ash borer (prior trapping studies or state agency detection) [15]. Sites were in northeastern Indiana (n = 7), the Lower Peninsula of Michigan (n = 30), and northwestern Ohio (n = 7), USA. After the 2007 surveys were conducted, sites were categorized as low emerald ash borer density (≤87 adults) or high density (>87 adults) based on the number of adults captured on eight different trap types, with the threshold defined as a natural break in the data by Marshall et al. [15] (Figure 1). In both 2007 and 2017, five locations were randomly selected within a forest stand and used as center points for surveys. Available center point locations for random selection were originally associated with emerald ash borer traps used in 2007 to quantify densities [15]. Locations for plot center points were selected to avoid overlap during basal area measurements.

Overstory Survey

Basal area was measured in July 2007 with a 10-factor Cruz-All angle gauge (Forestry Suppliers, Jackson, MS, USA) [15] and in August and September 2017 with a 10-factor cruising prism (Forestry Suppliers, Jackson, MS, USA). Individuals ≥ 8 cm dbh were identified to species. Basal area (m²/ha) was calculated for live ash, dead ash, and the total forest, as well as the percent of total forest basal area comprised of live and dead ash.

Understory and Midstory Survey

In August and September 2017, understory and midstory tree individuals were counted. At each plot center, a 1 m² quadrat and a 5 m² quadrat were established. Within the 1 m² quadrat, all understory trees (≤2 m in height) were counted and identified to species. Within the 5 m² quadrat, all midstory trees (>2 m in height, <8 cm dbh) were counted and identified to species.

Data Analysis

Relative dominance was calculated as the basal area of each ash species at a site divided by the total basal area of the site. An analysis of variance (ANOVA) was used to compare the relative dominance of black, green, and white ash, blocked on state and nested in sampling years, with a Tukey's HSD post-hoc test. Relative importance values were calculated for each species at a site as the sum of relative frequency (number of plots/total number of plots), relative density (number of individuals/total number of individuals), and relative dominance (basal area/total basal area) divided by 3; a minimal sketch of this calculation is given below. Inverse distance weighting (IDW) interpolation was used to map the distribution of relative dominance (percent basal area) of live and dead ash from the 2007 and 2017 surveys.
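As a concrete illustration of the relative importance value just described, the sketch below computes it for one site from per-species plot tallies, stem counts, and basal areas. The calculation follows the stated formula (mean of relative frequency, relative density, and relative dominance); the species numbers used are hypothetical, and the paper's own analyses were done in R rather than Python.

```python
# Relative importance value (IV) per species at a site, following the formula
# above: IV = (rel. frequency + rel. density + rel. dominance) / 3.
# The input numbers are hypothetical examples, not survey data.

species_data = {
    # species: (plots occupied out of 5, individual count, basal area m^2/ha)
    "green ash":    (5, 12, 3.1),
    "sugar maple":  (4, 9, 2.4),
    "black cherry": (2, 3, 0.8),
}

n_plots = 5
total_count = sum(d[1] for d in species_data.values())
total_ba = sum(d[2] for d in species_data.values())

for sp, (plots, count, ba) in species_data.items():
    rel_freq = plots / n_plots
    rel_dens = count / total_count
    rel_dom = ba / total_ba
    iv = (rel_freq + rel_dens + rel_dom) / 3
    print(f"{sp:12s} IV = {iv:.3f}")
```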
Pearson's correlation was used to test for relationships between understory and midstory counts (ash and pooled other species) and 2017 overstory live ash, dead ash, and pooled other species basal area measures (m²/ha). A dissimilarity ratio was calculated for each forest as d_total / d_ashRemoved, where d_total was the Bray-Curtis dissimilarity of forest overstory composition between 2007 and 2017, and d_ashRemoved was the Bray-Curtis dissimilarity between the 2007 composition with ash removed and the 2017 composition (a short code sketch of this ratio is given at the end of this section). Bray-Curtis dissimilarity provided a robust measure of ecological "distance" and represented a compositional "distance" between sample years at a given site [16]. Dissimilarity ratio values greater than 1.0 represented dissimilarity (i.e., distance) in forest community composition between sample years that was greater than the subtraction of ash, while values of 1.0 or less represented community compositional change as merely the subtraction of ash. Bray-Curtis dissimilarity values were calculated using the vegdist function in the vegan package of R [17]. Chi-square tests were used to test for independence in shifts of the most important overstory species related to ash and emerald ash borer density categories. All statistical analyses were conducted with α = 0.05 in R version 4.0.2 [18].
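The dissimilarity ratio defined above can be sketched in a few lines. The study used vegan::vegdist in R; the version below uses the equivalent SciPy Bray-Curtis distance, and the two small community vectors are hypothetical per-species basal areas for one site, not survey data.

```python
# Dissimilarity ratio d_total / d_ashRemoved for one site (sketch).
# Vectors are per-species basal areas (hypothetical); the paper computed
# Bray-Curtis with vegan::vegdist in R, SciPy gives the same distance.
import numpy as np
from scipy.spatial.distance import braycurtis

species = ["green ash", "sugar maple", "red maple", "black cherry"]
ba_2007 = np.array([3.3, 2.0, 1.5, 0.6])  # hypothetical 2007 basal areas
ba_2017 = np.array([0.5, 2.6, 2.3, 0.9])  # hypothetical 2017 basal areas

d_total = braycurtis(ba_2007, ba_2017)

ba_2007_no_ash = ba_2007.copy()
ba_2007_no_ash[species.index("green ash")] = 0.0  # remove ash from 2007
d_ash_removed = braycurtis(ba_2007_no_ash, ba_2017)

ratio = d_total / d_ash_removed
print(f"ratio = {ratio:.2f}")  # > 1.0: change beyond merely losing ash
```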
Results

While five plot locations were selected and surveyed in each forest stand, data analysis was conducted with the stand as the experimental unit, and plot values were pooled within each stand (labeled as site due to a single stand per forest). ANOVA identified differences in the relative dominance of the three ash species (F2,336 = 7.70, p < 0.001); however, the post-hoc difference was only between green and black ash dominance. While this difference did exist, further analysis pooled ash as a genus because (1) green-white and black-white relative dominance did not differ, and (2) black ash was the single ash species at only one site. Across all sites, ash ranked in the top-5 of relative importance values at 37 of the 44 sites (84.1%). However, ash was the top-ranked species at only 14 sites (31.8%). There was a reduction in mean live ash basal area from 3.29 to 0.54 m²/ha across the entire study area, with an increase in mean dead ash basal area from 0.23 to 4.91 m²/ha. The IDW interpolation maps for portions of Indiana, Michigan, and Ohio illustrate the patchy relative dominance of ash across the sampling sites in 2007 (Figure 2A,B). For most of the study area, live ash accounted for less than 40% of basal area in 2007, with relatively low numbers of standing dead ash in 2007 (Figure 2B, Table 1). These values switched in 2017, with relatively low numbers of live ash and standing dead ash becoming much more common (Figure 2C,D, Table 1). No sites were included in southwest Michigan, which limits the accuracy and interpretation of the IDW interpolation in that portion of the study area (Figures 1 and 2).

In the understory, green ash was the most abundant species, accounting for 52.8% of all seedlings counted across the study area. Black and white ash were substantially less abundant, accounting for 4.1% and 1.9%, respectively. All plots surveyed had ash seedlings. The top-3 non-ash understory species were sugar maple (Acer saccharum Marshall) at 19.1% of individuals, black cherry (Prunus serotina Ehrh.) at 2.2%, and American elm (Ulmus americana L.) at 2.1%. Green ash was also the most abundant midstory species, accounting for 30.4% of individuals. As with the understory, black and white ash were far less abundant, accounting for 3.1% and 0.7% of midstory individuals, respectively. Black cherry and American elm were among the top-3 non-ash species in the midstory (5.1% and 4.2% of individuals, respectively). Red maple (A. rubrum L.) was the most abundant non-ash midstory species (6.8% of individuals). Counts of understory ash and other tree species pooled were positively correlated (r = 0.31, p = 0.044). However, counts of midstory ash and other tree species were not correlated (r = −0.07, p = 0.630). Understory ash counts were not correlated with basal area measurements in 2017. However, understory species pooled other than ash were positively correlated with the 2017 pooled other species basal area measurement (r = 0.32, p = 0.035). Midstory ash counts were positively correlated with 2017 dead ash basal area (r = 0.37, p = 0.014).

In 18 of the surveyed forests, the overstory composition did not change beyond the subtraction of ash (dissimilarity ratio ≤ 1.0), while shifting of the composition greater than merely the subtraction of ash (dissimilarity ratio > 1.0) occurred at 26 forests (59%) (Figure 3). However, the shift was independent of the 2007 ash density categories (X²₁ = 0.70, p = 0.402) and emerald ash borer density categories (X²₁ = 3.13, p = 0.077). In eight single forest stands, the most dominant overstory species were boxelder (A. negundo L.), red maple, sugar maple, standing dead black ash, black walnut (Juglans nigra L.), swamp white oak (Quercus bicolor Willd.), and sassafras (Sassafras albidum [Nutt.] Nees). In two stands each, the most dominant overstory species were balsam fir (Abies balsamea [L.] Mill.), silver maple (A. saccharinum L.), shagbark hickory (Carya ovata [Mill.] K. Koch), quaking aspen (Populus tremuloides Michx.), and black cherry. Standing dead green ash became the most dominant species in nine forests. Removing standing dead ash from the relative importance value calculations, those dead-ash-dominated forests added one forest each of balsam fir, eastern cottonwood (P. deltoides W. Bartram ex Marshall), quaking aspen, black cherry, and arborvitae (Thuja occidentalis L.); two forests of black walnut; and three forests of red maple. Seventy-one percent of forests with ash as the most important species in 2007 shifted to standing dead ash, with most of those becoming dominated by standing dead green ash. In one forest, the 2017 dominant species did not occur among the five most relatively important species in 2007, shifting from green ash to sugar maple; however, sugar maple was among the ten most abundant species originally (Supplementary Tables S1 and S2).

Discussion

The introduction of emerald ash borer to North America has resulted in the major loss of ash individuals within numerous forests, e.g., [12]. Loss of these stand-defining species would be expected to lead to compositional and structural changes within those forests. Many of the forests in the study presented here had compositional changes greater than merely the subtraction of ash, even though ash was the top-ranked species by basal area at only 32% of sites. Such losses of stand-defining species can have a variety of effects on biotic and abiotic characteristics of forests [19]. IDW interpolation provided a graphical representation of the change that occurred across the study sites. The inversion of abundance between live and dead ash is striking; however, there was not an exact conversion from live to dead.
While not measured, at many sites there was an abundance of coarse woody debris made up of downed ash individuals. This likely reduced the standing dead basal area measures at several sites. If an intermediate measurement between 2007 and 2017 had been collected, there likely would have been a clearer pattern of ash loss and subsequent accumulation of coarse woody debris, as there are interactions between disturbance and time across different forest types [20,21]. The locations with maximum live ash basal area in 2017 were isolated in the IDW map, with ash reduced to well below 10% of forest basal area. Across the study area, live ash was reduced by approximately 84%. This value of loss for the entire study area is lower than the 2008 values presented by Klooster et al. [12]. Additionally, Marshall et al. [22] estimated that 7.8% of standing ash were dead individuals in 2006 across a study area similar to the one presented here, which makes it reasonable that in 2007, 4.4% of standing ash basal area was dead, given the potential for trees to fall and become coarse woody debris. Since understory and midstory counts were not performed in 2007, there is no measure of regeneration change between years. However, other studies have quantified the changes in ash understories, which provide context for what was observed in these 44 sites. Rapid decreases in ash seedling densities over the course of several years were reported in studies performed in 2008 and 2009, where no new seedlings were observed at most sites [12,23]. However, all sites and plots within sites surveyed in 2017 had ash seedlings. Even allowing for different survey methods, these values are inconsistent with those presented by those previous studies [12,23]. The results presented here from 2017 were similar to those reported by Burr and McCullough [11], who found ash seedlings in relatively high densities at sites close to the introduction epicenter for emerald ash borer in southeast Michigan. Also, the most abundant understory and midstory non-ash species were similar to Burr and McCullough [11], with sugar maple, black cherry, and American elm dominating. Flower et al. [8] reported that maple and elm species had high growth rates in forests where ash was lost. While ash regeneration is still occurring and individuals are established in the understory and midstory, overstory compositional changes have occurred with the loss of ash. Removing a genus from the forest will inherently lead to compositional change, shifting dominance to different species. However, the intensity of that change varied across study sites. At 18 sites, dissimilarity between 2007 and 2017 was accounted for by merely subtracting ash from the overstory. However, the remaining sites had dissimilarity ratios indicating a compositional change greater than the subtraction of ash. These forests shifted to different dominant species. While the understory species were similar to previous studies, the overstory composition shifts were to a variety of species. With over 80% of live ash lost to emerald ash borer between the 10 years of surveys, there was a clear change in forest composition. Whether costs are calculated in economic or ecological terms, e.g., [8,24,25], this loss has resulted in a substantial and devastating impact on the forests within the North American Great Lakes region. Ash has been retained in the overstory at many sites surveyed, but the areas where ash basal area accounts for more than 10% of the forest are isolated.
Due to the dominance and importance of ash in several different forest types, the loss of ash will have long-term successional and functional repercussions [8]. Forests will respond to the loss of ash; however, the inherent resiliency of each forest to disturbance (e.g., silvicultural activities, ice damage, wind-throw, disease) will be weakened [26,27].

Conclusions

Loss of ash to emerald ash borer from many North American forests has been a growing concern since the pest insect was discovered. Interpolation of ash dominance between the 2007 and 2017 surveys illustrates a clear loss of ash, with only a few isolated areas where ash basal area accounts for more than 10% of the forest. Additionally, regeneration of ash has occurred in these forests, but with the removal of over 83% of ash basal area, the seed source is in peril. The substantial loss of ash across the Great Lakes region has caused overstory compositional shifts that will likely result in changes in successional trajectories. Several different species have moved into the dominant position in these communities; however, in many of the forests, standing dead ash is now the dominant basal area component. Those new dominant species were not consistent across the study area. These forests are responding to the removal of ash as a key component of the forest type. Potentially, such a loss of a key genus will reduce the inherent resiliency each forest has to future disturbances.
Host Feasibility Investigation to Improve Robustness in Hybrid DWT+SVD Based Image Watermarking Schemes

Today, we face different approaches to enhance the robustness of image watermarking schemes. Some of them can be implemented, but others, in spite of money, energy, and time spent on programming, would fail for lack of a strong feasibility study plan before implementation. In this paper, we present a rational feasibility study before implementation of an image watermarking scheme. We develop our feasibility study by proposing three types of deduction: theoretical, mathematical, and experimental. Based on the theoretical deduction, it is concluded that the "S" coefficients in the second level of Singular Value Decomposition (SVD) offer high robustness for embedding watermarks. To prove this, a mathematical deduction composed of two parts is presented, and the same results were achieved. Finally, for the experimental deduction, 60 different host images in both normal and medical categories were exposed to geometric and signal processing attacks, and the stability of the "S" coefficients in the first and second levels of SVD was compared.

Introduction

Digital image watermarking is introduced to protect the digital medium from illegitimate access and illegal alteration [1][2][3]. To achieve the required functionalities in the target application, special care has to be taken so that the embedded watermark can resist attacks and manipulations [4]. Various techniques have been introduced in digital image watermarking. In [5], different techniques of image watermarking are divided into spatial and transform domain, and it is mentioned that the transform domain techniques provide higher robustness and imperceptibility than spatial domain techniques for embedding watermark images. On the other hand, in [6], among different transform domain techniques, DWT is referenced as a superior transform domain technique for image watermarking, while the combination of this technique with other transform domains can compensate for the flaws of using each technique alone. In [7][8][9][10][11], hybridization of DWT and SVD is considered an efficient combination to increase the resistance of the watermarking scheme against signal processing and noise attacks. However, depending on the type and intensity of the attack, even this hybrid technique is not sufficiently robust.

In this paper, we purely investigate host images to find regions of interest representing the least distortion against geometric and signal processing attacks after Discrete Wavelet Transform (DWT) and Singular Value Decomposition (SVD) transformations. We call this investigation a feasibility study intended to find the regions of interest and, consequently, to improve robustness. The aim is to prove the idea of increasing the robustness of hybrid DWT+SVD schemes by selecting the regions of interest in the host image. In other words, the ROIs are selected by analyzing the host image and investigating the theories that lead to high robustness.
For this purpose, a feasibility study is developed in three phases. In the first phase, the theoretical deduction supporting the basic idea that the "S" coefficients in the second level of SVD can enhance robustness is investigated. In the second phase, mathematical deductions proving the dominance of these areas are developed. Finally, the suitability of the candidate areas for embedding the watermark is investigated by exposing the host images to 9 types of geometric and signal processing attacks. This paper is organized as follows: first, a description of SVD and DWT, the structure of an image when it is decomposed by SVD, and the method used to find the ROIs is presented. Second, the theoretical deductions and preliminaries for enhancing image resistance to geometric and noise attacks are described. Third, this theoretical deduction is proved mathematically. Finally, in order to demonstrate the superiority of the "S" coefficients in the second level of SVD, a wide range of medical and normal images is exposed to the 9 most common types of image processing and geometric attacks, and the stability of the "S" coefficients in the first and second levels of SVD is compared, to confirm both the theoretical and mathematical deductions experimentally.

As mentioned before, there is no specific implementation in this paper. In fact, we state the way we approached the idea of the stability of the "S" coefficients against attacks, which consequently led to their selection as the regions of interest. In this paper, we simply state an approach intended to ensure an economic implementation without wasting time, money, and energy. The whole implementation of the proposed watermarking process is presented in [12].

An Overview of SVD and the Proposed ROIs

Singular value decomposition (SVD) decomposes a matrix into left and right singular vectors and a diagonal matrix of singular values. If X is an m × n matrix, it can be written as X = U S V^T, where U is an m × m orthogonal matrix, V is an n × n orthogonal matrix, and S is an m × n matrix whose first r diagonal entries are the nonzero singular values σ1, σ2, ..., σr, with all other entries zero. The singular values are ordered as σ1 ≥ σ2 ≥ ... ≥ σr. The expression X = U S V^T is called the singular value decomposition of X. This definition is illustrated in Figure 1.

Regarding an image as a matrix, it can be decomposed into the U and V singular vectors and the S singular values. In image decomposition by SVD, U and V carry the whole geometric specification of the image, while the luminance is carried by S. Since the S coefficients are geometry-free, they are less affected by geometric attacks like rotation and scaling. However, they still cannot resist several geometric attacks and a range of signal processing attacks [8,[14][15][16]. In this research, the aim is to find regions among the S coefficients that not only resist geometric attacks but are also less affected by signal processing changes.

To find the regions of the host image with high resistance against distortion, we need to find the highest-energy parts of the image after the SVD transform. Since the highest-energy portion of the S coefficients is located at S(1,1), blocking the image is a good idea.
After one-level DWT, the host image is divided into n × n blocks, where n is smaller than the watermark size. The S(1,1) value from each block is collected in a new matrix, and then a second level of SVD is imposed on this matrix. The S coefficients of this matrix are considered the ROIs, and we prove that these secondary coefficients are more robust and stable than the S coefficients of the first level of SVD after one-level DWT. Figure 2 shows the process in detail; a code sketch of this pipeline follows at the end of this section.

The scheme can be performed for every size of n × n host image, and the block size depends on the size of the chosen watermark image. For example, for a 64 × 64 watermark image with one level of DWT applied to it before embedding, the size of the watermark becomes 32 × 32. Hence, we need 32 × 32 blocks to hide the watermark. Considering that the host image size is 512 × 512, after one-level DWT it is changed to 256 × 256, and to obtain 32 × 32 blocks we need to divide the 256 × 256 subband into 8 × 8 blocks, giving 32 × 32 places for inserting the watermark. As a result, the optimum block size is selected as 8 × 8; for other sizes of cover images, the same calculations need to be performed.

The rest of the paper is devoted to proving the predominance of the S coefficients in the second level of SVD, compared with their peers in the first level of SVD, as a desirable host for information hiding. In the following, the three phases of our deductions (theoretical, mathematical, and experimental) are presented.
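To make the pipeline of Figure 2 concrete, the sketch below extracts the proposed ROIs from a 512 × 512 host: one-level DWT, 8 × 8 blocking of the LL subband, collection of the S(1,1) value of each block, and a second SVD on the collected matrix. It is a minimal illustration of the scheme described above, assuming the PyWavelets and NumPy libraries; the paper's own implementation is given in [12].

```python
# Sketch of ROI extraction: DWT -> 8x8 blocks -> S(1,1) matrix -> 2nd SVD.
# Assumes a 512x512 grayscale host image; uses PyWavelets and NumPy.
import numpy as np
import pywt

host = np.random.rand(512, 512)             # placeholder for the host image

ll, (lh, hl, hh) = pywt.dwt2(host, "haar")  # one-level DWT -> 256x256 LL band

n = 8                                       # block size (512x512 host, 64x64 mark)
rows, cols = ll.shape[0] // n, ll.shape[1] // n
s11 = np.empty((rows, cols))                # matrix B of first singular values

for i in range(rows):
    for j in range(cols):
        block = ll[i * n:(i + 1) * n, j * n:(j + 1) * n]
        s11[i, j] = np.linalg.svd(block, compute_uv=False)[0]  # S(1,1)

# Second-level SVD: its S coefficients are the proposed ROIs for embedding.
u_b, s_b, vt_b = np.linalg.svd(s11)
print(s11.shape, s_b[:5])                   # 32x32 block matrix, leading ROIs
```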
Since SVD decomposes the image matrix to its constituent elements as singular values and left and right singular vectors, a greater degree of spreadness will be achieved by executing this transform technique.DWT also performs a multiresolution technique in which the image can be defined based on each frequency subband [5].Thus, the watermark image can be added to each subband frequency as a small noise.Adding the watermark as small noise to high frequency subbands is becoming recognizable by human visual system (HVS), because they have smaller values.In contrast, low frequency sub band includes higher values so that the effect of changes can rarely be seen in them.Figure 3 shows the values of an image after one level wavelet decomposition in each frequency sub band and how adding a small noise after DWT is scattered in the whole of image by checking the values. Separability. According to definition of [13], separability is the difference between the correlation of watermark and the highest correlation of the watermark after attack.Higher separability leads to higher robustness.It is referred to a parameter "S" as separability to measure the robustness. where "C" is regarded as the correlation between the embedded watermark and the most prominent coefficient of the attacked watermarked and "CK" is the correlation between Kth random Watermarked and the most prominent coefficient of the watermarked data after attack as shown in Figure 4.The more separability, the more robust scheme will be offered. In other words, the watermark image should be hidden to the most prominent selected coefficient of the cover image, so that less effect can be imposed by attacks. To obey this matter and also to scatter the watermark, the cover image will be transformed by DWT, and selected sub bands divided to n * n non overlapped blocks (n is a power of 2).Then SVD transform will be performed for each block.Since now, spreading the watermark is performed.In the second step we have to find the most prominent coefficients to hide the watermark.According to the SVD definition, singular values are less affected by geometric and 0.9 0.9 0.9 0.9 As it is shown in Equation ( 3), singular values at the (1, 1) position are larger than the other singular values.Hence, the first singular values at the mentioned position will be chosen and kept in an ultimate matrix.This matrix is built from the all singular values of each n * n block at (1, 1) position.Thus, it includes the highest energy compaction and is less influenced by synchronization.In the next step, the SVD transform will be performed for the second time.The singular values produced by this transformation also inherit the dominant characteristics of the last level. Stability of SVD Singular Values.After SVD transform, the singular values of an image matrix are invariant to transpose, flip, rotation, scale, and translation [19,20].This means that after mentioned attacks, the singular values are less affected by them.Then, they can be good candidates for embedding the watermark image.It is expected that these characteristics will be inherited by each level of SVD decomposition.Thus, it can be concluded that the singular values at higher level of SVD would be more robust to geometric and signal processing attacks. 
Mathematical Host Feasibility Deduction

In order to prove the robustness of the selected areas, which are the "S" coefficients of the second level of SVD, a mathematical justification is conducted. In [21,22] it is shown that blocking the cover image increases the robustness of the watermark. The mathematical proof in this section consists of one deduction, which can be divided into two parts. The first part proves that the S coefficients in SVD decomposition are more stable when blocking is performed on the cover, in comparison to performing SVD without blocking. The second part proves that performing SVD on each block of the first part, gathering the S(1,1) values in a separate matrix, and performing SVD on this new matrix increases the stability of the S coefficients in the second SVD. Stability means that, when the image is exposed to attacks, the values of the S(1,1) coefficients change little and are not substantially modified. As a result, the S coefficients in the second level of SVD are more robust against small perturbations than the S coefficients when only one level of SVD is performed on the host image.

Deduction: first consider A as an image matrix. Decomposing A by singular value decomposition (SVD), we have

A = U_A S_A V_A^T (4)

If the watermark is hidden in the S coefficients, then we have

A* = U_A S_A* V_A^T (5)

where A* is called the watermarked picture and U_A and V_A are the singular vectors of image A. The watermark is hidden in the S coefficients based on the following formula:

S_A* = S_A + α S_W (6)

where S_A holds the singular values of the decomposition of image A, S_W holds the singular values of the decomposition of watermark W, and α is the scaling factor. Substituting (6) for S_A* in (5) gives

A* = U_A (S_A + α S_W) V_A^T (7)

This is equal to

A* = U_A S_A V_A^T + U_A (α S_W) V_A^T (8)

The term α S_W is regarded as ΔS_A, the variation of the S coefficients of the image after addition of the watermark. With respect to imperceptibility, α S_W, i.e., ΔS_A, should be as small as possible; for this purpose, the scaling factor α is multiplied by the S coefficients of the watermark. If these variations are very small, the limit of (8) as ΔS_A → 0 is a very small value ε_A, but not zero. Thus, we have

lim_{ΔS_A → 0} A* = U_A S_A V_A^T + ε_A (9)

On the other hand, if image A is divided into four blocks A1, A2, A3, and A4 such that

A = A1 + A2 + A3 + A4, with each block decomposed as Ai = U_Ai S_Ai V_Ai^T (10)

then, gathering the S(1,1) values of each of S_A1, S_A2, S_A3, and S_A4 into another matrix called B and decomposing matrix B into its singular values, we have

B = U_B S_B V_B^T (11)

After hiding the watermark, B is changed to B*, and the decomposition of matrix B* is

B* = U_B S_B* V_B^T (12)

Since the watermark is hidden in the singular values, i.e., the S_B coefficients, we have S_B* = S_B + ΔS_B with ΔS_B = α S_W, so (12) can be written as

B* = U_B (S_B + ΔS_B) V_B^T = U_B S_B V_B^T + U_B (ΔS_B) V_B^T (13)

When ΔS_B → 0, B* differs from B only by a very small value ε_B. Expressed mathematically:

lim_{ΔS_B → 0} B* = U_B S_B V_B^T + ε_B (14)

Comparing (9) to (14): since B is built from A (B ⊆ A) and its entries come from ⋃_{i=1}^{4} S_Ai(1,1), the variation of B is less than the variation of A. Both S_A and S_B are singular-value matrices: S_A is made of the singular values of A, and S_B is made of the second-level singular values obtained from the S(1,1) entries of the blocks of A. The variation and sensitivity of S coefficients against perturbation by image processing and geometric attacks is very small, based on [23]. Since (14) shows that the variation of S_B is less than that of S_A, and S_A itself is stable against small perturbations, the variation of S_B under attack is believed to be even smaller. As a result, S_B is more robust against signal processing and geometric attacks in comparison to S_A.
In this deduction it is proved that the variations and constraints of the B matrix are less than the variations and constraints of the A matrix. Hence, inserting the watermark in the B matrix, which contains the high-energy parts of the S coefficients of A after dividing it into blocks, is more robust than inserting the watermark in the singular values of A alone. The reason is that the amount of watermark information that should be hidden in A and in B is the same, and because B is smaller than A, the watermark is distributed over relatively more of the B coefficients in the SVD. After inverse SVD (ISVD) of matrix A, we only have U_A (S_A + ΔS_A) V_A^T, while for B the modified singular values are redistributed back into the S(1,1) entries of every block, so each block of A carries part of the change. Hence the watermark is more scattered in B than in A, and scattering increases the robustness. This deduction can be extended to more than four blocks in the blocking of the cover image. Later on, we will also prove this again based on experimental results.

Inherited Specification of Singular Values in SVD2

After SVD transformation, the singular values have specific characteristics which make them consistent under some geometric and signal processing distortions, as follows [19,20]:

(i) Transpose Invariance: Matrix A and its transpose A^T have the same nonzero singular values.
(ii) Flip Invariance: Matrix A, the row flip A_rf, and the column flip A_cf have the same nonzero singular values.
(iii) Rotation Invariance: Matrix A and A_r (A rotated by an arbitrary angle) have the same nonzero singular values.
(iv) Scale Invariance: If we scale up A by L1 times in rows and L2 times in columns simultaneously, then for every nonzero singular value σ of A, σ√(L1 L2) is a nonzero singular value of the scaled-up image. The two images have the same number of nonzero singular values.
(v) Translation Invariance: If A is expanded by adding rows and columns of black pixels, the resulting matrix A_e has the same nonzero singular values as A.
(vi) Transpose Invariance at the second level: Matrix A and its transpose A^T have the same nonzero singular values in the second level of SVD.

Since SVD2 has the same qualities as SVD1, all the specifications of SVD2 are inherited from SVD1. Here, the transpose invariance specification is proved, and the other specifications are referenced to [23]. We now prove that the B matrix, made up of the S(1,1) values of each n × n block of A, inherits the transpose invariance specification from the A matrix. In other words, the transpose invariance specification of the A matrix carries over to the B matrix.

Proof. Consider matrix A and decompose it into its singular values and vectors, so that we have

A = U_A S_A V_A^T (15)

If we decompose its singular values by a second SVD, S_A = U_B S_B V_B^T, then (15) becomes

A = U_A (U_B S_B V_B^T) V_A^T (16)

Transposing matrix A then gives

A^T = V_A V_B S_B^T U_B^T U_A^T (17)

which can be grouped as

A^T = V_A (V_B S_B^T U_B^T) U_A^T (18)

Since S_B is a diagonal matrix, and every diagonal matrix equals its transpose, S_B^T = S_B, and (18) can be written as

A^T = V_A (V_B S_B U_B^T) U_A^T (19)

But considering (15) and (16), the statement in the parentheses, V_B S_B U_B^T, is exactly the decomposition of S_A^T, with the same second-level singular values S_B. Hence, (19) can be written as

A^T = V_A S_A^T U_A^T (20)

Thus, it is proved that the singular values in the B matrix are also transpose invariant: A and A^T share the same second-level singular values S_B. All the mentioned specifications of the S coefficients are extensible and can be applied to the S coefficients of the B matrix, due to the identical quality of the S coefficients in both the A and B matrices.
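The transpose invariance proved above is easy to verify numerically. The sketch below, a small NumPy check under the same blocking scheme (not the authors' code), computes the second-level singular values of a random matrix and of its transpose and compares them.

```python
# Numeric check: A and A^T share the same second-level singular values.
import numpy as np

def second_level_s(a, n=8):
    """Collect S(1,1) of each n x n block, then return the SVD of that matrix."""
    rows, cols = a.shape[0] // n, a.shape[1] // n
    b = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            block = a[i * n:(i + 1) * n, j * n:(j + 1) * n]
            b[i, j] = np.linalg.svd(block, compute_uv=False)[0]
    return np.linalg.svd(b, compute_uv=False)

rng = np.random.default_rng(2)
a = rng.random((64, 64))

s_b = second_level_s(a)
s_b_t = second_level_s(a.T)
print(np.allclose(s_b, s_b_t))  # True: transpose invariance holds
```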
Experimental Results of the Host Feasibility Deduction

This section presents the experimental deduction. It is, in fact, the result of a feasibility test to prove that the selected points for inserting the watermark, the "S" coefficients of the second level of SVD, are suitable and robust for embedding, and that their robustness is greater than that of the "S" coefficients in the first level of SVD. After proving the idea of enhancing the robustness (by using the second level of SVD) theoretically and mathematically, an experimental test was conducted. For this purpose, 60 pictures from two databases of medical and normal images were exposed to the most predominant geometric and signal processing attacks to experimentally demonstrate that the "S" coefficients in the second level of SVD are more robust than the "S" coefficients in the first level.

Since the host images are selected from two databases, all images selected from http://sipi.usc.edu/database are called normal images, and all images chosen from the http://radiopedia.org/encyclopesia/cases/all database are referred to as medical images. The 60 host images were exposed to different types of geometric and signal processing attacks, and the cover images were examined based on the performance metric in Equation (21). The image types include both medical and normal images. The examined attacks are Gaussian noise 0.01, gamma correction 0.1, average filter 3×3, crop 1/2, salt and pepper 0.01, scaling 1/2, speckle 0.01, median filtering 3×3, and rotation 50. The Normalized Correlation coefficient (NC) is calculated for the "S" coefficients in the first and second levels of SVD. The main reason behind this comparison is to understand how similar the "S" coefficients are before and after attacks. Thus, once the first-level SVD is performed on the image, the NC is checked for the first level of SVD by the following formula:

NC1 = Σ(S1 · S1*) / √(Σ S1² · Σ S1*²) (21)

where S1 holds the first-level S coefficients of the SVD of the host image before attack and S1* holds the first-level S coefficients of the SVD of the host image after attack. By means of this, the stability of the S components in the host image at the first level of SVD decomposition is investigated. The NC is then calculated a second time by the analogous formula, but for the second level of SVD of the host image:

NC2 = Σ(S2 · S2*) / √(Σ S2² · Σ S2*²) (22)

In order to prove the stability and resistance of the "S" coefficients in the second level of SVD, NC1, resulting from the comparison of the "S" coefficients in the first level, is compared to NC2, resulting from the comparison of the "S" coefficients in the second level. A larger NC indicates more stability and a greater ability to resist attacks. The test is performed on different image sizes from both medical and normal images. In the following, the average NC for the first and second levels of SVD among 33 images of size 512 × 512, 9 images of 1024 × 1024, and 18 images of 256 × 256 is compared. The experimental results are shown in Tables 1-3. As shown in Figure 5 and Table 1, it is clear that the "S" coefficients in the second level of SVD are more stable and their resistance against attacks is better than that of the "S" coefficients in the first level. This experiment was repeated for other image sizes, such as 1024 × 1024 and 256 × 256, and all the results confirm this finding. Figure 6 and Table 2 illustrate the average NC for the "S" coefficients in the first and second levels of SVD for several normal and medical images of size 1024 × 1024. As shown in Figure 6, in all of the attacks, NC for SVD 2 (the "S" coefficients in the second level of SVD) is greater than NC for SVD 1 (the "S" coefficients in the first level).
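A sketch of the NC comparison of Equations (21) and (22) is given below: it attacks a host with additive Gaussian noise and compares the normalized correlation of the first-level and second-level S coefficients before and after the attack. The noise level and the single attack type are illustrative assumptions; the study used nine attack types on 60 images.

```python
# Compare NC of first- and second-level S coefficients under a noise attack.
import numpy as np

def nc(s, s_star):
    """Normalized correlation between singular-value vectors (Equation (21))."""
    return float(np.sum(s * s_star) /
                 np.sqrt(np.sum(s ** 2) * np.sum(s_star ** 2)))

def first_and_second_s(a, n=8):
    """First-level S of the whole image and second-level S of the S(1,1) matrix."""
    s1 = np.linalg.svd(a, compute_uv=False)
    rows, cols = a.shape[0] // n, a.shape[1] // n
    b = np.array([[np.linalg.svd(a[i*n:(i+1)*n, j*n:(j+1)*n],
                                 compute_uv=False)[0]
                   for j in range(cols)] for i in range(rows)])
    s2 = np.linalg.svd(b, compute_uv=False)
    return s1, s2

rng = np.random.default_rng(3)
host = rng.random((256, 256))
attacked = host + rng.normal(0, 0.01, host.shape)   # Gaussian noise 0.01

s1, s2 = first_and_second_s(host)
s1_a, s2_a = first_and_second_s(attacked)
print(f"NC1 = {nc(s1, s1_a):.5f}, NC2 = {nc(s2, s2_a):.5f}")
```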
SVD2 stands for the "S" coefficients in the second level of SVD, while SVD1 represents the "S" coefficients in the first level of SVD. The same results as in the previous experiment demonstrate the resistance of the "S" coefficients in the second level of SVD against the attacks and the superiority of these coefficients in comparison to the "S" coefficients in the first level of SVD. As represented in Figure 7 and Table 3, for all of the attacks, the NC for SVD2 is greater than the NC for SVD1. Regarding this experiment, the stability and superiority of the "S" coefficients in the second level of SVD are demonstrated across various types and sizes of images. As a result, these coefficients are considered a good host for image hiding. In this experiment, all normal images are taken from the USC-SIPI image database at http://sipi.usc.edu/database, and all medical images are taken from http://radiopedia.org/encyclopesia/cases/all, radiology cases including real samples from patients. This database holds a variety of real medical cases with the different modalities of MRI, CT, and X-ray.
Conclusion and Future Work In this paper no specific implementation is presented. The aim is to ensure that our investigations lead to a successful implementation without wasting time, money, and resources. The focus is to find regions of interest in the host image that are stable points for image watermarking, such that the least alteration occurs when they are faced with signal-processing and geometric distortions. For this purpose, basic theories to enhance the robustness were first highlighted and then proved mathematically. The results of this theoretical and mathematical study led to using the second level of SVD to increase the stability of the region of interest in which watermarks are embedded. More stability leads to more robustness. After the mathematical proof, it was necessary to show the robustness of the "S" coefficients in the second level of SVD experimentally. For this purpose, an experiment was conducted on various host images before embedding the watermark. In this experiment, 60 normal and medical host images were exposed to geometric and signal-processing attacks, and the stability of the "S" coefficients in the first and second level of SVD was compared. The experimental results proved the superiority of the "S" coefficients in the second level of SVD over the first level in terms of robustness. Future work is to develop an image watermarking scheme based on the second level of SVD and to devise an authentication system in order to eliminate the false-positive detection caused by using "S" coefficients in SVD.
Figure 2: Selected "S" coefficients of the second level of SVD. Figure 3: New values of an image after one-level DWT decomposition. Figure 5: Comparison of stability of "S" coefficients in the first and second level of SVD for 33 images of size 512 × 512. Table 1: Average NC for "S" coefficients in the first and second level of SVD for 33 images of size 512 × 512. Table 2: Average NC for "S" coefficients in the first and second level of SVD for 9 images of size 1024 × 1024. Table 3: Average NC for "S" coefficients in the first and second level of SVD for 18 images of size 256 × 256.
2018-11-03T13:03:15.998Z
2018-10-10T00:00:00.000
{ "year": 2018, "sha1": "d130bc061bb4e236b01d266b1f8a96bba612d867", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/am/2018/1609378.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "d130bc061bb4e236b01d266b1f8a96bba612d867", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
55320813
pes2o/s2orc
v3-fos-license
Physical Fitness of Elderly University Extension Project Participants, Weight-Training Practitioners, and Welcoming House Residents The strengthening of the elements of physical fitness through physical activity is extremely important for the elderly and is directly associated with independence and autonomy in their day-to-day lives. This study aimed to evaluate physical fitness in three different groups of seniors, namely elderly participants of a university extension project with physical activity, weight-training practitioners in conventional gyms, and residents of shelters without physical activity. A total of 110 elderly people were invited on a voluntary basis and divided into G1, G2, and G3, respectively. They were assessed through a battery of fitness tests adapted to the elderly, with evaluations of lower- and upper-limb strength, lower- and upper-limb flexibility, speed, agility and balance, and aerobic endurance; body mass index (BMI), waist-hip ratio, systolic blood pressure, and diastolic blood pressure were also evaluated. The results show that G3 was rated below the other groups in all variables, except for BMI compared with G1 and G2 and for the back-scratch test compared with G1; thus, G3 was rated at risk of loss of functional mobility in most tests. The conclusion is that the inclusion of older people in physical activity is needed for the maintenance and improvement of physical fitness, thus favoring quality of life in the aging process. Introduction Aging is a dynamic and progressive process in which morphological, functional, and biochemical changes can interfere with an individual's ability to adapt to the social environment in which he or she lives, making the individual more vulnerable to injuries and diseases and thus compromising quality of life [1]. It is predicted that in Brazil the population over 60 years of age will be approximately 11% of the population by 2020 and 26.7% of the total in 2060 [2]. Regarding healthy aging, Maciel [3] mentions that the adoption of an active lifestyle provides many health benefits, since it is considered an important component for improving the quality of life and functional independence of the elderly. Physical activity acts as an important tool, maintaining and/or improving physical fitness levels with advancing age and thus reducing functional limitations [4]. Strengthening of the elements of physical fitness is extremely important for the elderly and is directly associated with independence and autonomy in their day-to-day lives [5]. According to Silveira [6], physical fitness can be divided into biological attributes, such as strength and muscular endurance, flexibility, aerobic capacity, and weight control, which offer some protection against the appearance of organic disorders caused by a sedentary lifestyle, and performance-related attributes, which involve a number of components related to sports or work performance, such as agility, balance, coordination, power, and speed of displacement and muscle reaction. Exercise can produce a profound improvement in functions essential for the physical fitness of the elderly. Weight training, for example, is an important tool for the improvement of physical fitness, independence, and quality of life in this population. Increases in strength and muscle power, important for maintaining independence and reducing falls among the elderly, can be observed after a few weeks of weight training [7]. In turn, systematic group physical exercise favors adherence by the elderly, bringing health benefits and enhancing both biological and psychosocial aspects [8].
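For reference, the two anthropometric indices evaluated in this study are simple ratios; the helper below is our own illustration (not from the paper), and the example values are hypothetical:

def bmi(weight_kg, height_m):
    """Body mass index, kg/m^2."""
    return weight_kg / height_m ** 2

def whr(waist_cm, hip_cm):
    """Waist-hip ratio (dimensionless)."""
    return waist_cm / hip_cm

print(f"BMI: {bmi(72.0, 1.62):.1f} kg/m^2  WHR: {whr(94.0, 102.0):.2f}")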
A sedentary lifestyle can accelerate the aging process through limitations of functional fitness [9]. Thus, for Coelho et al. [8], it is of great importance to preserve physical capacities through exercise, since they dictate the pace of the activities of daily living; this is more evident at ages when the decline in body functionality is more pronounced, making simple activities difficult and compromising quality of life. Given the importance of physical exercise for the maintenance and improvement of the components of physical fitness of the elderly, this study aimed to evaluate the fitness of three groups of elderly people, divided into elderly participants of a university extension project with physical activity, weight-training practitioners in conventional gyms, and shelter residents without physical activity. Sample For the study, 110 elderly people aged 60 years or over, of both sexes, from the city of Canoas and surroundings were invited on a voluntary basis. Participants were divided into three groups: G1, elderly participants of a university extension project; G2, weight-training practitioners, both groups with a minimum of 6 months of physical exercise; and G3, elderly residents of welcoming houses (shelters) without physical exercise. Data collection procedure Physical fitness was assessed through the Rikli & Jones [10] battery of tests, with ratings of lower-limb power (chair stand test), upper-limb strength (arm curl), lower-limb flexibility (sit-and-reach test), speed, agility, and balance (2.44 m walk-and-return-to-sit test), upper-limb flexibility (back scratch), and aerobic endurance (6-minute walk test). Waist, abdomen, and hip circumferences were also measured to calculate the waist-hip ratio (WHR), as well as height and weight to calculate the body mass index (BMI); systolic blood pressure (SBP) and diastolic blood pressure (DBP) were checked, with hypertensive elderly participants keeping their blood pressure (BP) controlled by drug treatment. The tests were conducted on random days for each group, on days when the participants of G1 and G2 had not exercised, so that this would not interfere with the outcomes. Testing began with all participants resting for 5 minutes before BP was checked with a Citizen CH-602B digital blood pressure monitor. Soon after, weight was checked with a Plenna digital scale (100 g precision), height with a stadiometer, and the circumferences of waist, abdomen, and hip with a tape measure. The physical tests were then initiated, the last one being the 6-minute walk test. The results were recorded on an individual card per participant and entered in Excel 2013. Statistical Analysis To compare the means of the variables BMI, WHR, the arm curl test, and the 6-minute walk test, ANOVA was used, and to detect differences between groups, the Bonferroni post hoc test was used. For the chair stand, sit-and-reach, back scratch, agility/balance, SBP, and DBP variables, which did not show a normal distribution, the nonparametric Kruskal-Wallis test was used to compare group means. Ethical Aspects All participants signed the free and informed consent form, being informed about the research procedures and voluntarily accepting to participate in the study.
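The paper does not name the statistical software used; as a hedged sketch, the same pipeline (ANOVA with Bonferroni-corrected pairwise comparisons for the normally distributed variables, Kruskal-Wallis for the non-normal ones) could look as follows in Python, with synthetic placeholder data standing in for the three groups:

import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(1)
groups = {"G1": rng.normal(27, 3, 40),   # placeholder BMI-like values per group
          "G2": rng.normal(26, 3, 40),
          "G3": rng.normal(24, 3, 30)}

# Normally distributed variables (BMI, WHR, arm curl, 6-minute walk): one-way ANOVA
f, p = stats.f_oneway(*groups.values())
print(f"ANOVA: F={f:.2f}, p={p:.4f}")

# Bonferroni post hoc: pairwise t-tests against a corrected alpha
pairs = list(combinations(groups, 2))
alpha = 0.05 / len(pairs)
for a, b in pairs:
    t, pp = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: p={pp:.4f} ({'significant' if pp < alpha else 'n.s.'})")

# Non-normal variables (chair stand, sit-and-reach, back scratch, ...): Kruskal-Wallis
h, pk = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis: H={h:.2f}, p={pk:.4f}")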
Results Table 1 below shows the results of the Rikli & Jones [10] battery of fitness tests for the elderly, presenting the results by group, the number of participants in each group, means, standard deviations, ANOVA test results, and significance levels. Evaluating the results, with respect to BMI the groups showed a significant difference (p < 0.05) only between G2 and G3. In relation to age and WHR, G1, G2, and G3 showed significant differences (p = 0.000), as well as in the arm curl test (p = 0.001) and the 6-minute walk test, although for the latter the significance between G1 and G2 was p = 0.006 and between G3 and the other groups was p = 0.000. The chair stand, sit-and-reach, back scratch, agility/balance, SBP, and DBP variables did not show a normal distribution. BMI remained in the normal classification in G3, but G1 and G2 presented a classification of overweight, while WHR presented high, moderate, and very high ratings in G1, G2, and G3, respectively [11]. Comparing the results of the fitness tests in isolation between G1, G2, and G3 based on the Rikli & Jones [11] classification, in the 6-minute walk, chair stand, sit-and-reach, and agility/balance tests, G1 and G2 maintained a normal classification while G3 presented a risk of loss of functional mobility. In the arm curl test, G1 showed a normal rating, G2 was above average, and G3 was at risk of loss of functional mobility; in the back-scratch test, G1 was below average while G2 and G3 were considered normal. Finally, the SBP/DBP classification remained normal in G1, G2, and G3 [12]. The above results show that G3 was rated below the other groups in all variables, except for BMI compared with G1 and G2 and for the back-scratch test compared with G1; thus, G3 was rated at risk of loss of functional mobility in most tests. The highest scores were evident in G2, except for the variables BMI and SBP/DBP and the 6-minute walk test, the latter with a better ranking in G1 (Table 1). Discussion Aging alone leads to loss of functional skills and cognitive functioning [5], and this process is aggravated when one does not have the habit of regular physical exercise, as shown by the results presented above, where G1 and G2 (exercise practitioners) showed better results in most physical fitness tests when compared to G3 (non-practitioners). It was possible to assess that, in relation to BMI, G3 remained within normal limits even though it was the sedentary group, while G1 and G2 were overweight; this may be explained by the fact that diet was not controlled during the study and the diet adopted by the elderly home was unknown. SBP and DBP remained in the normal rating in G1, G2, and G3, which we believe is due to blood pressure control in hypertensive elderly through the use of antihypertensive drugs in the three groups. Physical fitness showed better ratings in all tests in G1 and G2 compared with G3, except for the back-scratch test, in which only G1 rated below G3. This disagrees with the study of Fidelis et al. [13], with 74 elderly people divided into two groups of 37 individuals each (practitioners and non-practitioners of physical activity), which found significant differences between the groups with respect to flexibility, demonstrating the efficacy of supervised exercise training in improving flexibility. Regarding the difference in the fitness levels of G1 and G2 compared with G3, our findings corroborate the literature review of Lamb et al.
[14], in which it was possible to see the benefits provided by physical activity in relation to maintaining the health and functional capacity of the elderly, allowing us to deduce that a sedentary lifestyle can accelerate the decrease of functional capacity, leading to dependence in the performance of everyday activities. Still on the benefits of physical activity, comparing practitioners and non-practitioners in the elderly population, the study of Borges [15], with 24 elderly practitioners of physical activities and 24 sedentary elderly people, indicated that practitioners had good levels of autonomy to carry out their daily activities, while the sedentary had more difficulty and even dependency. As in this study, the data presented above show that the absence of physical exercise put G3 at risk of loss of functional capacity, worsening their quality of life and independence. The quality of life of elderly residents of shelters may also be associated with other factors, according to the study of Dágios et al. [16], which showed a lower degree of satisfaction with quality of life compared to non-institutionalized elderly in the four domains (physical, psychological, social relationships, and environment) of the WHOQOL-BREF, and in the six domains (sensory function; autonomy; past, present, and future activities; social participation; death and dying; and intimacy) of the WHOQOL-OLD. Lamb et al. [14], evaluating the quality of life of elderly residents of shelters through the WHOQOL-OLD questionnaire, found an average score of 52.9%, corresponding to a perception of quality of life that was neither satisfactory nor unsatisfactory. The elderly living in long-stay institutions have reduced visual, cognitive, and physical abilities, compromising autonomy and independence, which may be associated with the low quality of life of these seniors. In addition, physical inactivity contributes to the onset of sarcopenia, a problem that occurs due to the loss of muscle mass through the death of motor units and can be avoided through strength training; aerobic exercise also has positive effects on the reduction of muscle loss during aging [17]. An increase in muscle strength after a few weeks of weight training with seniors was observed in the study of Ritti-Dias [7], which can thus help not only with independence but also in reducing the incidence of falls, an event associated with the muscle-wasting process (sarcopenia). And even among institutionalized elderly, a study by Ribeiro et al. [18], with 144 elderly residents of shelters offering exercise programs to their residents in Oporto (Portugal), showed that trained elderly had improved functional mobility and balance, with a consequent reduction in the risk of falling, compared with untrained elderly. Regarding the results presented by G1, showing the relationship between the physical activities offered by the project and the maintenance of the elements of physical fitness, the study of Santos [19], with 141 elderly participants of a university extension program in Florianópolis-SC, also showed that participation in the different leisure activities offered by the program is associated with a good perception of quality of life among the investigated elderly, because all evaluated domains and facets showed averages above 60. Finally, the results of this study and the studies above show the benefits of physical activity for physical fitness and, consequently, for the quality of life of seniors (Table 2).
According to Lamb [14], the importance of physical activity in delaying the declines of aging is evident, and it is necessary to create strategies for the participation of older people in physical activity groups, contributing to improving the quality of life and independence of older people [15-19]. It was observed from the results presented above that the groups practicing physical activities were significantly superior in the results of the fitness tests compared to the group of sedentary individuals. Especially in shelters, the implementation of physical activity programs is necessary so that these elderly people do not become dependent and thereby lose their autonomy. It follows from this study and the other studies presented that there is a need for the inclusion of older people in supervised physical activity for the maintenance and improvement of physical fitness and functional capacity, thus favoring quality of life and independence in the aging process (Table 3). Table 2: You are invited to participate in the research project identified above. The document below contains all the necessary information about the research we are doing. Your cooperation in this study will be of great importance to us, but if you give up at any time, this will not cause any harm to you. The rationale and objectives of this research This research aims to assess the physical fitness of institutionalized elderly, weight-training practitioners, and participants of a community group. The purpose of my participation Your participation will be crucial for this study, helping us determine which of the groups shows better physical fitness results according to the physical activity level of each group. The procedure for data collection The following test will be performed: the Rikli & Jones [10] battery of physical fitness tests. The information collected will be used for purposes of academic study; it will be used only in this research and may, separately, compose other academic studies on related matters. After use, the data material will be disposed of, ensuring the confidentiality of the information (Table 4). Table 4: I agree of my own free will to participate as a volunteer and I am aware of the following. Identification of the principal investigator Name: Telephone: Profession: Council registration number: Email: Address: Discomforts and risks In case of any discomfort or dissatisfaction, inform the assessor so that the test can be stopped. Benefits You can learn your fitness rating and contribute to the assessment of the physical activity programs of the projects so that they can be improved. Exemption and reimbursement of expenses My participation is free of cost and I will not receive compensation, as there will be no expenses in testing. Freedom to refuse, withdraw, or revoke my consent I have the freedom to refuse, withdraw, or discontinue my collaboration in this research at any time I want, without any explanation; my withdrawal will not cause any damage to my health or physical well-being and will not interfere with the study. Guaranteed confidentiality and privacy The results obtained during this study will be kept confidential, but I agree that they may be published in scientific journals, provided that my personal data are not mentioned. Assurance of clarification and information at any time I have the assurance of knowing and obtaining information, at any time, about the procedures and methods used in this study, as well as the results of this research. Therefore, I can consult the responsible researcher, Paola Ferreira dos Santos.
For questions not properly clarified by the researcher(s), disagreement with the procedures, or ethical irregularities, you may contact the Ethics Committee of ULBRA Canoas (RS), at Rua Farroupilha, 8001 - Building 14 - Room 224, Bairro São José, CEP 92425-900, phone (51) 3477-9217, email: comitedeetica@ulbra.br
2019-03-18T14:02:30.994Z
2018-07-05T00:00:00.000
{ "year": 2018, "sha1": "b598ae39907a40d340deb36570dd0d2eb21148b9", "oa_license": "CCBY", "oa_url": "http://crimsonpublishers.com/ggs/pdf/GGS.000571.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "d44b981e97a635fd3d8e916265dc53412ada4a23", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Medicine" ] }
229441592
pes2o/s2orc
v3-fos-license
High-resolution minirhizotrons advance our understanding of root-fungal dynamics in an experimentally warmed peatland
Societal Impact Statement Mycorrhizal fungi enable plants to thrive in the cold, waterlogged, organic soils of boreal peatlands and, with saprotrophic fungi, largely contribute to the sequestration of atmospheric carbon in peat. Hence, fungi support the contribution of peatlands to global climate regulation, on which society depends. Here we used high-resolution minirhizotrons for an unprecedented glimpse of the belowground world of a forested bog and highlighted linkages between environmental change and the abundance, dynamics, and morphology of vascular plant fine roots and fungal mycelium. These changes may have implications for peat carbon accumulation on the boreal landscape.
Summary • Minirhizotron technology has rarely been deployed in peatlands, which has limited our understanding of root-fungal dynamics in one of the planet's most carbon-dense ecosystems. • We used novel, high-resolution minirhizotrons in a forested bog to explore temporal variation in the abundance and growth of plant fine roots and fungal mycelium with changes in peat temperature and moisture. We utilized the framework of the Spruce and Peatland Responses Under Changing Environments experiment and focused on two minirhizotron tubes installed at the coldest (+0, elevated CO2) and warmest (+9°C, elevated CO2) ends of the experimental temperature gradient, respectively. • We found that in warmer and drier peat, ericaceous shrub roots and ectomycorrhizal fungal rhizomorphs were more abundant, and the growth of rhizomorphs and sporocarps was greater. In turn, fine roots of trees, ectomycorrhizas, and dark-colored fungal hyphae were more abundant in colder, wetter peat. Ultimately, the belowground active season for both plant roots and fungi was extended by 62 days at the warmest compared to the coldest end of the gradient, with implications for belowground carbon, water, and nutrient fluxes. • High-resolution minirhizotrons in peatlands provided an unprecedented view of ericaceous shrub and tree fine roots and their mycorrhizal fungal partners in situ.
| INTRODUCTION Ericoid and ectomycorrhizal fungi (ERM and ECM, respectively) dominate mycorrhizal fungal communities of boreal peatlands (Read & Perez-Moreno, 2003; Thormann, 2006) and, along with saprotrophic fungi, play critical roles in peatland biogeochemical cycling.
In particular, the diversity and abundance of living and dead fungal structures are key drivers of peat formation (Juan-Ovejero et al., 2020; Thormann, 2006). As changes in climate rapidly modify the boreal biome, novel challenges await fungi (Page & Baird, 2016). Experiments manipulating climate in boreal peatlands suggest that rising temperature and a potential drawdown of the water table will convert peatlands from ecosystems gaining carbon to ecosystems rapidly losing carbon (Bragazza et al., 2016; Hanson et al., 2020; Hopple et al., 2020; Ise et al., 2008; Jassey & Signarbieux, 2019). This would critically diminish peatland contributions to ecosystem services that range from global climate regulation to more local contributions related to water flow, biodiversity protection, mitigation of natural risks, and sustainment of local economies (Page & Baird, 2016). Globally, peatland conservation is a priority for action aiming at mitigating climate change (Humpenöder et al., 2020; International Union for Conservation of Nature, 2017). Yet, generating more comprehensive conservation strategies in peatlands necessitates a better understanding of belowground peatland processes, especially the keystone role played by fungal mycelium, both now and in response to changing environmental conditions. This is crucial, as the protection of intact peatlands and restoration policies may help turn the land system into a global net carbon sink by 2100, as projected by current mitigation pathways (designed to reach a predefined climate target such as limiting or returning global mean warming to 1.5°C relative to the pre-industrial base period 1850-1900; Humpenöder et al., 2020). The timing, or phenology, of fungal activity, like that of fine roots, has received little attention in terrestrial ecosystems underlain by organic soils (Iversen et al., 2012). This is a serious shortcoming, because fine-root and fungal mycelial phenology is critical to our understanding of the dynamics of carbon fluxes in the planet's most carbon-dense ecosystems (Iversen et al., 2018; Juan-Ovejero et al., 2020; Schwieger et al., 2019). Mixed evidence on fine-root phenology from North American and European peatlands shows that there tends to be an offset of leaf and fine-root growth, but that root growth can occur in single or multiple flushes in the spring and autumn, and that the importance of peat temperature and water table depth varies among sites (Iversen et al., 2018; Schwieger et al., 2019). Studies in boreal forests suggest that mycelial growth peaks during autumn and coincides with periods of maximum fine-root growth (Wallander et al., 2001). In addition, the abundance of medium- and long-distance exploration types was reported to be lower during leaf flush (Hupperts et al., 2017). Our limited knowledge of fine-root and fungal dynamics in peatlands has largely been based upon extrapolation from destructive methods such as sequential soil coring or ingrowth cores (reviewed by Iversen et al., 2012). However, non-destructive technology such as minirhizotrons remains one of the best methods to understand belowground dynamics in natural ecosystems. Yet, the deployment of minirhizotron technology in peatlands has been limited because of the environmental challenges these ecosystems pose (e.g., poor drainage and an accumulation of thick organic horizons) and because of common misconceptions regarding the effectiveness of minirhizotrons in peatlands (Iversen et al., 2012).
Here we peered into the unseen belowground world of a water-saturated peat bog using novel, automated, high-resolution minirhizotron (AMR) technology (Rhizosystems, LLC; Figure 1). This robotic technology provided an unprecedented view of the physical interactions among ericaceous shrub and tree fine roots and their ERM and ECM fungal partners in situ (images showcased at INSIDE THE SPRUCE BOG), which is, to the best of our knowledge, the first of its kind. Prior to their installation in a peat bog, AMRs had been deployed in a meadow ecosystem (Hernandez & Allen, 2013), a neotropical forest (Swanson et al., 2019), and Californian mixed-forest ecosystems (Taniguchi et al., 2018; Vargas & Allen, 2008), where findings highlighted the importance of the dynamics of fungal mycelium in response to changing environmental conditions. Our goal was to explore how AMR technology may advance our understanding of belowground dynamics in peatland ecosystems. This technology thereby advanced our understanding of linkages between environmental change and the abundance, morphology, and dynamics of vascular plant fine roots and fungal mycelium. KEYWORDS: dynamics, fine roots, minirhizotron, mycorrhizal fungi, peatland, phenology, warming. In particular, we investigated temporal variations in the abundance and growth of plant fine roots and fungal mycelium with changing environmental conditions (peat temperature and moisture) in a forested bog. Our investigation was conducted against the backdrop of the Spruce and Peatland Responses Under Changing Environments (SPRUCE) experiment, in which five enclosures spanned a range of warming from +0, +2.25, +4.5, +6.75, to +9°C, while another five enclosures spanned the same range of warming and were also exposed to elevated CO2 (e[CO2]; +500 ppm above ambient), therefore creating a wide set of environmental conditions. Here we focused on two enclosures, the unheated (+0, e[CO2]) and the warmest (+9°C, e[CO2]), because (a) they are at the two extremes of the experimental temperature gradient, (b) both temperature and CO2, which are interdependent climate change factors, are manipulated in these enclosures, and (c) both enclosures had the longest and best AMR data records. To date, findings on belowground dynamics at SPRUCE obtained using litter bags, root ingrowth cores, and ion-exchange resins suggest that whole-ecosystem warming and associated peat drying have increased mycorrhizal necromass decomposition rates, ericaceous shrub root growth (partly due to an extended belowground active season), and plant-available nutrients (Iversen et al., unpublished data; Figure 2). However, few effects of e[CO2] on belowground processes have been observed (e.g., Malhotra et al., 2020), similar to the trend observed in a peatland mesocosm subjected to warming, e[CO2], and a lowered water table (Asemaninejad et al., 2018).
Building on these results, we zoomed in on the plant-fungal symbiosis to investigate linkages between belowground dynamics and changing environmental conditions both within (seasonal patterns) and between the two enclosures selected. We hypothesized that: (a) the abundance of vascular plant fine roots and fungal rhizomorphs would be higher in autumn and with warmer peat temperatures (e.g., in the warmed enclosure), especially at depth, as plants may have more aerobic peat available for colonization under drier conditions, (b) the abundance of dark-colored ECM fungal structures would be higher in the unheated enclosure due to the decline of black spruce with warming (which has roots mostly colonized by melanized fungi), (c) fungal growth would be offset from fine-root production because the growth of mycorrhizal fungi is mainly dependent on newly produced photosynthates, and (d) the belowground active season for both plant roots and fungi would be extended in the warmed enclosure compared to the unheated enclosure. | Site description The backdrop of our zoomed-in investigation of root-fungal dynamics in a bog was the SPRUCE experiment, located in the forested S1 Generally, sedges are non-mycorrhizal in wet and waterlogged environments (Muthukumar et al., 2004). The Sphagnum moss layer is dominated by S. angustifolium and S. fallax in depressed hollow microtopography and S. magellanicum on raised hummock microtopography. | Automated minirhizotron installation The AMR system (Rhizosystems, LLC) is a newly developed technology that provides an unprecedented glimpse at root-fungal dynamics over time and throughout the peat profile; our study is the first time this technology has been used in a bog ecosystem. The AMRs are each composed of a USB-port microscope (ProScope camera) connected to a local computer and placed on a sled that moves within a clear acrylic minirhizotron tube that is 157 cm long × 10.8 cm wide ( Figure 1; INSIDE THE SPRUCE BOG). An AMR tube is imaged in full in ~24 hr and a full-tube scan consists of 33,784 individual images that measure 3.01 × 2.26 × 0.125 mm (x, y, z = depth of field; 640 × 480 pixels at 100 × magnification). The AMR system is fully sealed, and a pump system cycles air through a desiccant to avoid condensation on the inside of the tube, which could obscure images ( Figure 1 ). Images are transmitted to a server through a wired F I G U R E 2 Changes in above-and belowground communities along the temperature gradient of the Spruce and Peatland Responses Under Changing Environments (SPRUCE) experiment. In the SPRUCE experiment five enclosures span a range of warming from +0, +2.25, +4.5, +6.75, to +9°C, while another five enclosures spanned the same range of warming and were also exposed to elevated CO 2 (e[CO 2 ]; +500 ppm above ambient). Only responses to 5 years of whole-ecosystem warming are displayed on the diagram and include: (i) an increase in mycorrhizal necromass decomposition rates , (ii) an increase in ericaceous shrub growth (above-and belowground; Hanson et al., 2020;Malhotra et al., 2020) (iii) an increase in plant-available nutrients (Iversen et al., unpublished Dusenge et al., 2020). In the present study, we focused on two automated, high-resolution minirhizotron tubes installed at the coldest and warmest ends of the experimental temperature gradient, respectively. The dark-green coniferous trees represent black spruce (Picea mariana) and the light-green and brown trees represent larch (Larix laricina). 
In 2014, each of the ten SPRUCE enclosures received one AMR, installed at an ~45° angle into a pre-made hole in raised hummock microtopography. Only one AMR was installed in each enclosure due to the high production cost of this technology (as the market is limited) and its prototype status; we focused on raised hummock microtopography because a previous investigation of the spatial distribution of fine roots across the S1 bog found that raised, drier hummocks had the greatest root standing crop (Iversen et al., 2018). The calculated depth of each image or group of images accounts for the angle of installation, and we considered "0" cm depth to be where the tube met the peat surface of the hummock. Full tube scans were archived weekly or bi-weekly beginning in early June 2014 and will continue to be archived for the duration of the SPRUCE experiment.
| Image analyses Here we focused on two AMR tubes, one in the unheated (+0, e[CO2]) and one in the warmest (+9°C, e[CO2]) enclosure. Full tube scans were collected weekly or bi-weekly from autumn 2018 to spring 2020. In both enclosures, P. mariana was the nearest tree species to the AMRs (the unheated enclosure had 13 P. mariana trees and one L. laricina tree, and the warmed enclosure had 18 P. mariana trees and six L. laricina trees; see Childs et al., 2020; data citation). To deal with the overwhelming number of images (33,784 images per tube per imaging date), we followed two complementary approaches: (a) tube-level assessments and (b) patch-level assessments (Figure S2; Ripley, 2005).
| Tube-level assessments To examine shifts in root and fungal abundance with changing environmental conditions, we surveyed 48 sub-mosaics of 36 images each for both the unheated and the warmest enclosures from autumn 2018 to spring 2020 (1,728 images per imaging date). We imposed a regular sampling grid on a given tube mosaic (from 0 to ~30 cm peat depth) in order to sample belowground patches proportionally to their area in the tube mosaic (Ripley, 2005; Figure S2A). For each sub-mosaic, at each imaging session, we estimated the abundance of each belowground class (e.g., tree and ericaceous shrub fine roots, ectomycorrhizas, fungal rhizomorphs, fungal hyphae, and sporocarps). In addition, we attributed the shade "light" or "dark" to ectomycorrhizas and fungal hyphae as a proxy of the level of melanization (Fernandez et al., 2013; fungal rhizomorphs were exclusively light-colored). Fine roots and ectomycorrhizas were not separated by tree species or fungal species, and fungal classes were not separated by fungal guild (saprotrophic or mycorrhizal) because this was impossible using images only. The abundance of each of our defined classes was expressed as the percent of the sub-mosaic surface where the class was present (data citation).
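As a schematic illustration of this tube-level estimate (our sketch, not the authors' pipeline; "percent of the sub-mosaic surface" is approximated here by the fraction of a sub-mosaic's 36 grid images in which a class is visible, and the flagged images are hypothetical):

import numpy as np

def class_abundance(presence):
    """presence: boolean flags, one per image of a 36-image sub-mosaic."""
    return 100.0 * np.mean(presence)

submosaic = np.zeros(36, dtype=bool)
submosaic[[0, 5, 9, 14]] = True      # images where, e.g., shrub roots are visible
print(f"abundance: {class_abundance(submosaic):.1f}% of the sub-mosaic surface")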
| Patch-level assessments To examine root and fungal phenology, for each tube five sub-mosaics of 36 images were arbitrarily chosen to encompass one belowground patch where one or more classes were present (DYNAMICS; Figure S2B). Out of the five sub-mosaics for the +9, e[CO2] enclosure, one was within the 0-10 cm peat depth range, three within the 10-20 cm range, and one within the 20-30 cm range. For the +0, e[CO2] enclosure, four sub-mosaics were within the 0-10 cm range and one within the 10-20 cm range. All sub-mosaics were analyzed by the same person to obtain the length of each individual root or fungal structure (except fungal hyphae, which did not grow linearly but rather increased in areal coverage) per sub-mosaic area using RooTracker (Duke University). Growth phenology (October 2018-March 2020) was calculated as the length extension of each structure (e.g., root, rhizomorph, etc.) at each imaging session divided by the surface area of the sub-mosaic and the number of days between imaging sessions (µm cm−2 day−1). To assess the growth phenology of fungal hyphae, we measured the surface area covered by the hyphae at each imaging session using ImageJ (Fiji) and divided it by the surface area of the sub-mosaic. We averaged the growth phenology of each class across the five sub-mosaics per tube (in some cases averaging in '0' growth where no growth was observed) and treated negative production as zero (data citation).
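A minimal worked example of the growth-phenology calculation just described (units µm cm−2 day−1); the function and the example values below are hypothetical placeholders, and negative production is treated as zero as in the text:

from datetime import date

def growth_rate(len_prev_um, len_now_um, area_cm2, d_prev, d_now):
    """Length extension per sub-mosaic area per day (um cm^-2 day^-1)."""
    days = (d_now - d_prev).days
    ext = max(len_now_um - len_prev_um, 0.0)   # negative production counts as zero
    return ext / area_cm2 / days

# e.g., a rhizomorph extending from 1200 to 1950 um in a 24.5 cm^2 sub-mosaic
rate = growth_rate(1200.0, 1950.0, 24.5, date(2019, 8, 1), date(2019, 8, 15))
print(f"{rate:.1f} um cm^-2 day^-1")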
| Environmental variables Given our focus on just two experimental endpoints within the SPRUCE experiment, we considered root and fungal dynamics in relation to the quantifiable effects of experimental warming and e[CO2] on edaphic conditions in the surrounding peat, using SPRUCE as a proxy for changing environmental conditions. Hummock temperature (°C) and peat volumetric water content (cm3 H2O/cm3 peat) were obtained from Hanson et al. (2016). Hummock temperature (measured with thermistors, model HMP-155; Vaisala, Inc.) is the average of the soil temperature (30-min measurements were first averaged daily) at hummock heights 0, +10, and +20 cm above the level of the hollow (which was defined as "0 cm"), and volumetric water content is the average of data (30-min measurements were first averaged daily) from three sensors (10HS; Decagon Devices, Inc.) that were installed laterally at about +15 to +20 cm in a hummock. Water table height with respect to mean hollows (m; hollow = 0) was measured using sensors (2,000 mm TruTrack Water and Temperature Voltage Output) installed within well casings and was also obtained from Hanson et al. (2016).
| Data analyses Data analyses were conducted in R version 3.6.1 (R Core Team, 2019). Due to the absence of replication in our study (one tube per enclosure), we focused our data analyses on the responses of root-fungal dynamics to quantified changes in peat temperature and moisture over time and across tubes. We averaged the continuously collected environmental data to obtain one value for each imaging date and, to test the effects of temperature and peat volumetric water content on the abundance and growth of roots and fungal mycelium both within and between enclosures, we fitted generalized linear mixed models using adaptive Gaussian quadrature (R package 'GLMMadaptive'; Rizopoulos, 2020). This package is useful for the analysis of over-dispersed data with an excess of zeros and repeated measures. For each belowground class, we fitted a zero-inflated negative binomial model using the "mixed_model" function. This model is a mixture of a count distribution (negative binomial part) and a binary distribution (zero-inflated part). Temperature and volumetric water content data were centered and scaled using the "scale" function and were both added as fixed effects in the negative binomial and zero-inflated parts. Time was added as a random effect only in the negative binomial part, and the three depths were not separated. The models had the following structure: belowground class ~ temperature + moisture, random = 1|date, family = zero-inflated negative binomial, zero-inflated (factors) = temperature + moisture. We used the DHARMa package (Hartig, 2020) to evaluate the goodness of fit of the models; here we report both statistically significant results at p < .05 and marginally significant results at p < .1.
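The authors fitted this model in R with GLMMadaptive::mixed_model; as a rough, hedged Python analogue of the zero-inflated negative binomial specification (statsmodels offers no random effect for date, so that term is omitted here, and all data below are synthetic placeholders, not the study's data):

import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(7)
n = 300
temp = rng.normal(size=n)                  # centered/scaled hummock temperature
moist = rng.normal(size=n)                 # centered/scaled volumetric water content
lam = np.exp(0.4 * temp - 0.2 * moist + 0.5)
counts = rng.poisson(lam) * rng.binomial(1, 0.6, n)   # zero-inflated abundance counts

X = sm.add_constant(np.column_stack([temp, moist]))   # fixed effects in both parts
model = ZeroInflatedNegativeBinomialP(counts, X, exog_infl=X, inflation="logit")
result = model.fit(method="bfgs", maxiter=500, disp=False)
print(result.summary())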
| Tube-level assessments (abundance) In both enclosures, the abundance of ericaceous shrub and tree roots was significantly related to seasonal patterns in hummock temperature and water content (Figure 4; Table S1A). Shrub root abundance decreased when the peat became wetter in the spring, and tree root abundance decreased when the peat became warmer in the summer. Fungal mycelial structures were more abundant when temperature and peat water content started to drop in autumn, in both enclosures (Figure 4). Accordingly, the mixed-model analysis suggested that temperature negatively predicts the abundance of light and dark hyphae, although it did not significantly predict the abundance of fungal rhizomorphs. Lastly, the presence (but not the abundance) of sporocarps was significantly negatively related to peat water content (Table S1A). Over the entire time period, ericaceous shrub roots were mainly constrained to shallow peat and were about five times less abundant in the unheated than in the warmed enclosure. In particular, shrub roots were present on 4% and 19% of the sub-mosaic surface on average in the 0-10 cm peat layer for the unheated and warmed enclosures, respectively (Figure 4). By contrast, tree fine roots and ectomycorrhizas appeared to be evenly distributed throughout the peat profile (from 0 to ~30 cm peat depth), and their abundance decreased with hummock temperature between enclosures: over the whole peat profile, tree fine roots and ectomycorrhizas (dark and light) were present on 7% and 0.6% of the sub-mosaic surface, respectively, in the unheated enclosure, compared to 2% and 0.3%, respectively, in the warmed enclosure. Ectomycorrhizas present in the unheated enclosure were also darker than those present in the warmed enclosure. In turn, the abundance of both light and dark fungal hyphae sharply decreased with hummock temperature: hyphae were present on about 40% of the sub-mosaic surface in the unheated enclosure, while they were present on about 0%-10% of the sub-mosaic surface in the warmed enclosure. In the SPRUCE bog, Helotiales and Meliniomyces spp. dominate the ERM fungal community (Kennedy et al., 2018) and form non-aggregated mycelia of dark hyphae with thick melanized cell walls (Clemmensen et al., 2015; Martino et al., 2018). It is thus plausible that a sizeable fraction of the dark hyphae we observed in the unheated SPRUCE enclosure are of ERM origin, as well as from Cenococcum geophilum (Fernandez et al., 2013). Fungal rhizomorph and sporocarp abundance increased with hummock temperature between enclosures, especially in deeper peat. Fungal sporocarps were absent in the unheated enclosure and, when found in the warmed enclosure, were constrained to 10-20 cm depth.
| Patch-level assessments (phenology) For both enclosures, the daily growth of tree and ericaceous shrub roots was significantly negatively related to daily changes in peat temperature and, for shrub roots only, to peat water content (Table S1B), while daily rhizomorph production was significantly positively related to daily temperature. Furthermore, temperature was a good indicator of the occurrence of rhizomorph production, as predicted by the zero-inflated portion of the mixed model (Table S1B). In both enclosures, tree and shrub roots had a bimodal pattern of production (focused in the spring and autumn), with higher growth in the spring (Figure 5). Tree and shrub root production reached 200 and 100 µm cm−2 day−1, respectively, in the unheated enclosure, but reached 300 µm cm−2 day−1 and almost 400 µm cm−2 day−1, respectively, in the warmed enclosure. The percent cover of light and dark fungal hyphae peaked in the spring and autumn (similar to tree and shrub roots) and was strongly reduced in the +9, e[CO2] enclosure (no dark hyphae were produced in the warmed enclosure; Figure 5). In contrast to fine-root and fungal hyphal dynamics, ectomycorrhizas, fungal rhizomorphs, and sporocarps had a unimodal pattern of production (autumn), which was offset from root phenology. The growth of rhizomorphs was higher in the warmed enclosure, reaching 1,000 µm cm−2 day−1 in September, while belowground sporocarps were produced only in the warmed enclosure. Furthermore, the belowground active season (duration of root and mycelium production) increased from c. 181 days in the unheated enclosure to c. 243 days in the warmed enclosure.
FIGURE 4 Relative abundance (%) of fine roots and fungal mycelium from autumn 2018 to spring 2020 (n = 1) in a forested bog where the Spruce and Peatland Responses Under Changing Environments (SPRUCE) experiment is located. Data were obtained from images captured using one automated, high-resolution minirhizotron installed in the unheated (+0, elevated [CO2]) and in the warmest (+9°C, elevated [CO2]) SPRUCE enclosures (in raised hummock microtopography). Root types are color-coded in black, and fungal mycelial structures are color-coded in orange; the bottom panels show the associated time series of hummock temperature, relative water table depth, and peat moisture content. The dashed line for "hollow surface" is relevant only for understanding the relative water table depth.
| DISCUSSION We used high-resolution minirhizotrons to peer belowground in a boreal, forested bog that is the location of one of the world's largest peatland warming experiments. Despite the lack of replication in our study, the high-resolution minirhizotrons highlighted striking differences in the abundance and phenology of plant fine roots and fungal mycelium between the two extremes of the experimental temperature gradient at SPRUCE (+0, e[CO2] vs. +9°C, e[CO2]). At the warmer and drier end of the temperature gradient, ericaceous shrub roots and ectomycorrhizal fungal rhizomorphs were more abundant, and the growth of rhizomorphs and sporocarps was greater than at the colder, wetter end of the gradient, which was characterized by a higher abundance of tree fine roots, dark-colored ectomycorrhizas, and fungal hyphae (Figure 2). In turn, the belowground active season for both plant roots and fungi was extended by 62 days in the warmed enclosure compared to the unheated enclosure.
| Trade-offs between shrub roots and ericoid fungi Ericaceous shrub fine-root abundance was higher in hummock microtopography at the warmest end of the SPRUCE temperature gradient, which partially supports our first hypothesis that vascular plant fine roots would be more abundant in the warmed enclosure. This is in line with Malhotra et al. (2020), who utilized the whole temperature gradient at SPRUCE and found, using root ingrowth cores in which fungal mycelium was not examined, that shrub fine-root production increased linearly by 1.2 km m−2 year−1 per degree increase in soil temperature. These results suggest that ericaceous shrubs may dominate the belowground plant community in warmer boreal peatlands, potentially leading to a decline in peat carbon accumulation, because shrub roots exude labile carbon compounds that prime the rhizosphere and thus increase heterotrophic decomposition (Bragazza et al., 2013; Lin et al., 2014; Gavazov et al., 2018; but see Basiliko et al., 2012). Ericaceous shrubs may also dominate the aboveground plant community in warmer peatlands, as observed at SPRUCE (for example, shrub annual net primary productivity was ~111.3 g C m−2 year−1 in the unheated enclosure compared to ~140.6 g C m−2 year−1 in the warmest enclosures in 2018) and in Canadian and European peatlands (Bragazza et al., 2016; Chong et al., 2012). This could further reduce peat formation because growing shrubs shade out peat-forming Sphagnum mosses (Bragazza et al., 2016; Chong et al., 2012; Norby et al., 2019). Given that ericaceous shrub root production, biomass, length, and abundance are higher at the warmest end of the SPRUCE gradient (this study; Figure 2), we expect that shrubs will rely more on direct root resource uptake than on ERM fungal mycelium as peatlands become warmer (as hypothesized at a global level by Bergmann et al., 2020). Indeed, we found that the abundance of dark fungal hyphae, which could be ERM in origin and from Cenococcum geophilum, was lower in the warmed enclosure (Figure 4). Our expectation is consistent with the model of Baskaran et al. (2017), which predicts that plants rely more on direct root uptake under conditions of high nitrogen availability, and with Iversen et al. (unpublished data). The increase in ericaceous shrub root abundance and the loss of ERM fungi, which produce necromass known to be highly recalcitrant (Clemmensen et al., 2015; Fernandez et al., 2019; Thormann, 2006) and which potentially produce mycelial networks (Grelet et al., 2010), may have implications for peat carbon accumulation.
FIGURE 5 Growth phenology of roots and fungal mycelium (µm cm−2 day−1) from autumn 2018 to spring 2020 (n = 1) in a forested bog where the Spruce and Peatland Responses Under Changing Environments (SPRUCE) experiment is located. Data were obtained from images captured using one automated, high-resolution minirhizotron installed in the unheated (+0, elevated [CO2]) and in the warmest (+9°C, elevated [CO2]) SPRUCE enclosures (in raised hummock microtopography). Growth phenology (October 2018-March 2020) was calculated as the new appearance or length extension of each structure (e.g., root, rhizomorph, etc., except fungal hyphae) at each imaging session divided by the surface area of the sub-mosaic and the number of days between imaging sessions. The bottom panels show the associated time series of hummock temperature and peat moisture content.
| Rhizomorph abundance While molecular analyses would be required to know the taxonomic identity of the fungi producing the rhizomorphs in the SPRUCE bog, in general a large fraction of the rhizomorph mass in soils is thought to be mycorrhizal and not saprotrophic (Godbold et al., 2006; Read, 1992; Vargas & Allen, 2008). Besides, the high carbon cost of rhizomorph production and maintenance is likely paid by plants (Agerer, 2001; Clemmensen et al., 2015; Hobbie, 2006; Hupperts et al., 2017; see minirhizotron images at FUNGAL MYCELIUM). Therefore, our results may reflect a shift to more carbon-demanding ECM fungi, as observed in other warming experiments in cold ecosystems (Deslippe et al., 2011; Morgado et al., 2015; Salazar et al., 2020). The abundance of fungal rhizomorphs was higher deep in the peat profile of the warmed enclosure, further confirming our first hypothesis. Associating with rhizomorph-forming ECM fungi may also be a strategy for plant hosts to acquire water, especially at depth. Consistent with this idea, we have observed drying of the peat with elevated temperatures, indicating that there may be a larger volume of aerobic peat for roots and fungi to colonize (Figure 4). We thus expect ECM rhizomorphs to facilitate plant water uptake in warmer peatlands because (a) they form vessel-like, hydrophobic elements that can efficiently transport water over long distances (Agerer, 2001; Brownlee et al., 1983) and (b) they form common mycorrhizal networks that may provide a pathway for the transfer of hydraulically lifted water between plants (Egerton-Warburton et al., 2007; Warren et al., 2008). Observed differences in mycelial morphology between the two ends of the temperature gradient at SPRUCE may be related to ECM host performance (Fernandez et al., 2017). At SPRUCE, multiple data streams indicate that L. laricina individuals may be acclimating better to warming than P. mariana individuals (Dusenge et al., 2020; Peters et al., unpublished data; Clemmensen et al., 2015). This mechanism could partly explain the increase in L. laricina leaf nitrogen content and the higher rate of mycorrhizal fungal necromass decomposition with elevated temperatures at SPRUCE.
| Tree fine-root and ectomycorrhiza abundance We hypothesized that tree fine roots would also be more abundant at the warmest end of the temperature gradient, as Malhotra et al. (2020) showed that vascular plant fine-root length increased linearly with temperature across the experimental temperature gradient. Here, focusing on the two extremes of the gradient, we found that tree roots and ectomycorrhizas were less abundant in the warmed enclosure than in the unheated enclosure. In addition, in the warmed enclosure, ectomycorrhizas occurred only as light-colored forms (see minirhizotron images at MYCORRHIZAS; Figure 4), which supports our second hypothesis that the abundance of dark-colored ECM fungal structures would be higher in the unheated enclosure. This implies that the contribution of tree root- and mycorrhizal-derived carbon inputs to the total carbon stored in peat may decrease at the warmest end of the gradient, because non-melanized necromass inputs are highly labile. Decreases in tree fine-root abundance in the warmed enclosure could explain the increase in rhizomorph abundance, because rhizomorphs generally dominate areas of low root density (Mucha et al., 2018; Peay et al., 2011).
The observed decreases in tree fine-root abundance and increases in rhizomorph abundance at the warmest end of the temperature gradient suggest that trees may adopt a strategy opposite to the increasingly "do-it-yourself" resource acquisition of ericaceous shrubs by "outsourcing" resource acquisition to ECM fungal mycelium in a warmer future (e.g., Bergmann et al., 2020).
| Belowground phenology Our third hypothesis, that fine-root phenology would be offset from that of fungi, was supported, as fine-root production peaked in March and in February, while that of fungi peaked in August and September, in the unheated and warmed enclosures, respectively. The peak in fungal growth was largely driven by rhizomorphs. Therefore, ECM fungi may have greatly contributed to the autumn peak in rhizomorph production (Högberg et al., 2010; Hupperts et al., 2017). This is consistent with the increase in rhizomorph growth with elevated temperatures. Indeed, whole-ecosystem warming extends the aboveground active season (Richardson et al., 2018) and increases vascular plant photosynthetic capacity (especially that of L. laricina; Dusenge et al., 2020). As a result, ECM hosts may allocate higher amounts of newly fixed or stored carbon to their mycorrhizal partners. Furthermore, the peak in rhizomorph growth in the warmed enclosure corresponded to the highest seasonal temperatures and lowest peat water contents, which supports our expectation that associating with rhizomorph-forming ECM fungi may be a strategy for plant hosts to acquire water in drying peat soil (Vargas & Allen, 2008). Sporocarps were produced belowground in the warmed enclosure only, in autumn 2018 and 2019, when temperatures were the highest (Figures 4 and 5). In addition, sporocarp abundance increased linearly with warming in autumn 2019, as illustrated in Figure S3. These results could be explained by a potential increase in ECM host-derived carbon allocation belowground because, in boreal forests, mushrooms of ECM origin are known to fruit toward late autumn (Boddy et al., 2014; Högberg et al., 2010; Kauserud et al., 2012); however, more information is needed on ECM fruiting-body phenology in peatlands. We found that the belowground active season increased from c. 181 days in the unheated enclosure to c. 243 days in the warmed enclosure, which supported our fourth hypothesis that the belowground active season for both plant roots and fungi would be extended in the warmed enclosure compared to the unheated enclosure. This may greatly increase root and fungal resource uptake and belowground respiration. Yet, these processes also depend on root and fungal turnover, which were not estimated here but will be a focus of future work. In addition, future work should focus on understanding root and fungal dynamics across hummock and hollow microtopography (Asemaninejad et al., 2017; Fernandez et al., 2019; Malhotra et al., 2020), as well as on linking minirhizotron observations and molecular techniques to fungal ecological strategies (i.e., fungal guilds) to better understand guild-specific responses to climate change.
| CONCLUSIONS Climate change is rapidly modifying the boreal biome; yet, whether fungi will hinder plant resilience or provide a buffer against changes in temperature and atmospheric CO2 concentrations remains to be answered. Tackling this question is crucial in order to quantify the future contribution of boreal peatlands to global climate regulation.
The development of new minirhizotron technology in recent years enables direct observation of narrow-diameter extra-radical fungal hyphae at unprecedented resolution. Here we used this technology to peer belowground in a boreal, forested bog that hosts one of the world's largest peatland warming experiments. Despite the lack of replication in our study, high-resolution minirhizotrons highlighted striking differences in the abundance and phenology of plant fine roots and fungal mycelium between the two extremes of an experimental temperature gradient. We showed that warmer and drier environmental conditions were associated with shifts in the abundance of vascular plant fine roots, alterations in the morphology of mycorrhizal and saprotrophic fungal mycelium, and a lengthening of the belowground active season. These changes represent a loss of fungal functional diversity and may reduce peat carbon accumulation. Protecting boreal peatland belowground biodiversity may be crucial for actions aimed at mitigating climate change. ACKNOWLEDGEMENTS We thank Deanne Brice, Kristen Holbrook, Les Hook, Nathan Thorp, and Jordan Woodward for image processing and data availability.
Transcriptome Analysis of Meloidogyne javanica and the Role of a C-Type Lectin in Parasitism
Meloidogyne javanica is one of the most widespread and economically important sedentary endoparasites. In this study, a comparative transcriptome analysis of M. javanica between pre-parasitic second-stage juveniles (Pre-J2) and parasitic juveniles (Par-J3/J4) was conducted. A total of 48,698 unigenes were obtained, of which 18,826 genes showed significant differences in expression (p < 0.05). Among the differentially expressed genes (DEGs) between Par-J3/J4 and Pre-J2, a large number of unigenes were annotated to the C-type lectin (CTL, Mg01965), the cathepsin L-like protease (Mi-cpl-1), the venom allergen-like protein (Mi-mps-1), Map-1 and the cellulase (endo-β-1,4-glucanase). Among the seven types of lectins found in the DEGs, there were 10 CTLs. The regulatory roles of Mj-CTL-1, Mj-CTL-2 and Mj-CTL-3 in the plant immune responses involved in the parasitism of M. javanica were investigated. The results revealed that Mj-CTL-2 could suppress the programmed cell death (PCD) triggered by Gpa2/RBP-1 and inhibit the flg22-stimulated ROS burst. In situ hybridization and developmental expression analyses showed that Mj-CTL-2 was specifically expressed in the subventral gland of M. javanica, and its expression was up-regulated at the Pre-J2 stage of the nematode. In addition, in planta silencing of Mj-CTL-2 substantially increased plant resistance to M. javanica. Moreover, yeast co-transformation and a bimolecular fluorescence complementation assay showed that Mj-CTL-2 specifically interacted with the Solanum lycopersicum catalase SlCAT2. It was demonstrated that M. javanica can suppress the innate immunity of plants through the peroxide system, thereby promoting parasitism.
Introduction
Root-knot nematodes (RKNs), known as obligate biotrophic plant parasites, are among the main plant-parasitic nematodes, infecting more than 5000 plant species [1]. They attack the roots of various plants and absorb essential nutrients from highly specialized, multinucleate feeding cells. The genus of RKNs is dominated by Meloidogyne incognita, M. arenaria, M. hapla and M. javanica, causing crop losses amounting to hundreds of billions of US dollars each year [2]. M. javanica is widely distributed in southern China, affecting many economically important crops, ornamental plants and fruit trees [3].
With the rapid development of biotechnologies, the whole-genome sequencing of M. incognita and M. hapla was completed in 2008 [4,5], which facilitates the study of the molecular functions of effectors in RKNs. Transcriptome and genome sequencing will provide more information for elucidating gene functions and insights into the interaction between nematodes and hosts. There have been relevant transcriptome studies on M. enterolobii, M. graminicola and M. incognita [6], and in this study the comparative transcriptome sequencing data of M. javanica between pre-parasitic second-stage juveniles (Pre-J2) and 14-day post-infection (dpi) parasitic juveniles (Par-J3/J4) were obtained for the first time. Additionally, among the obtained differentially expressed genes (DEGs), 10 C-type lectins (CTLs) were selected as candidates for further investigation.
Lectins, present in plants, animals and microorganisms, are able to bind specifically to monosaccharides or oligosaccharides, which enables them to bind soluble carbohydrates or the sugar residues of glycoproteins, thus triggering a series of downstream cascade reactions [7]. Lectins can sense the host or suppress host immunity to promote their own parasitism. Recognition between pathogens and a host is a prerequisite for successful infection [8]. This process is affected by many factors, but lectins on the surface of pathogens and glycans on the surface of host cells have been proven to play a critical role [9]. The first characterized lectin was isolated from the seeds of the castor bean (Ricinus communis) and took the name ricin [10]. A hepatic lectin discovered in rabbit liver in 1974 may be the first lectin of mammalian origin [10].
Several types of lectin domains are present in nematodes, including the GH18 domain of class V chitinases, the hevein domain, legume lectin-like domain, LysM domain, F-type lectin domain, Ricin toxin B chain (RTB) lectin domain, calreticulin domain, M-type lectin domain, CTL domain (CTLD) and galectin domain, of which the CTLD and the galectin domain are the most studied [10]. A CTL is a Ca2+-dependent glycan-binding protein (GBP) that shares primary and secondary structural homology in its carbohydrate-recognition domain (CRD). The CRD of CTLs is generally regarded as the CTLD, representing a ligand-binding motif that binds to sugars, proteins, lipids and even inorganic ligands [11]. One of the most striking characteristics of CTLs is the "WIGL" motif, which is highly conserved. CTLs perform functions in cell adhesion, glycoprotein clearance and innate immunity [10,12]. According to previous studies, CTLs are widely present in free-living nematodes, animal-parasitic nematodes and plant-parasitic nematodes; however, the numbers of CTLs are relatively larger in free-living nematodes than in plant- and animal-parasitic nematodes. The gene SCN1018, cloned from Heterodera glycines, encodes a CTL [13], and soaking H. glycines Pre-J2s in double-stranded RNA of SCN1018 to induce gene knockdown via RNA interference reduces the number of parasitic nematodes in host plants [14]. Eleven CTLs were identified in Rotylenchulus reniformis [15]. The CTL Mg01965 from M. graminicola can suppress plant defense and promote nematode parasitism [12]. MiCTL1a from M. incognita can suppress the flg22-stimulated reactive oxygen species (ROS) burst through interactions with catalases in Arabidopsis thaliana [16].
In this study, a comparative transcriptome analysis between Pre-J2 and Par-J3/J4 of M. javanica was conducted, through which a CTL was identified from the obtained list of DEGs and its expression pattern was elucidated. The results confirmed that Mj-CTL-2 can inhibit plant immune responses and plays a role in nematode parasitism via an interaction with the Solanum lycopersicum catalase SlCAT2.
Overview of M. javanica Transcriptome Sequencing, Assembly and Annotation
The Illumina HiSeq 2000 was utilized for transcriptome sequencing of M. javanica at Pre-J2 and Par-J3/J4. The sequencing yielded 55,759,032 reads from the Par-J3/J4 library and 92,886,548 reads from the Pre-J2 library, with average lengths of 257 bp and 261 bp, respectively (Table S1). There were 34,047 and 45,352 unigenes in the transcriptomes at Pre-J2 and Par-J3/J4, respectively, and a total of 48,698 unigenes were finally obtained after mixed splicing.
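The library summaries above (and the Q20/Q30/GC statistics computed on the clean data; see the Methods) reduce to simple per-base arithmetic over the FASTQ files. A minimal Python sketch, assuming Phred+33 quality encoding; the filename is hypothetical:

```python
# Minimal sketch: Q20/Q30 and GC content for one FASTQ library.
# Assumes Phred+33 quality encoding; the filename is hypothetical.
def fastq_stats(path):
    total = q20 = q30 = gc = at = 0
    with open(path) as fh:
        for i, line in enumerate(fh):
            line = line.rstrip("\n")
            if i % 4 == 1:                      # sequence line of the record
                gc += sum(line.count(b) for b in "GCgc")
                at += sum(line.count(b) for b in "ATat")
            elif i % 4 == 3:                    # quality line of the record
                for ch in line:
                    q = ord(ch) - 33            # Phred+33 decoding
                    total += 1
                    q20 += q >= 20
                    q30 += q >= 30
    # GC fraction is taken over unambiguous bases only (N excluded).
    return {"Q20": q20 / total, "Q30": q30 / total, "GC": gc / (gc + at)}

print(fastq_stats("pre_J2_clean_R1.fastq"))     # e.g. {'Q20': 0.97, 'Q30': 0.93, 'GC': 0.42}
```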
Identification and Verification of DEGs in the Transcriptome
In total, 18,826 genes exhibited significant differences in expression between Par-J3/J4 and Pre-J2 (|log2FoldChange| > 1, padj < 0.05), including 10,552 down-regulated DEGs and 8274 up-regulated DEGs in Par-J3/J4 (Figure 2a,b). From the transcriptome DEGs of M. javanica, eight unigenes were selected for RT-qPCR to evaluate the accuracy of the transcriptome data. Four of these had high FPKM values at Par-J3/J4, while the other four had high FPKM values at Pre-J2. The relative expression at Par-J3/J4 was calculated with the relative expression at Pre-J2 assigned as 1 (Figure 2c). The trend of the relative expression is consistent with the trend of the FPKM values.
Furthermore, the DEGs were classified according to GO and KEGG terms. In detail, the 10,552 down-regulated DEGs were enriched in 30 GO terms covering molecular function (MF, 9251), cellular component (CC, 1217) and biological process (BP, 8426). Within the 30 GO terms, the DEGs mainly corresponded to substrate-specific channel activity in MF, the troponin complex in CC and G-protein receptor signaling pathways in BP (Figure 3a). Moreover, the 10,552 down-regulated DEGs were classified into 20 KEGG pathways, showing close associations with metabolism, longevity, proliferation and calcium signaling pathways (Figure 3b). In a similar way, the 8274 up-regulated DEGs were enriched in 30 GO terms, and the largest numbers of unigenes were related to structural molecule activity in MF, the ribosome in CC and the organonitrogen compound biosynthetic process in BP (Figure 3c). In addition, the 8274 up-regulated DEGs were classified into 20 KEGG pathways (Figure 3d) and showed close associations with the ribosome, oxidative phosphorylation, DNA replication and fatty acid metabolism.
Screening of Lectins and Sequence Alignment of the Mj-CTL Genes
DEGs with homologies to lectin domains were screened and studied [10]. There were 53 lectin proteins comprising seven types of lectin domains: the CTLD (10), galectin domain (27), calreticulin domain (1), legume lectin-like domain (3), LysM domain (3), RTB lectin domain (6) and the GH18 domain of class V chitinase (3) (Table S2). There are 10 CTLs in M. javanica, 48 in M. graminicola and 33 in Hirschmanniella oryzae, respectively, but none were found in P. thornei. In addition, there are 27 galectins, accounting for the majority, three chitinases and one calreticulin in M. javanica, and these could also be detected in another six species of plant-parasitic nematodes. Six RTB lectins could be detected in M. javanica, but not in the soybean cyst nematode, and legume lectins have only been found in M. javanica, H. oryzae and H. avenae.
Among the 10 CTL genes of M. javanica, 4 were highly expressed at Pre-J2, and 3 had N-terminal signal peptides; these were named Mj-CTL-1, Mj-CTL-2 and Mj-CTL-3. The amino acid sequences of the Mj-CTLs have four conserved "C" (cysteine) sites, one conserved "WIGL" domain and one conserved "WND" motif (Figure 4).
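The screen just described — homology to a lectin domain plus the conserved cysteines and the "WIGL" and "WND" motifs — can be approximated by a crude sequence filter. A minimal Python sketch; real domain calls would come from profile-based tools (e.g., Pfam/InterProScan searches for the CTLD), the WVGL variant allowed by the pattern is an assumption, and the example sequence is hypothetical:

```python
import re

# Crude pre-filter for CTL-like candidates: count cysteines and look for the
# conserved "WIGL" and "WND" motifs described above. This only illustrates
# the screening idea; it is not a substitute for profile-based domain calls.
def looks_like_ctld(protein_seq):
    seq = protein_seq.upper()
    has_wigl = re.search(r"W[IV]GL", seq) is not None  # WIGL motif (WVGL allowed, an assumption)
    has_wnd = "WND" in seq                             # Ca2+-binding WND motif
    enough_cys = seq.count("C") >= 4                   # four conserved cysteines
    return has_wigl and has_wnd and enough_cys

candidate = "MKLLVLLCA...WIGLHDP...CEC...WNDGTC"        # hypothetical unigene translation
print(looks_like_ctld(candidate))                      # True
```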
Suppression of Gpa2/RBP-1-Induced Cell Death by Mj-CTL-2
Previous studies have confirmed that CTLs from plant-parasitic nematodes can promote parasitism by suppressing plant defenses [12,16], so the possible role of the CTLs of M. javanica in the suppression of host defense was investigated herein. The cell death inhibition experiment revealed that Mj-CTL-2 exerted a striking inhibitory effect on cell necrosis (cell necrosis rate of 18%), while Mj-CTL-1 and Mj-CTL-3 did not markedly suppress cell necrosis (cell necrosis rates of 67% and 60%, respectively) (Figure 5b).
Spatial and Developmental Expression Analysis of Mj-CTL-2
As Mj-CTL-2 can inhibit Gpa2/RBP-1-induced tobacco cell death, it was selected for further experiments. The tissue localization of Mj-CTL-2 was determined by in situ hybridization. In the negative control treatment, no signal was observed with the sense RNA probes, while a strong signal was observed within the subventral gland cells at Pre-J2 after hybridization with the digoxigenin (DIG)-labeled antisense RNA probes (Figure 6a).
With β-actin as the reference gene, an RT-qPCR assay was performed to examine the expression of Mj-CTL-2 at seven developmental time points of M. javanica. Using the expression level at 10 dpi as reference, the time-specific fold change in the expression of Mj-CTL-2 is shown in Figure 6b. The highest expression level of Mj-CTL-2 appeared at Pre-J2, after which its expression gradually declined.
Inhibitory Effect of Mj-CTL-2 on the ROS Burst
Plants and pathogens have evolved an antagonistic arms race. The plant immune system is activated when pathogens attack the host, a key feature being the burst of reactive oxygen species (ROS) [17]; however, pathogens usually secrete effectors to suppress the ROS burst, sustaining the battle between plant and pathogen [18]. pES vectors expressing Mj-CTL-2 and enhanced GFP (eGFP) were introduced into tobacco leaves through agroinfiltration. At 2 days after infiltration, leaf discs were collected and exposed to flg22. The results revealed that over-expression of Mj-CTL-2 in the plants reduced flg22-induced ROS production in comparison with the negative control eGFP (Figure 7).
Attenuating Effect of in Planta RNA Interference (RNAi) of Mj-CTL-2 on Nematode Parasitism
In order to confirm the effect of Mj-CTL-2 on nematode parasitism, tobacco rattle virus (TRV)-mediated gene silencing was performed to silence the target gene during infection. With the nematode β-actin gene as a reference gene, RT-qPCR analysis showed that the transcript level of Mj-CTL-2 in nematodes at 5 dpi infesting the pTRV2-Mj-CTL-2-infiltrated plants displayed a drastic reduction compared with that in the control plants (Figure 8a), demonstrating effective gene silencing mediated through in planta RNAi. The other Mj-CTL isoforms were used to verify the specificity of the Mj-CTL-2-targeting RNAi by qRT-PCR analysis; the transcriptional expression of these isoforms was not affected by the Mj-CTL-2-targeting RNAi treatment (Figure 8a). The pTRV2-Mj-CTL-2-infiltrated tomato plants had 38% fewer female nematodes than the vector pTRV-infiltrated control plants at 30 dpi (Figure 8b). These findings suggest that Mj-CTL-2 plays a role in nematode parasitism.
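Both the developmental expression profile and the silencing efficiencies above are RT-qPCR readouts computed with the 2^-ΔΔCt method (the target Ct normalized to β-actin, then to a reference condition such as 10 dpi or the pTRV2 control). A minimal sketch, with hypothetical Ct values:

```python
# Minimal 2^-ΔΔCt sketch for the RT-qPCR readouts above.
# All Ct values below are hypothetical.
def fold_change(ct_target, ct_actin, ct_target_ref, ct_actin_ref):
    d_ct = ct_target - ct_actin                 # ΔCt, sample of interest
    d_ct_ref = ct_target_ref - ct_actin_ref     # ΔCt, reference condition
    return 2 ** -(d_ct - d_ct_ref)              # 2^-ΔΔCt

# e.g., Mj-CTL-2 in nematodes from pTRV2-Mj-CTL-2 plants vs. pTRV2 controls:
print(fold_change(ct_target=26.1, ct_actin=18.0,
                  ct_target_ref=23.9, ct_actin_ref=18.1))  # ~0.20, i.e., silenced
```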
Interactions between Mj-CTL-2 and the Solanum lycopersicum Catalase SlCAT2
Previous research has shown that MiCTL1a is able to interact with the Arabidopsis catalase AtCAT3, and co-expression of GFP-MiCTL1a and an mCherry-tagged fusion of AtCAT3 in Nicotiana benthamiana (N. benthamiana) cells has shown that MiCTL1a and AtCAT3 are co-localized mainly in the plasma membrane and seldom in the peroxisome [16]. Therefore, in this study, three catalases homologous to AtCAT3 were obtained from tomato by sequence alignment (Figure S4). The interactions between the three candidate proteins SlCAT1, SlCAT2 and SlCAT3 and Mj-CTL-2 were then examined using the yeast two-hybrid (Y2H) co-transformation assay. The results showed that yeast co-expressing Mj-CTL-2 and SlCAT2 grew on a quadruple dropout medium lacking adenine, histidine, leucine and tryptophan (Figure 9a).
The transient expression experiment in tobacco illustrated that the fluorescence signal of Mj-CTL-2∆sp (without the signal peptide) was concentrated intracellularly (Figure 9b). Furthermore, bimolecular fluorescence complementation (BiFC) was performed to confirm the interaction between Mj-CTL-2 and SlCAT2 in tobacco leaves, with pES-YFPN + pES-YFPC as the empty vector serving as the negative control. No fluorescence signal could be observed in the negative control, but green fluorescence signals could be observed intracellularly and around the tobacco cells in the Mj-CTL-2∆sp + SlCAT2 treatment (Figure 9b), indicating that Mj-CTL-2 can interact with SlCAT2 intracellularly.
Discussion
Transcriptome sequencing has been widely used to search for effectors in plant-parasitic nematodes. For example, transcriptome sequencing of Aphelenchoides besseyi at mixed ages was performed in 2014, yielding 13 putative effectors specific to A. besseyi [19]. In 2016, three potential effector proteins were identified from the transcriptome data of M. enterolobii, which may inhibit the plant immune response to promote the pathogenicity of nematodes [6]. In this study, the transcriptomes of M. javanica at Par-J3/J4 and Pre-J2 were sequenced for the first time, yielding 34,047 and 45,352 unigenes, respectively. This is also the first study in which the transcriptome of M. javanica has been compared between the pre- and post-infection periods. For other plant-parasitic nematodes, some comparative transcriptome data at different developmental stages have been obtained. In the study of Huang et al., 11,443 DEGs were obtained among eggs, juveniles, females and males of Radopholus similis. Among these, 2613 were up-regulated in juveniles and mainly related to immunity, digestion and infection, while 3546 were down-regulated, showing associations with metabolism, growth, proliferation, transcription and protein synthesis [20]. Zhou et al. investigated RKN invasion and development in rice roots through RNA-seq transcriptome analysis [21]. They found that 952 and 647 genes were differentially expressed at 6 dpi (invasion stage) and 18 dpi (development stage), respectively, and gene annotation showed that the DEGs fell into diverse metabolic and stress response categories.
Herein, the comparative transcriptome data of M. javanica between Par-J3/J4 and Pre-J2 revealed that 18,826 unigenes displayed significantly different expression (p < 0.05), among which 10,552 were down-regulated and 8274 were up-regulated at Par-J3/J4. The down-regulated unigenes were mainly related to metabolism, longevity, proliferation and calcium signaling, processes associated with aging [22]. There is ample evidence that the dysregulation of calcium signaling is one of the key events in neurodegenerative processes [23]. The up-regulated unigenes were mainly associated with ribosomes, oxidative phosphorylation, DNA replication and fatty acid metabolism. The changes in the expression levels of these genes are in line with the developmental progression of M. javanica from Pre-J2 to Par-J3/J4.
Among the DEGs of M. javanica, 53 lectin genes with seven types of lectin domains were filtered from the transcriptome data, including the CTLD (10), galectin domain (27), calreticulin domain (1), legume lectin-like domain (3), LysM domain (3), RTB lectin domain (6) and the GH18 domain of class V chitinase (3). The hepatic asialoglycoprotein receptor (ASGPR) was the first identified animal CTL, and since then more than 1000 CTLs have been identified [24]. A large number of CTL genes have been identified in free-living nematodes, but few in parasitic nematodes. Loukas et al. found that CTLs secreted by animal-parasitic nematodes play a key role in host immunity [25]. Harcus et al. found three CTL genes, namely Hp-CTL-1, Nb-CTL-1 and Nb-CTL-2, in Heligmosomoides polygyrus and Nippostrongylus brasiliensis [26]. In M. chitwoodi, a CTL gene is secreted in the subventral esophageal gland of the nematode [27]. Mg01965, a CTL from M. graminicola, inhibits plant defense and promotes the parasitism of nematodes [12]. MiCTL1a, a CTL from M. incognita, interacts with A. thaliana catalases to inhibit the flg22-stimulated ROS burst [16]. In this study, the Mj-CTLs had four conserved "C" (cysteine) sites, one conserved "WIGL" domain and one conserved "WND" motif. They contain an N-terminal signal peptide but no transmembrane domain. One of the best-known characteristics of CTLDs is the "WIGL" motif, which is highly conserved and involved in the formation of hydrophobic cores in the tertiary structure of the CTL fold [10]. Moreover, the conserved "WND" motif used for calcium binding is also present in the Mj-CTLs [28]. CTLs are produced as transmembrane proteins or secreted as soluble proteins during parasitism [28]. The N-terminal signal peptide of the Mj-CTLs usually aids the translocation of the proteins to the endoplasmic reticulum and their secretion into host plants [12].
The subventral glands produce secreted effectors of plant-parasitic nematodes (PPNs) that are active during nematode penetration and at the early infection stages in roots [29]. According to previous studies, four effector proteins containing CTLDs, including two CTL-like proteins from M. chitwoodi, Mg01965 from M. graminicola and MiCTL1 from M. incognita, are expressed in the subventral glands of RKNs at Pre-J2 [12,16,25]. The results of in situ hybridization showed that Mj-CTL-2 was localized in the subventral esophageal gland cells, and its expression level was highest at Pre-J2. It has previously been demonstrated that Mg01965 and MiCTL1a secreted from nematodes into the apoplast suppress the ROS burst in planta [12,16]. In this study, Mj-CTL-2 had significant inhibitory effects on Gpa2/RBP-1-induced cell death and the flg22-stimulated ROS burst. Moreover, MiCTL1a from M. incognita has been shown to interact with the A. thaliana catalases AtCAT1, AtCAT2 and AtCAT3 [16]. Herein, yeast co-transformation and BiFC assays showed that Mj-CTL-2 only interacted with SlCAT2 intracellularly, probably because the conservation of the CAT proteins in S. lycopersicum is lower than that in A. thaliana (Figure S4). Mj-CTL-2 was silenced using an in planta RNAi assay, which led to significantly fewer nematodes in the roots compared to plants infiltrated with the pTRV vector (Figure 8b). This result, combined with the finding that the expression of Mj-CTL-2 was highest at Pre-J2, indicates that Mj-CTL-2 indeed plays an important role in the early interaction between M. javanica and host plants.
In summary, 48,698 unigenes were obtained from the comparative transcriptomic analysis of M. javanica between Pre-J2 and Par-J3/J4. Mj-CTL-2 is a novel CTLD-containing effector obtained from the transcriptome data. It is proposed that Mj-CTL-2 is released at the early infection stages of M. javanica and interacts with SlCAT2 to affect host defense responses by disturbing the balance of the peroxide system. CTLs may play a similar role in different plant-parasitic nematodes. The transcriptional data provide details on the DEGs between Pre-J2 and Par-J3/J4. In the future, more effectors of the parasitic stage of M. javanica can be explored, as few key effectors are known for this nematode, and this will help to illustrate its molecular pathogenesis.
Materials and Methods
Nematode Culture
M. javanica individuals were inoculated on the roots of the Xiahong No. 1 tomato cultivar at 25 °C with a photoperiod of 16 h light/8 h dark in the greenhouse at the Zengcheng campus teaching and research base of South China Agricultural University, as described previously [30]. Freshly hatched Pre-J2s were collected from eggs picked from the tomato roots at 30 days after inoculation. Par-J3/J4s were dissected from the roots at 18 days after inoculation. The nematodes were then stored in 1.5 mL centrifuge tubes at −80 °C for further use.
RNA Extraction and Library Preparation
The RNA was extracted from approximately 20,000 Pre-J2s and 3000 Par-J3/J4s using the RNAprep pure Tissue Kit (Tiangen, Beijing, China). Sequencing libraries were constructed using the NEBNext Ultra RNA Library Prep Kit (NEB). Library fragments were purified with the AMPure XP system (Beckman Coulter, Shanghai, China) to select complementary DNA (cDNA) fragments with a preferential length of 250-300 bp. Index-coded samples were clustered on a cBot Cluster Generation System using the TruSeq PE Cluster Kit v3-cBot-HS (Illumina, San Diego, CA, USA), and 125 bp/150 bp paired-end reads were generated on the Illumina HiSeq platform.
Reads Mapping to the Reference Genome and Functional Classification
Clean reads were obtained by removing reads containing adapters, reads containing poly-N and low-quality reads from the raw data. Meanwhile, the Q20, Q30 and GC contents of the clean data were calculated. All downstream analyses were based on high-quality clean data. The reference genome and gene model annotation files were downloaded directly from the WormBase ParaSite genome database (https://parasite.wormbase.org/Meloidogyne_incognita_prjeb8714 (accessed on 28 May 2019)). The index of the reference genome was built using Hisat2 v2.0.5, and paired-end clean reads were aligned to the reference genome using Hisat2 v2.0.5. For function annotation, Gene Ontology (GO) analysis was performed with Blast2GO, and WEGO (https://wego.genomics.cn/ (accessed on 1 November 2018)) was used for GO term classification. Additionally, KEGG pathways were automatically generated by the KEGG Automatic Annotation Server with the BBH method (https://www.genome.jp/tools/kaas/ (accessed on 3 April 2015)).
Quantification of Gene Expression Levels and Differential Expression Analysis
featureCounts v1.5.0-p3 was employed to count the reads mapping to each gene, and the FPKM of each gene was then calculated based on the length of the gene and the count of reads mapped to that gene. Differential expression analysis of the two conditions/groups was performed using the DESeq2 R package (1.16.1). DESeq2 provides statistical routines for determining differential expression in digital gene expression data using a model based on the negative binomial distribution. The resulting p-values were adjusted using the Benjamini and Hochberg approach to control the false discovery rate. Genes with |log2FoldChange| > 1 and padj < 0.05 found by DESeq2 were assigned as differentially expressed.
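A minimal sketch of the quantification arithmetic described in this section: FPKM from raw counts and gene lengths, Benjamini-Hochberg adjustment, and the |log2FoldChange| > 1, padj < 0.05 call. All numbers are hypothetical, and in the actual pipeline the fold changes and p-values come from DESeq2's negative binomial model rather than from FPKM ratios:

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

# Hypothetical toy data: three genes, two libraries (Pre-J2, Par-J3/J4).
counts = np.array([[900, 30], [120, 110], [5, 400]])   # genes x libraries
lengths_kb = np.array([1.5, 2.0, 0.8])                  # gene lengths in kb

# FPKM = reads / (gene length in kb * library size in millions)
lib_millions = counts.sum(axis=0) / 1e6
fpkm = counts / lengths_kb[:, None] / lib_millions[None, :]

# Fold change on FPKM for illustration only (DESeq2 models raw counts).
log2fc = np.log2((fpkm[:, 1] + 1e-9) / (fpkm[:, 0] + 1e-9))
pvals = np.array([1e-6, 0.2, 1e-4])                     # hypothetical per-gene p-values
reject, padj, _, _ = multipletests(pvals, method="fdr_bh")  # Benjamini-Hochberg

deg = (np.abs(log2fc) > 1) & (padj < 0.05)              # the DEG call used in the paper
print(deg)                                               # [ True False  True]
```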
Validation of DEGs by qPCR
Eight DEGs were selected for RT-qPCR to evaluate the accuracy of the transcriptome data obtained by Illumina sequencing. The cDNA of the two life stages was reverse transcribed using One-Step gDNA Removal and cDNA Synthesis SuperMix (TransGen Biotech, Beijing, China). Then, qPCR was performed using Two-Step RT-PCR SuperMix (TransGen Biotech) and a Takara qPCR instrument. With Mj-actin as a reference gene, the relative fold change was calculated using the 2^-ΔΔCt method [31]. Each experiment was conducted in triplicate, with three biological replicates each.
In Situ Hybridization
In situ hybridization was performed as previously described by De Boer et al. [32]. First, specific 200-300 bp templates were amplified from Mj-CTL-2, and DIG-labeled sense and antisense RNA probes were then generated using the T7 enzyme (Promega, Madison, WI, USA). The nematode sections were hybridized and examined under an ECLIPSE Ni microscope (Nikon, Tokyo, Japan). The primers used in this study are listed in Table S3.
Developmental Expression Analysis
An RNAprep micro kit (Tiangen, Beijing, China) was utilized to extract RNA from M. javanica at different developmental time points: Pre-J2, 2 dpi and 5 dpi (Par-J2), parasitic third-stage juveniles at 10 dpi and 14 dpi (Par-J3), parasitic fourth-stage juveniles at 18 dpi (Par-J4), and adult female nematodes at 30 dpi. Approximately 100 nematodes at the Pre-J2, Par-J3, Par-J4 and female stages were collected for RNA extraction. For the Par-J2 stage, 100 nematodes were inoculated onto tomato roots, and the infected roots were collected for RNA extraction. Next, the cDNA of the seven life stages was reverse transcribed using One-Step gDNA Removal and cDNA Synthesis SuperMix (TransGen Biotech), followed by qPCR using Two-Step RT-PCR SuperMix (TransGen Biotech) and a Takara qPCR instrument. With Mj-actin as a reference gene, the relative transcript abundance was calculated using the 2^-ΔΔCt method [29]. Each experiment was conducted in triplicate, with three biological replicates each.
Inhibitory Effect of the M. javanica CTLs on Gpa2/RBP-1-Induced Tobacco Cell Death
Nicotiana benthamiana plants were grown at 25 °C for 4 weeks in a greenhouse with a 16 h light/8 h dark cycle. Then, the Mj-CTL-1, Mj-CTL-2 and Mj-CTL-3 fragments without signal peptides were cloned into the pES vector with an eGFP tag fused at the C-terminus to generate pES:CTL. The constructs pCAMBIA1305:Gpa2 and pCAMBIA1305:Rbp-1:HA were kept in the laboratory, and experiments were conducted in line with the procedures elaborated by Chen et al. [33]. Briefly, the plasmids pES:CTL, pCAMBIA1305:Gpa2 and pCAMBIA1305:Rbp-1:HA were introduced into the Agrobacterium tumefaciens strain EHA105. The transformed bacteria were cultured and suspended in a buffer containing 10 mM 2-(N-morpholino)ethanesulfonic acid (MES) (pH 5.5) and 200 µM acetosyringone at a final optical density at 600 nm (OD600) of 0.3, then infiltrated into tobacco leaves with MES alone as the control. Symptoms were photographed 3 days after the last infiltration, and the degree of suppression of cell death was analyzed by the necrosis rate.
Detection of the ROS Burst
To detect ROS generation after flg22 treatment, Mj-CTL-2 fragments without the signal peptide were cloned into the pES vector with an eGFP tag fused at the C-terminus to generate pES:Mj-CTL-2∆SP. The recombinant plasmids were transformed into the A. tumefaciens strain GV3101. N. benthamiana leaves at 4 weeks old were infiltrated with A. tumefaciens carrying the recombinant plasmid. After 48 h of infiltration, leaf discs were collected and put into 96-well plates (Costar 96-well white flat-bottom polystyrene) containing 200 µL of double-distilled water (ddH2O). A luminol-based assay was used to detect ROS. After 16 h, the water was removed; 100 µL of ddH2O containing 34 µg luminol (Sigma, St. Louis, MO, USA), 20 µg horseradish peroxidase (Sigma) and 100 nM flg22 was added, and the data were read using a Tecan Infinite 200 Pro plate reader [34].
In Planta RNAi
A fragment of about 300 bp of Mj-CTL-2 was amplified by PCR, digested using XbaI and SacI, and cloned into the pTRV2 vector digested with the same enzymes to generate pTRV2-CTL-2. The vector pTRV2 and the pTRV2-derived pTRV2-CTL-2 were transformed into the A. tumefaciens strain EHA105, respectively. Tomato plants were then infected with EHA105 carrying the corresponding constructs, following the procedures previously described [35]. The coat protein gene of TRV was used to verify whether the virus successfully invaded the cells, and it was detected using the primer pair TRVcpF/TRVcpR at 14 dpi [36]. After 21 days, 200 freshly hatched nematodes were inoculated onto the roots of plants expressing TRV. At 5 dpi, RNA was extracted from the plant roots, and RT-qPCR was performed to detect the silencing efficiency of the target gene in the nematodes. Independent RT-qPCR experiments were performed three times. After 30 days, the roots were taken out and stained with sodium hypochlorite and acid fuchsin, and the nematodes in the roots were counted.
Y2H Co-Transformation and BiFC Assays
The coding sequence of Mj-CTL-2 without the signal peptide was cloned into the pGBKT7 vector to generate pGBKT7:Mj-CTL-2∆SP as the bait. The open reading frames (ORFs) of SlCAT1, SlCAT2 and SlCAT3 were inserted into the pGADT7 vector to generate pGADT7:SlCAT1, pGADT7:SlCAT2 and pGADT7:SlCAT3, respectively, as the preys. The bait and prey plasmids were then co-transformed into the yeast strain Y2HGold, and the transformants were grown on SD/-Leu/-Trp selection medium. Finally, positive clones were selected and cultured on SD/-Leu/-Trp/-His/-Ade medium [37].
Supplementary Materials: The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/plants13050730/s1. Figure S1: GO classification of unigenes in the transcriptome of Meloidogyne javanica; Figure S2: Distribution of the KEGG pathways of unigenes in the transcriptome of Meloidogyne javanica; Figure S3: Multiple sequence alignment of Solanum lycopersicum catalases; Table S1: Summary of M. javanica transcriptome sequencing; Table S2: The number and types of lectin genes in various plant-parasitic nematodes; Table S3: The primers used in this paper.
Figure 2. The analysis and verification of DEGs between Par-J3/J4 (A) and Pre-J2 (B). (a) Volcano map of DEGs in the transcriptomes at Par-J3/J4 compared to Pre-J2. Red dots indicate up-regulated DEGs.
Figure 3. The top 30 enriched GO terms of the down-regulated and up-regulated DEGs, respectively (a,c). Twenty KEGG pathways of the down-regulated and up-regulated DEGs, respectively (b,d).
Figure 4. Amino acid alignment of CTLs from M. javanica, M. graminicola and M. incognita. The red boxes indicate the positions of the conserved amino acids.
Figure 6. Expression patterns of Mj-CTL-2 in M. javanica. (a) In situ hybridization of Mj-CTL-2. Fixed nematodes are hybridized with sense and antisense mRNA probes from Mj-CTL-2. SvG: subventral esophageal gland cells; M: metacorpus. Bars, 100 µm. (b) The expression pattern of Mj-CTL-2. With β-actin as the reference gene, the stage-specific expression of Mj-CTL-2 is detected by qRT-PCR at seven different life stages of M. javanica. The x-axis represents the seven stages, while the y-axis represents relative expression. The fold change values were calculated using the 2^-ΔΔCt method and presented as the change in the mRNA level at various time points after inoculation. Each column represents the mean of three independent experiments with standard deviation.
Figure 8. Effect of in planta RNAi of Mj-CTL-2 on M. javanica parasitism. (a) The qRT-PCR assays of the expression levels of Mj-CTL-2 in M. javanica collected from non-infiltrated tomato plants and pTRV2- and pTRV2-Mj-CTL-2-agroinfiltrated plants. The expression levels of the Mj-CTL isoforms from M. javanica were quantified to determine the specificity of the RNAi. (b) The number of adult females per root. Different lowercase letters indicate statistically significant differences based on ANOVA with the Duncan post hoc test (p < 0.05). The experiment was performed with three biological repeats.
Figure 9. Mj-CTL-2 interacts with the Solanum lycopersicum catalase SlCAT2. (a) Yeast two-hybrid tests between Mj-CTL-2 and the SlCATs. Left column: growth of yeast cells carrying the baits (in the pGBKT7 vector) and preys (in pGADT7) on SD/-Leu/-Trp medium. Right column: yeast cell growth on the selective quadruple dropout medium SD/-Leu/-Trp/-Ade/-His. Yeast cells containing p53 and the SV40 large T-antigen are used as the positive control, and those containing pGBKT7 and pGADT7 are used as the negative control. (b) BiFC validation of the interaction between Mj-CTL-2 and SlCAT2 in N. benthamiana leaves. The fluorescence signal is detected at 48 h after infiltration. Images are captured by confocal microscopy. Bars, 20 µm.
Comparison of Body Composition Variables between Post-Bariatric Surgery Patients and Non-Operative Controls
Background: Since bariatric surgery results in massive weight loss, it may be associated with a disproportionate decrease in lean body mass. Objective: To evaluate body composition in post-bariatric surgery patients who had a successful weight loss at 12 months (>50% excess weight loss), with comparisons to healthy controls matched for age, sex and BMI. Methods: This is an observational analytic study using data from post-bariatric surgery patients who had laparoscopic Roux-en-Y gastric bypass (RYGB) or laparoscopic sleeve gastrectomy (SG) at King Chulalongkorn Memorial Hospital. Patients who had a percentage excess weight loss (%EWL) >50% and achieved a BMI of <30 kg/m2 within 12 months after the surgery were included. Non-operative healthy controls matched for sex, age and BMI (1:1) were recruited. The 12-month post-bariatric surgery BMI was used to match the BMI of the control subjects. A single bioelectrical impedance analysis (BIA) machine (InBody 770) was used for the entire study. Results: Sixty participants were included in this study: 30 post-bariatric surgery patients (female n = 19, male n = 11) and 30 non-operative controls (female n = 19, male n = 11). The 12-month post-bariatric surgery patients had a lower percentage of body fat (PBF) (30.6% vs 35.9%, P-value .001) and trunk fat mass (10.3 vs 12.4 kg, P-value .04) than the non-operative controls. The 12-month post-bariatric surgery patients were also found to have more soft lean mass (SLM) (47.7 vs 39.9 kg, P-value .001), fat free mass (FFM) (51.1 vs 42.3 kg, P-value .001), skeletal muscle mass (SMM) (27.5 vs 23 kg, P-value .003) and trunk lean mass (21.2 vs 19 kg, P-value .02). Conclusion: Despite the significant reductions in all body composition variables in post-bariatric surgery patients at the 12-month follow-up, both fat free mass and skeletal muscle mass were found to be higher in the surgical patients compared to the control group. Clinical trials: Thai Clinical Trials Registry, https://thaiclinicaltrials.org/ ID: TCTR20200223003
Introduction
Obesity is defined as abnormal or excessive fat accumulation that presents a risk to health. Obesity is also associated with the development of type 2 diabetes mellitus, cardiovascular disease, various types of cancer and other adverse pathological conditions. 1 Bariatric surgery is one of the treatment methods that can achieve long-term weight loss in individuals with severe obesity. 2 The criteria for consideration of bariatric surgery are a body mass index (BMI) of >40 kg/m2, or a BMI of >35 kg/m2 with comorbidities such as hypertension or dyslipidemia. Patients with pre-diabetes or diabetes may qualify with a BMI between 30 and 35 kg/m2. 3 The procedures that are applied in bariatric surgery fall under the general classifications of restrictive and malabsorptive procedures. Maximum weight loss usually occurs within 12 to 24 months following bariatric surgery, 4 mainly due to an increase in satiety and long-term hypophagia. Possible mechanisms include changes in taste, food preferences, gastric emptying rates, vagal signaling, gastrointestinal hormone activity, circulating bile acids and gut microbiota. 5,6 Lifelong vitamin and mineral replacement therapy is often required to prevent nutritional deficiencies after surgery, especially Roux-en-Y gastric bypass (RYGB) surgery.
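The eligibility thresholds quoted above are simple BMI arithmetic. A minimal sketch, with hypothetical patient values:

```python
# BMI and the surgical eligibility thresholds stated above:
# BMI > 40, or > 35 with a comorbidity, or 30-35 with (pre-)diabetes.
# All patient values below are hypothetical.
def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

def bariatric_candidate(weight_kg, height_m, comorbidity=False, diabetes=False):
    b = bmi(weight_kg, height_m)
    return b > 40 or (b > 35 and comorbidity) or (30 <= b <= 35 and diabetes)

print(round(bmi(110, 1.65), 1))                       # 40.4
print(bariatric_candidate(110, 1.65))                 # True (BMI > 40)
print(bariatric_candidate(95, 1.65, diabetes=True))   # True (BMI 34.9 with diabetes)
```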
In the management of patients who are obese, the aim is to reduce body fat content while preserving the lean component of body mass. 6 Because weight loss following bariatric surgery, particularly Roux-en-Y gastric bypass (RYGB), is much greater than weight loss with nonsurgical methods, it may be associated with a disproportionate decrease in lean body mass (LBM). 7 Research has reported that body composition changes after bariatric surgery, with both fat mass and fat free mass decreasing significantly. 8 A previous study showed no significant differences in body composition between sleeve gastrectomy (SG) and RYGB at 1 year after adjusting for differences in initial pre-operative BMI. 9 We were particularly interested in whether patients who had a successful weight loss at 1 year after bariatric surgery (>50 %EWL and a BMI of <30 kg/m2) would have altered body composition. Our study aimed to evaluate body composition in post-bariatric surgery patients who had a successful weight loss at 12 months, with comparisons to healthy controls matched for age, sex and BMI.
Methods
This is an observational analytic study using data from post-bariatric surgery patients who had laparoscopic Roux-en-Y gastric bypass (RYGB) or laparoscopic sleeve gastrectomy (SG) at King Chulalongkorn Memorial Hospital during the period of January 2015 to December 2019. Patients aged 18 to 65 years who had a percentage excess weight loss (%EWL) >50 and achieved a BMI of <30 kg/m2 within 12 months after surgery were included. A total of 795 patients underwent bariatric surgery during the period of January 2015 to December 2019, and 90 patients had successful weight loss at 12 months post-surgery (>50 %EWL and BMI <30 kg/m2). Thirty patients had complete data and were included in the study (Supplemental Figure 1).
Non-operative controls who were healthy (no history of heart/liver/kidney disease, endocrine disorder, cancer, or AIDS), had not participated in any previous weight loss interventions (weight loss program, medications, bariatric surgery), were weight stable (no change of >5% of body weight within 3 months) and were matched for sex, age and BMI (1:1) were recruited from healthcare personnel who had annual checkup data available. The 12-month post-bariatric surgery BMI was used to match the BMI of the control subjects. Participants taking medications that can affect body composition were excluded (diuretics, steroid-based, psychotropic and diabetic medications). Written informed consent was obtained from all participants. We used the GLIM criteria of the Global Leadership Initiative on Malnutrition 2018 to evaluate malnutrition.
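Inclusion required >50 %EWL together with a 12-month BMI below 30 kg/m2. A minimal sketch of the %EWL calculation follows; the paper does not state its reference for excess weight, so taking ideal weight at BMI 25 (a common convention) and all numbers below are assumptions:

```python
# Percentage excess weight loss (%EWL), with "ideal weight" assumed at BMI 25.
def pct_ewl(preop_kg, current_kg, height_m, ideal_bmi=25.0):
    ideal_kg = ideal_bmi * height_m ** 2        # assumed ideal-weight convention
    return 100 * (preop_kg - current_kg) / (preop_kg - ideal_kg)

preop, now, height = 120.0, 78.0, 1.65          # hypothetical patient
print(round(pct_ewl(preop, now, height), 1))    # 80.9 -> > 50 %EWL, included
print(round(now / height ** 2, 1))              # BMI 28.7 -> < 30 kg/m2, included
```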
In the post-bariatric surgery group, data on patient history, laboratory investigations and post-bariatric surgery body composition (baseline, 6 and 12 months after surgery) were collected from electronic medical records. In the control group, data on patient history and laboratory investigations were collected from electronic medical records, and body composition was measured during the study visit. Questionnaires on dietary intake and physical activity were collected for both groups. A 7-day dietary intake (protein intake and caloric intake) at baseline, 6 months and 12 months post-bariatric surgery was recorded in the medical records by a nutritionist at each time point. Physical activity questionnaires were obtained at 12 months post-bariatric surgery; the Thai version of the Short Format International Physical Activity Questionnaire (IPAQ-SF) was used for physical activity evaluation. A single bioelectrical impedance analysis (BIA) machine (InBody 770) was used for the entire study.
Statistical Analysis
Demographic data and clinical parameters were described for each group. Continuous variables are expressed as median (interquartile range: IQR). Differences in continuous and categorical variables between the post-bariatric surgery and control groups were analyzed using the Wilcoxon rank sum test and Chi-square test, respectively. The Wilcoxon signed rank test was used to compare body composition variables between the baseline and follow-up visits. All P-values reported are two-sided, and statistical significance was defined as P < .05. Stata version 15.1 (Stata Corp., College Station, Texas) was used for the analysis.
Results
Sixty participants were included in this study (30 post-bariatric surgery patients, 30 non-operative controls). Table 1 shows the baseline characteristics of the entire study population. Nineteen post-bariatric surgery patients were female (63.3%), with a median age of 41 years. There were no significant differences in sex, age, BMI, total protein intake, smoking, alcohol drinking, or physical activity between the 2 groups. Total daily caloric intake was significantly lower in the post-bariatric surgery group.
Table 2 shows the body composition variables of the post-bariatric surgery patients at the 12-month follow-up time point compared to the non-operative controls. Comparison between the two groups revealed that the 12-month post-bariatric surgery patients had a lower waist-hip ratio (WHR) (0.83 vs 0.9, P-value < .001), lower percentage of body fat (PBF) (30.6 vs 35.9%, P-value .001), less appendicular lean mass (ALM) (9 vs 16.9 kg, P-value < .001), less trunk fat mass (10.3 vs 12.4 kg, P-value .04) and a lower ALM/BMI (0.34 vs 0.63, P-value < .001). Post-bariatric patients also showed higher levels of soft lean mass (SLM) (47.7 vs 39.9 kg, P-value .001), fat free mass (FFM) (51.1 vs 42.3 kg, P-value .001), skeletal muscle mass (SMM) (27.5 vs 23 kg, P-value .003) and trunk lean mass (21.2 vs 19 kg, P-value .02). None of the 12-month post-bariatric surgery patients had a low body mass index or reduced muscle mass according to the recommended thresholds. Figure 1 shows box plots comparing the medians of the body composition variables between the 12-month post-bariatric surgery and control groups.
Table 3 shows the changes in body composition variables at 6 and 12 months compared to baseline in the post-bariatric surgery group. At the 6- and 12-month follow-ups, there were statistically significant decreases in the median levels of all body composition variables, including waist-hip ratio (WHR) (−0.13 and −0.16, P-value < .001) and soft lean mass (SLM) (−4.
Discussion
This study showed that the 12-month post-bariatric surgery patients had a lower waist-hip ratio (WHR), percentage of body fat (PBF), and trunk fat mass compared to controls. While no significant differences in total protein intake and physical activity were found between the 2 groups, soft lean mass (SLM), fat free mass (FFM), skeletal muscle mass (SMM), and trunk lean mass were statistically higher in the bariatric group. The 12-month post-bariatric surgery follow-up revealed a significant reduction in all body composition variables.

This study produced different results compared to previous studies regarding changes in body composition. Although body fat mass, fat free mass, and skeletal muscle mass were continuously lost during the 12-month follow-up period, the 12-month post-bariatric surgery patients still had higher fat free mass and skeletal muscle mass than the control group. In theory, bariatric surgery results in very rapid weight loss through multiple mechanisms, including restriction of the stomach area and decreased nutrient absorption, which can lead to malnutrition, decreased muscle mass, and sarcopenia. In our study, none of the post-bariatric patients had malnutrition or sarcopenia, as defined by the GLIM criteria of The Global Leadership Initiative on Malnutrition 2018.10

One key factor that may affect changes in body composition is daily protein intake. In the first phase of active weight loss after bariatric surgery, patients should consume at least 60 to 90 g of protein per day, or 1.2 to 1.5 g/kg/day,11 to prevent the breakdown of fat free mass, especially muscle mass. In this study, the average daily protein intake in the post-bariatric surgery group was 70 (50-80) g/day or 0.92 (0.41-1.59) g/kg/day and tended to be higher than in the control group, although the difference was not statistically significant. It appears that our patients were able to follow the suggested daily protein intake recommendations. In addition to higher total protein intake, exercise programs after weight loss surgery to develop muscle mass and nutrition counseling at obesity clinics may have played an important role in this result.

To our knowledge, this is the first study to evaluate body composition in post-bariatric surgery patients who had successful weight loss at 12 months (>50% excess weight loss) with comparisons to healthy controls matched for age, sex, and BMI. In previous reports, the controls compared to surgery patients were either healthy normal-weight subjects12 or weight-reduced subjects after completing medical treatment.13 Benedetti et al found that fat mass was not statistically different between post-biliopancreatic diversion (BPD) subjects and controls (healthy volunteers matched for age, sex, and height). However, post-BPD patients retained significantly more fat free mass (FFM) than controls.12 Ciangura et al6 reported no evidence of a decrease in total, trunk, or appendicular LBM in weight-reduced subjects (after RYGB) compared to a nonsurgical control group of similar age and body fat.
The strengths of this study include having a control group matched by age, sex, and BMI and using multiple parameters to evaluate body composition changes. Our matching of BMI was done using the 12-month post-operative BMI of the surgical patients, and we included only those with successful weight loss after bariatric surgery (achieving a BMI of less than 30 kg/m2 and excess weight loss of >50% at 12 months). Some limitations need to be addressed. We did not match for height and weight in this study, as we could only match for BMI. There are some limitations in the use of BIA in obese populations, as it is an indirect method of evaluating body composition. The evaluation of total protein/total caloric intake and physical activity could be affected by recall bias. Lastly, we had a small number of subjects, and a power analysis for sample size calculation was not done. Further prospective studies should be conducted over a period greater than 12 months, or in postoperative patients with BMI >30 kg/m2, to evaluate long-term changes in body composition after bariatric surgery.

Conclusion
Data from this study provide a better understanding of changes in body composition during weight loss following bariatric surgery in patients who at 12 months post-surgery have achieved a BMI <30 kg/m2 and excess weight loss of >50%. Despite the significant reductions in all body composition variables in post-bariatric surgery patients at the 12-month follow-up, both fat free mass and skeletal muscle mass were found to be higher in the surgical patients compared to the control group.

Table 2. Body composition variables of the 12-month post-bariatric surgery patients and non-operative controls.
Table 3. Body composition variables of bariatric surgery patients at baseline, 6-, and 12-month follow-up after bariatric surgery.
Emerging Endoscopic and Photodynamic Techniques for Bladder Cancer Detection and Surveillance

This review provides an overview of emerging techniques, namely, photodynamic diagnosis (PDD), narrow band imaging (NBI), Raman spectroscopy, optical coherence tomography, virtual cystoscopy, and endoscopic microscopy, for use in the diagnosis and surveillance of bladder cancer. The technology, clinical evidence, and future applications of these approaches are discussed, with particular emphasis on PDD and NBI. These approaches show promise to optimise cystoscopy and transurethral resection of bladder tumours.

INTRODUCTION
Bladder cancer is the ninth most common cancer in the world, affecting more than 356,600 people each year. It has the highest incidence in Egypt, followed by Europe and North America [1]. The majority of diagnosed patients (75-85 percent) present with nonmuscle invasive bladder cancer (NMIBC), which is characterised by a probability of recurrence at 1 and 5 years of 15-61 and 31-78 percent, respectively. Moreover, progression of disease is seen in <1-17% and <1-45% of this group at 1 and 5 years, respectively [2]. Flexible cystoscopy and voided urine cytology are currently the initial investigations of choice for patients with symptoms suggestive of bladder cancer. The mainstay of treatment for NMIBC is complete transurethral resection (TUR), ultimately preventing disease recurrence and progression. It is postulated that bladder cancer recurrence occurs via four mechanisms: incomplete resection, tumour cell reimplantation, growth of microscopic tumours, and new tumour formation [3]. First tumour recurrence appears different to subsequent recurrences; incomplete resection and tumour cell reimplantation may dominate at this timepoint and are therefore influenced by clinicians before and immediately after resection [4]. Only later does genuine new tumour formation appear to increase in importance, where chemopreventive agents may have a role in reducing recurrence. Nonmuscle invasive urothelial transitional cell cancer (UC) of the bladder is one of the most expensive cancers to manage on a per-patient basis, because of its high prevalence, high recurrence rate, and the need for long-term cystoscopic surveillance. The total cost of treatment and 5-year follow-up of patients with NMIBC diagnosed during 2001-2002 in the United Kingdom was over £35 million [5]. The direct cost of managing bladder cancer in the USA is estimated to range from $96,000 to $187,000 per patient from diagnosis to death [6]. Furthermore, health technology assessments have demonstrated that the addition of photodynamic diagnosis (PDD) and urinary biomarkers adds to the cost, which can have implications for service provision [7,8]. Developments in optical diagnostics might reduce the limitations of current methods of detection and surveillance in several ways. These emerging techniques aim to improve cystoscopy through better visualisation of bladder tumours or to predict the histopathologic diagnosis in real time. We aim to review these relatively new technologies for their ability to improve the diagnostic yield.

METHODS
A PubMed literature search was performed for this nonsystematic review, and papers on photodynamic diagnosis (PDD), narrow-band imaging (NBI), Raman spectroscopy (RS), optical coherence tomography (OCT), virtual cystoscopy (VC), and endoscopic microscopy (EM) regarding bladder cancer were reviewed.

PHOTODYNAMIC DIAGNOSIS (PDD)
PDD is a technique that has been proposed to enhance tumour detection and resection.
The principle of PDD is based on the interaction between a photosensitising agent with a high uptake by tumour cells and light of an appropriate wavelength, which is absorbed by the agent and re-emitted at a different wavelength [9]. Several commercially available agents have been used to induce exogenous fluorescence, but presently hypericin, 5-aminolaevulinic acid (5-ALA), and its ester hexaminolevulinate (HAL) are applied most often. Optimal dose and instillation time have not yet been determined. One retrospective trial compared the performance of PDD with HAL and with 5-ALA [10]. No significant differences were found between 5-ALA and HAL in this study. A randomised comparison of the two substances has never been performed. In order to predict short- and long-term risks of recurrence and progression, the EORTC-GU group has developed a scoring system and risk table based on number of tumours, tumour size, prior recurrence rate, T-category, presence of concurrent CIS, and tumour grade, stratifying patients into low, intermediate, and high risk groups [2]. For detecting higher-risk tumours, the median sensitivity of PDD (89%; 95 percent CI, 6-100%) was higher than that of WLC (56%; 95 percent CI, 0-100%), whereas for lower-risk tumours it was broadly similar (92%; 95 percent CI, 20-95% versus 95%; 95 percent CI, 8-100%). The higher sensitivity of PDD was also reflected in the detection of CIS alone. Four randomised clinical trials (RCTs) involving 709 participants using 5-aminolaevulinic acid (5-ALA) as the photosensitising agent reported clinical effectiveness. Using PDD at transurethral resection of bladder tumour (TURBT) resulted in fewer residual tumours at check cystoscopy (relative risk [RR], 0.37; 95 percent CI, 0.20-0.69) and longer recurrence-free survival (RR, 1.37; 95 percent CI, 1.18-1.59) compared with WLC. However, the advantages of PDD at TURBT in reducing recurrence and progression in the longer term were less clear [8]. A recent study examined the frequency of HAL-fluorescence-detected residual tumours immediately after standard WL TURBT and the efficacy of immediate removal of residual tumour tissue or overlooked tumours on tumour recurrence. Patients were first inspected and resected with white light before undergoing inspection and further resection with blue light HAL fluorescence cystoscopy. It was therefore possible not only to identify previously overlooked lesions, but also to identify areas of residual tumour where the resection under white light had not been complete. Fluorescence-guided cystoscopy after complete WL TURBT identified residual tumour tissue in 44 of 90 patients (49%). The recurrence rate in patients followed for 12 months was 47.3% after WL TURBT and 30.5% after HAL TURBT (P = 0.05) [12]. This study also demonstrates the potential of using PDD TURBT as a teaching tool to offer clear visualisation of tumours and their margins, thereby facilitating improved TURBT. The European expert panel recommended HAL-guided PDD for all patients with an initial suspicion of bladder cancer [13]. However, a recent cost analysis demonstrated that PDD has a high cost-to-benefit ratio if applied to all patients for the detection and surveillance of bladder cancer [8]. Nonetheless, PDD may be cost effective for surveillance in selected groups with high-risk disease. This remains to be determined. Furthermore, PDD may have a role in patients with positive urine cytology but negative WLC.
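The relative-risk figures quoted above follow from a few lines of arithmetic. The sketch below is a generic log-normal-approximation calculation, not the meta-analysis code behind [8]; the event counts in the example are purely illustrative and were merely chosen to land near the published RR of 0.37 (0.20-0.69).

```python
import math

def relative_risk(events_a: int, n_a: int, events_b: int, n_b: int):
    """Relative risk of group A vs group B with a 95% CI
    (standard log-normal approximation)."""
    rr = (events_a / n_a) / (events_b / n_b)
    se_log = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, lo, hi

# Hypothetical counts: residual tumour at check cystoscopy after PDD-assisted
# vs white-light TURBT (illustrative only, not the pooled trial data).
print(relative_risk(12, 100, 32, 100))  # -> (0.375, ~0.21, ~0.69)
```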
NARROW-BAND IMAGING (NBI)
NBI is an optical image enhancement technique designed for endoscopy to enhance the contrast between mucosal surfaces and microvascular structures without the use of dyes. This narrow band of light is strongly absorbed by haemoglobin and penetrates only the surface of tissue, increasing the visibility of capillaries and other delicate tissue surface structures [14]. Level I evidence (meta-analysis) of NBI use is lacking in bladder cancer [15] but does exist in the field of gastroenterology for the detection of colonic adenoma [16] and of high-grade dysplasia and metaplasia in Barrett's oesophagus [17]. There are few studies published assessing the value of NBI in bladder cancer. All found a subjective improvement in visualisation of the tumours. Bryan et al. performed flexible WLC and subsequent NBI cystoscopy in 29 patients with recurrent NMIBC. NBI cystoscopy revealed 15 additional tumours in 12 patients [14]. However, the additional tumours were not confirmed with histology, since all tumours were treated with diathermy ablation. A further study involving 23 patients by the same group demonstrated that even "new users" of NBI technology achieve a significantly improved detection rate of bladder cancer using NBI versus WLC alone [18]. Herr et al. performed WLC with subsequent NBI cystoscopy in 427 consecutive patients with a history of NMIBC. Recurrence was found in 103 patients. In 56% of patients with a recurrence (n = 58), additional tumours were detected by NBI, and in 12% of patients (n = 13), the bladder tumours were detected only by NBI. A prospective controlled study of NBI was conducted in 104 consecutive patients with definite or suspected bladder cancer by Tatsugami and colleagues [19]. They reported a sensitivity and specificity for the detection of bladder tumours using NBI, in all patients versus those with CIS, of 92.7% and 70.9% versus 89.7% and 74.5%, respectively. The sensitivity and specificity for the detection of bladder tumours using NBI in patients with positive versus negative urine cytology were 85.4% versus 98.4% and 75.7% versus 66.3%, respectively. In another series, the overall sensitivity for WLC and NBI cystoscopy was 87% and 100% and the overall specificity 85% and 82%, respectively [20]. A limitation of this study is the possible observer bias, since WLC and NBI were performed sequentially by the same urologist. To address observer bias, a recent randomised trial confirmed that a "second look" did not compromise the superiority of NBI over standard white light flexible cystoscopy for detecting primary NMIBC, including CIS lesions [38]. Whether the specificity of NBI will be negatively influenced by previous intravesical instillations, inflammation, or scarring, as with PDD, is yet unknown. Reassuringly, NBI cystoscopy does not appear to have a "learning curve" for its adoption in surveillance of patients with bladder cancer [21].

RAMAN SPECTROSCOPY (RS)
RS enables measurement of the molecular components of tissue in a qualitative and quantitative way. The principle of this optical technique is based on the Raman effect, or inelastic scattering. Raman molecular imaging (RMI) is an optical technology that combines the molecular chemical analysis of Raman spectroscopy with high-definition digital microscopic visualisation [9]. This approach permits visualisation of the physical architecture and molecular environment of cells in the urine. The Raman spectrum of a cell is a complex product of its chemical bonds.
Several groups have determined the diagnostic accuracy of RS by comparing ex vivo Raman measurements of bladder samples with histology. Generally, the time needed to obtain the spectra is between 1 and 5 seconds [22]. Shapiro and colleagues investigated urine samples from 340 patients, including 116 patients without UC, 92 patients with low-grade tumours, and 132 patients with high-grade tumours. The Raman spectra from UC tissue demonstrated a distinct peak at a 1584 cm^-1 wavenumber shift not present in benign tissues. The height of this peak correlated with the tumour's grade. The signal obtained from epithelial cells correctly diagnosed bladder cancer with a sensitivity of 92% (100% for the high-grade tumours), a specificity of 91%, a positive predictive value of 94%, and a negative predictive value of 88%. The signal correctly assigned a tumour's grade in 73.9% of the low-grade tumours and 98.5% of the high-grade tumours [23]. RMI for diagnosis of bladder cancer is limited by the need for specialised equipment and training of laboratory personnel. For in vivo measurements, small, flexible fibre-optic probes compatible with the working channel of a rigid or flexible cystoscope have been developed [22], but the application has been hampered by many technical issues [9]. Therefore, further studies demonstrating the human in vivo applicability of RS on bladder tissue are still awaited.

OPTICAL COHERENCE TOMOGRAPHY (OCT)
OCT produces high-resolution, cross-sectional images of tissue. The principle of this optical technique is analogous to B-mode ultrasonography, except that light is used instead of sound [24]. Karl et al. enrolled 52 patients who underwent transurethral bladder biopsy or TURBT for surveillance or due to initial suspicion of UC of the bladder. In total, 166 lesions were suspicious for malignancy according to standard white light cystoscopy. All suspicious lesions were scanned and interpreted during perioperative cystoscopy using OCT, and then subsequently biopsied by cold cup or TUR for pathological confirmation. There were no false-negative lesions detected by OCT. The sensitivity of OCT for detecting the presence of a malignant lesion was 100%, and the sensitivity for detection of tumour growth beyond the lamina propria was 100% as well. The specificity of OCT for the presence of malignancy was 65%, because a number of lesions were falsely interpreted as positive by OCT [25]. As a minimally invasive technique, OCT demonstrates high sensitivity for the detection of malignant lesions as well as for estimating whether a tumour has invaded beyond the lamina propria [24,26,27]. However, the specificity of OCT within the bladder is low, possibly due to a learning curve and/or the relatively low spatial resolution and visualisation depth of the OCT technology. Further studies and technical development are needed to establish an adequate surrogate for optical biopsy. The advantages of OCT lie in its noninvasive, real-time, high-resolution images, which are comparable with histopathology and provide information about the depth of tumour growth. However, reliable measurement of muscle-invasive tumours may be hampered by insufficient imaging depth. OCT is less suitable for screening the entire bladder; thus, in the absence of visually suspect lesions, it has to be used in combination with other methods (e.g., NBI or PDD) to direct it to the region of interest.
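The accuracy figures reported throughout this review derive from the same 2x2 arithmetic, sketched below. The example counts are back-calculated approximations from the percentages reported by Shapiro and colleagues (rounding means the exact published counts cannot be recovered), so treat them as illustrative rather than the study data.

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity, PPV, and NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),   # detected fraction of true disease
        "specificity": tn / (tn + fp),   # correctly cleared fraction of healthy
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Approximate reconstruction: 224 patients with UC, 116 without,
# at the reported ~92% sensitivity and ~91% specificity.
print(diagnostic_metrics(tp=206, fp=10, fn=18, tn=106))
```

Against these assumed counts, the PPV and NPV come out within a few points of the reported 94% and 88%, the residual gap reflecting rounding in the published percentages.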
VIRTUAL CYSTOSCOPY (VC)
With the progressive development of diagnostic imaging and medical computer software technologies, it is possible to generate virtual reality images to aid the clinician in inspecting the interior of the bladder in real time. VC can be applied to any imaging modality, be it computerised tomography (CTVC), magnetic resonance imaging (MRVC), or ultrasound (USVC). CT virtual cystoscopy is a noninvasive technique that can be used successfully for the detection of bladder tumours >5 mm in selected cases during routine abdominopelvic work. UCs >5 mm represent the majority of newly diagnosed bladder tumours: in the Bladder Cancer Prognosis Programme [29], only 6.7% of 1075 confirmed UCs were 5 mm or less in size (unpublished data). In some series, lesions as small as 2×3 mm have been detected [30], but this technique is still unable to detect flat lesions that may represent CIS [31]. Moreover, the effects of ionising radiation are a significant obstacle to such modalities being used routinely for the diagnosis and surveillance of bladder cancer.

ENDOSCOPIC MICROSCOPE (EM)
EM is a novel, low-cost, high-resolution endoscopic microscope for obtaining fluorescent images of the cellular morphology of the epithelium. Its experimental use as a noninvasive point imaging system offers a method for obtaining real-time histologic information during endoscopy [32]. Whilst it is feasible to obtain high-resolution histopathologic information using the endoscopic microscope device, the difficulty of holding a fine (1.4-2.6 mm in diameter) instrument still enough to capture the image within the bladder makes obtaining interpretable images challenging. Future improvement and integration with wide-field endoscopic techniques will aid in improving the sensitivity of detection of dysplasia and early cancer development in epithelial cancers such as those of the oesophagus or bladder.

CONCLUSION
As described earlier, the 4 mechanisms of bladder UC recurrence are incomplete resection, tumour cell reimplantation, growth of microscopic tumours, and new tumour formation [3]. Incomplete resection and tumour cell reimplantation at the site of the primary tumour are proposed to be the most important causes of early UC recurrence, as well as the growth of very small volume or microscopic tumours that were present but overlooked at the time of the primary resection [3]. Improving optical technologies is one way to overcome these WLC shortfalls. In this regard, PDD and NBI are the most mature technologies and are currently in widespread clinical use. An overview of these two most studied approaches is outlined in Table 1. The other modalities described above may also facilitate the detection of very small volume or microscopic lesions, but their future is more likely to lie with "real-time" tumour staging. An instrument that combines some or all of these technologies may be the ultimate goal for the urologist, but this is some way off yet. Another strategy to improve our optical diagnosis of bladder UCs is to improve cystoscopy skills. Cystoscopy may not be taught as well as it should be. This may be due to poor training and the assumption that junior urologists do not need formal training in performing cystoscopy; they are often not assessed in this skill. Moreover, the large volume of follow-up cystoscopy is usually delegated to junior and inexperienced staff. Also, failure to use the best endoscopy equipment and not employing the 70-degree telescope are possible reasons for poor-quality TURBTs, even amongst experienced urologists.
The absence of detrusor muscle in the first apparently complete white light TURBT by less-experienced surgeons appears to be independently associated with an increased risk of recurrence [33]. TURBT should be performed in a timely fashion and under standard conditions. These include the use of proper anaesthesia with a continuous-flow video resectoscope. The entire bladder must be visualised and all abnormal areas must be resected, with separate biopsies from each tumour's base, specifically to avoid understaging muscle-invasive disease. Restaging TURBT is recommended for all high-grade tumours, particularly if muscle is not present in the specimen [34]. The main evidence for this comes from Herr, who showed the large number of tumours missed and the high recurrence rate [35]. This is thought to be due to inadequate training, technique, or both. Users of PDD/NBI have often reported that "these techniques make one a better cystoscopist" by focusing the operator on taking longer and looking more thoroughly. From personal experience, when we first started using NBI flexible cystoscopy in 2005, we were surprised at just how many UCs we were missing with WLC, and we quickly learnt to recognise some of the very subtle mucosal changes, visible with WLC, that accompanied the lesions we had initially seen only with NBI. Other authors describe how they now routinely resect or diathermy a margin around the base of the primary UC as a result of what they have seen when performing TURs with PDD [36]. Data from this review indicate, at evidence levels 1 and 2 (meta-analysis of randomised controlled trials and good-quality prospective cohort studies), that PDD detects significantly more tumour-positive patients than white-light cystoscopy alone. This may come at the price of reduced specificity. The detection benefit is higher in selected patients with carcinoma in situ (evidence level 2). PDD reduces residual tumour rates and increases tumour-free survival significantly (evidence level 1). Measures to make it more cost effective include using PDD in a more selective setting, such as patients with high-risk disease (EORTC-GU group tables) at first diagnosis or patients with positive urine cytology but negative WLC. Although a clear benefit for PDD has been found for the detection of CIS, the value of this technique with respect to CIS recurrences and progression remains unclear. The benefit of PDD with respect to progression must also be demonstrated in papillary disease, especially over the course of long-term surveillance. Presently no data exist on which to base a firm recommendation for the use of PDD for surveillance. Future studies will have to show whether the intervals to follow-up cystoscopy can be altered when PDD is used and whether PDD changes adjuvant treatment, for instance, due to the safe exclusion of CIS. Furthermore, the necessity of re-TUR after PDD must be evaluated. With both NBI and PDD there remain the issues of whether the procedures are carried out more diligently when the new modality is used [36], or whether the improvements are due to a "second look" cystoscopy [37]. However, a recent study has concluded that "a second look does not compromise the superiority of NBI over standard WLC flexible cystoscopy for detecting primary NMIBC including CIS [38]." The other techniques discussed, namely RS, OCT, VC, and EM, show promising preliminary results, but more research is needed before any of them can be instituted in the diagnosis and surveillance of bladder cancer.
There is clearly a need for larger, multicentre RCTs of these emerging approaches, especially NBI and PDD, for the diagnosis and surveillance of bladder cancer. A CROES trial is currently recruiting in the NBI TURBT setting, and other trials using NBI and PDD for bladder cancer surveillance are in planning.
Effects of the magnetic moment interaction between nucleons on observables in the 3N continuum

The influence of the magnetic moment interaction of nucleons on nucleon-deuteron elastic scattering and breakup cross sections and on elastic scattering polarization observables has been studied. Among the numerous elastic scattering observables, only the vector analyzing powers were found to show a significant effect, and of opposite sign for the proton-deuteron and neutron-deuteron systems. This finding results in an even larger discrepancy than the one previously established between neutron-deuteron data and theoretical calculations. For the breakup reaction the largest effect was found for the final-state-interaction cross sections. The consequences of this observation for previous determinations of the ^1S_0 scattering lengths from breakup data are discussed.

I. INTRODUCTION
The study of three-nucleon (3N) bound states and reactions in the 3N continuum has significantly improved our knowledge of the nuclear Hamiltonian [1,2]. The underbinding of the triton and ^3He nuclei by modern nucleon-nucleon (NN) interactions was the first evidence for the necessity of including three-nucleon forces (3NF) [2] in addition to the pairwise NN interactions. Furthermore, results of Green's function Monte Carlo calculations [3] showed that the energy levels of light nuclei can be explained only when the pairwise NN interactions are supplemented by appropriate 3NF's. Additional evidence for 3NF effects came from the study of the cross-section minimum [4] in elastic nucleon-deuteron (Nd) scattering and the deuteron vector analyzing powers [5-7]. Despite the spectacular successes obtained in interpreting 3N data based on the concept of a 3N Hamiltonian with free NN interactions supplemented by 3NF's, some dramatic discrepancies remain between theory and data that require further investigation. These discrepancies can be divided into two categories according to energy. One was discovered at incident nucleon lab energies in the 3N system above 100 MeV and is exemplified by the nucleon vector [6,8,9] and deuteron tensor analyzing powers [5,10] in Nd elastic scattering. Since the 3NF effects become more important with increasing energy [10,11], these discrepancies will play an important role in establishing the proper spin-isospin structure of the 3NF. The second category was found at lab energies below 40 MeV. In the following we will focus on these low-energy discrepancies. The most famous is the vector analyzing power in Nd elastic scattering. The theoretical predictions based on modern NN interactions, and including 3NF models, considerably underestimate the maximum of the nucleon analyzing power A_y(θ) in p-d and n-d scattering, as well as the maximum of the deuteron vector analyzing power iT_11(θ) in d-p scattering [10,12,13]. At low energies (up to ≈ 30 MeV) these two observables are very sensitive to changes in the ^3P_j NN and/or in the ^4P_J Nd phase shifts [14,15]. Even very small changes of these phase shifts result in significant variations of A_y(θ) and iT_11(θ). Furthermore, low-energy neutron-deuteron (nd) breakup cross sections show clear discrepancies between theory and data [1,16-23] for some special kinematical arrangements of the outgoing three nucleons. The most spectacular ones are the symmetrical space-star (SST) and the quasi-free scattering (QFS) configurations.
In the SST configuration the three nucleons emerge in the c.m. system in a plane perpendicular to the incoming beam, with momenta of equal magnitudes and directed such that the angle between adjacent particle momentum vectors is 120°. The low-energy nd SST cross sections are clearly underestimated by theoretical predictions [16-21]. The calculated cross sections are insensitive to the NN potential used in the calculations [1,11]. They also do not change when any one of the present-day 3NF models is included [1,11]. QFS refers to the situation where one of the nucleons is at rest in the laboratory system. In nd breakup, an np or nn pair can undergo quasi-free scattering. Both cases have been measured [22,23]. The picture resembles that for the SST: the theoretical QFS cross sections are practically independent of the NN potential used in the calculations, and they do not change when 3NF's are included [1,11]. The calculated QFS cross section nicely follows the data when np is the quasi-free interacting pair [23]. However, when the nn pair is quasi-free scattered instead of np, the theory clearly underestimates the experimental cross sections [22,23], similarly to the nd SST case. A problem of a different kind arises in the nd breakup final-state-interaction (FSI) configuration, where two of the outgoing nucleons have equal momenta. The cross section for this geometry is characterized by a pronounced peak when the relative energy of the final-state interacting pair reaches zero (the exact FSI condition). Due to the large sensitivity of this enhancement to the ^1S_0 NN scattering lengths, the FSI geometry has recently been studied in nd breakup with the aim of determining the nn ^1S_0 scattering length [24,25]. The np FSI cross-section measurements performed simultaneously in [24] and in two consecutive experiments in [25,26] seem to indicate that this configuration is indeed a reliable tool for determining the NN ^1S_0 scattering length: the values obtained for a_np agreed with the result known from free np scattering [27]. However, the values obtained for a_nn in [24,25] are in striking disagreement with each other, with the result of [24] in excellent agreement with the accepted value for a_nn. All previously published 3N continuum Faddeev calculations for the nd system were restricted to purely strong nuclear forces, while for the pd system the Coulomb interaction between the two protons had been included in addition to the nuclear force [13]. However, the rigorous inclusion of the Coulomb interaction is currently limited to elastic scattering. More subtle electromagnetic contributions, such as the magnetic moment interaction (MMI) between the nucleons, have been neglected in exact Faddeev calculations for the 3N continuum. The approximate calculation by Stoks [28] at E_N = 3 MeV, based on a quasi-two-body approach, represents only the leading term of a genuine 3N calculation. Since this calculation predicted only a tiny effect on A_y(θ) in the region of the maximum, it was generally concluded that the exact treatment of the MMI was not worth the effort. However, the speculative interpretation [32] of new data for n-d scattering at very low energies and the associated comparison to p-d scattering suggested that the MMI is indeed an important ingredient in the 3N continuum. It is the aim of the present paper to go beyond the approximate calculation referred to above and to study extensively the effects of the MMI on Nd elastic scattering and breakup observables using the Faddeev approach.
Even though it is very unlikely that such subtle effects will have any significant influence on the QFS and SST cross sections, we found it worthwhile to calculate the magnitude of the MMI effects for these configurations as well. In section II we present the basic theoretical ingredients of our 3N calculations together with a short description of the nucleon magnetic moment interactions. The results for elastic scattering and breakup observables are presented and discussed in sections III and IV, respectively. In the breakup section we also focus on the consequences of the MMI effects for the extraction of the ^1S_0 scattering lengths from nd breakup data. Specifically, we present corrections induced by the MMI on the values for a_np and a_nn deduced from very recent measurements. We summarize and conclude in section V.

II. THEORETICAL FORMALISM
The transition amplitudes for Nd elastic scattering, <Φ'|U|Φ>, and breakup, <Φ_0|U_0|Φ>, can be expressed in terms of the vector T|Φ>, which fulfills the 3N Faddeev equation [1],

T|Φ> = t P |Φ> + t P G_0 T|Φ>,   (1)

as <Φ'|U|Φ> = <Φ'|P G_0^{-1} + P T|Φ> and <Φ_0|U_0|Φ> = <Φ_0|(1 + P) T|Φ>, where P is the sum of a cyclical and an anticyclical permutation of the three nucleons and G_0 is the free 3N propagator. The incoming state |Φ> = |q_0, φ_d> is composed of the deuteron wave function φ_d and the momentum eigenstate |q_0> of the relative nucleon-deuteron motion. For elastic scattering the outgoing relative momentum changes its direction, leading to a state |Φ'> = |q', φ_d> with |q'| = |q_0|. This formulation for nucleon-deuteron scattering assumes pairwise interactions between nucleons via a short-range force V, which generates the transition matrix t through the Lippmann-Schwinger equation. Therefore, it excludes the treatment of the long-range Coulomb force, but allows for the inclusion of any electromagnetic contribution of short-range character, such as, e.g., the magnetic moment interactions (MMI) between nucleons. In the case of our "pd calculations," the value of the magnetic moment of the two neutrons in our nd breakup calculations is replaced by the value of the proton magnetic moment; i.e., the long-range Coulomb force is not included, nor is any interference between the Coulomb force and the MMI. In our approach we solve Eq. (1) in momentum space and in a partial wave basis, using the magnitudes p and q of the standard Jacobi momenta to describe the relative motion of the three nucleons, supplemented by angular momentum, spin, and isospin quantum numbers. Due to the short-range assumption, the result is a finite set of coupled integral equations in the two continuous variables p and q. The equations are solved for the amplitudes <pqα|T|Φ> for each total angular momentum J and parity of the 3N system by generating the Neumann series of Eq. (1) and summing it up by the Padé method. Here |α> ≡ |(ls)j (λ 1/2)I (jI)J (t 1/2)T> is a set of angular momentum, spin, and isospin quantum numbers which describes the coupling of the two-nucleon subsystem and the third nucleon to the total angular momentum J and total isospin T of the 3N system. For details of the theoretical formalism and numerical performance we refer to [1]. To study the effects of the MMI's of the nucleons, we included, in addition to the strong AV18 [29] or CD Bonn [30] potentials, the interactions of the magnetic moments in the pp, np, and nn subsystems. The form and parametrization of the MMI's are given in Eqs. (8), (15), and (16) of ref. [29] (in the case of the np system, the term L·A in Eq. (15) of ref. [29] was neglected). The difference between the pp (nn) and np interactions in isospin t=1 states induces transitions between 3N states with total isospin T=1/2 and T=3/2.
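The Neumann-series-plus-Padé strategy just described can be illustrated on a toy scalar equation of the same shape as Eq. (1). The sketch below is only an illustration of the resummation idea, not the authors' solver: it Padé-sums a series whose partial sums diverge term by term but whose resummed value is finite and known.

```python
import numpy as np
from scipy.interpolate import pade

# Toy analogue of Eq. (1): take f(x) = 1 / ((1 - 2x)(1 + x)), whose Taylor
# coefficients are c_k = (2/3)*2**k + (1/3)*(-1)**k.  The series diverges
# at x = 1, while the function value there is f(1) = -0.5.
c = [(2/3) * 2**k + (1/3) * (-1)**k for k in range(8)]

print(np.cumsum(c)[-3:])    # partial sums blow up: the plain series diverges

p, q = pade(c, 2)           # Pade approximant with a degree-2 denominator
print(p(1.0) / q(1.0))      # -> -0.5, the exact resummed value
```

The Padé approximant recovers the exact answer here because the underlying function is rational; for the 3N Faddeev kernel the same resummation turns the (generally non-convergent) Neumann series into a usable solution.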
The strength of these transitions is determined through the known charge-independence breaking of the NN interactions [31]. This charge-independence breaking can be treated approximately by a simple "2/3-1/3 rule," for which the effective t=1 transition matrix is given by t = (2/3) t_pp(nn) + (1/3) t_np, and T=3/2 3N states are neglected. This procedure is sufficient for most of the 3N scattering observables [31]. Since it is not evident that such an approximate approach is sufficient when the MMI's are included, we also performed calculations in which, for each partial wave state |α> with isospin t=1, both values of the total 3N isospin, T=1/2 and T=3/2, were taken into account. In all calculations we considered all basis states |α> with two-nucleon subsystem angular momenta up to j ≤ j_max = 3. In order to obtain full convergence of the numerical results for elastic scattering and breakup observables, Eq. (1) was solved for total 3N angular momenta up to J = 25/2. Under this condition the maximal number of coupled integral equations, which is equal to the number of possible |α>'s, amounts to 62 for the approximate approach and increases to 89 when both values of the total isospin are taken into account.

III. ELASTIC SCATTERING RESULTS
The nucleon scattered off the deuteron can be either a neutron or a proton. Since they have magnetic moments of different sign and magnitude, we performed separate calculations for the pd and nd systems. In both cases the pp (nn) and np nuclear interactions of the AV18 and CD Bonn potentials were used, supplemented by the appropriate MMI's: pp and np for the pd system, and nn and np for the nd system. Comparisons of the theoretical predictions for elastic scattering observables were made between calculations obtained with and without the MMI's included. Among the unpolarized cross section, analyzing powers, spin correlation coefficients, and polarization transfer coefficients, only the vector analyzing powers, A_y(θ) and iT_11(θ), show a significant influence of the MMI's (see Figs. 1, 2, and 3). As expected from the different signs of the proton and neutron magnetic moments, the effects of the MMI's have opposite signs for the pd and nd systems. For the pd system the MMI's raise the maximum value of A_y(θ) and iT_11(θ) by ≈ 4% at E_n^lab = 3 MeV, thus bringing it closer to the experimental pd data, while for the nd system they reduce the maximum of A_y(θ) and iT_11(θ) by ≈ 3%, thereby enlarging the discrepancy between theory and data. The magnitude of the MMI effects is energy dependent and decreases with increasing nucleon energy (see Fig. 3). The contribution of the MMI's is thus most significant at low energies. The relative magnitude of the effect is comparable for A_y(θ) and iT_11(θ) and is roughly independent of the strong NN interaction used in the calculations. We found that the evaluation of the effects induced by the MMI's does not require partial wave components with total isospin T=3/2. Restricting the calculations to the approximate "2/3-1/3 rule" leads to a fairly good estimate of the MMI effects. Our results once again exemplify the spectacular sensitivity of the low-energy vector analyzing powers to the ^3P-wave interactions. In spite of the relative smallness of the MMI contributions to the potential energy of the three nucleons, their effect is amplified by this ^3P-wave sensitivity, and they must be taken into account in any final solution to the A_y(θ) puzzle.
Recent experimental data for the low-energy nd A_y(θ) (E_n = 1.2 and 1.9 MeV) revealed a sizable difference with respect to pd data. The difference increases with decreasing center-of-mass energy [32]. This energy dependence was used in [32] to speculate that the difference between the nd and pd A_y(θ) data at low energies is due to the MMI's of the three nucleons in the 3N continuum. The present work clearly supports this conjecture.

IV. BREAKUP RESULTS
The final deuteron breakup state requires 5 independent kinematical parameters to define it unambiguously. They can be taken as, e.g., the laboratory energies E_1 and E_2 of two outgoing nucleons together with the angles defining the directions of their momenta. In order to locate regions of this 5-dimensional breakup phase space with large changes of the cross section due to the MMI's of the nucleons, we applied the projection procedure described in detail in ref. [11]. The magnitude of the MMI effects on the exclusive breakup cross section was quantified by ∆, the relative difference (in percent) between the cross sections calculated with and without the MMI's included. We searched the entire breakup phase space for the distribution of ∆ values. For this purpose the phase space was projected onto three sub-planes: θ_1-θ_2, φ_12-θ_2, and E_1-E_2. Here, θ_1 and θ_2 are the polar angles which, together with the azimuthal angles φ_1 and φ_2 (φ_12 = φ_1 - φ_2), define the directions of the nucleon momenta. The resulting projections of ∆ are shown in Fig. 4 at the three incoming proton energies E_p^lab = 5, 13, and 65 MeV for the ^2H(p,pp)n breakup reaction. The largest changes found for the breakup cross sections reach ≈ 10%, and even at 65 MeV there are configurations with non-negligible MMI effects. The largest effects are located mostly in regions of the phase space that are characterized by FSI geometries. In most other parts of the phase space the effects of the MMI's are rather small. In particular, the SST and QFS cross sections are only slightly influenced. According to our calculations, for the QFS configuration the MMI effect depends on the laboratory angles of the quasi-free interacting nucleons and is different for the pp, nn, and np pairs. However, the effects are quite small: at E_p^lab = 13 MeV they are never larger than 2%. For the SST configuration the effects are smaller than 1%. Changing the c.m. angle between the space-star plane and the beam axis to values different from 90° results in only small changes of the cross section, of less than 2%, when the MMI's are included. As stated above, the largest changes of the cross sections, of up to ≈ 10%, occur for FSI configurations. Their relative magnitude depends on the incoming beam energy and on the production angle of the final-state interacting pair (the lab angle between the momentum of the FSI pair and the beam axis). The sign of the effect depends on the type of the final-state interacting pair (see Figs. 5-7). For the np FSI peak the cross section is increased, and for the pp (nn) FSI peak it is decreased, according to the increased or decreased attraction caused by the MMI's in the corresponding NN subsystem. The relative changes of the FSI cross sections are comparable for the np and pp FSI peaks, and are a factor of ≈ 2 smaller for the nn FSI peak (see Fig. 5). This factor approximately corresponds to the ratio of the squares of the proton and neutron magnetic moments, µ_p²/µ_n² = 2.13. This behavior of the FSI cross sections is of interest in view of recently reported values for the ^1S_0 nn and np scattering lengths extracted from nd breakup measurements [24,25].
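The factor of ≈ 2 quoted above is easy to verify from the standard CODATA nucleon magnetic moments (in units of the nuclear magneton); the two-line check below is only a numerical illustration of that ratio.

```python
mu_p = 2.79284734   # proton magnetic moment  (nuclear magnetons, CODATA)
mu_n = -1.91304272  # neutron magnetic moment (nuclear magnetons, CODATA)
print((mu_p / mu_n) ** 2)  # -> 2.131..., the quoted mu_p^2 / mu_n^2 = 2.13
```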
As stated earlier, for the np system both measurements resulted in comparable values for a_np, which agree with the value obtained from free np scattering. However, the reported results for the a_nn scattering length are strikingly different. In Fig. 6 we show the nn and np cross sections for the FSI configurations of ref. [24], and in Fig. 7 for the corresponding configurations of ref. [25], together with the effects induced by the MMI's on the FSI peaks. It is interesting to note that the theoretical cross sections for the np FSI peak obtained with the MMI's included do not change significantly whether the np pair is accompanied by a neutron or by a proton (i.e., for the nn-np or pp-np sets of MMI's). This effect depends slightly on the production angle of the FSI pair (see Fig. 5). This observation, together with our findings on the dependence of the nn FSI peak on the magnitude of the MMI's, indicates a relatively simple mechanism by which the MMI's affect FSI geometries, resulting in a net effect that is dominated by the magnetic moments of the FSI pair. In view of the non-negligible effects of the MMI's on FSI cross sections found in the present work, one would conclude that the values reported in refs. [24], [25], and [26] for the scattering lengths must be corrected for such effects. Based on the sensitivity of the theoretical point-geometry FSI cross sections to specific values of a_np and a_nn for the geometries of the experiments described in [24], [25], and [26], the MMI-associated corrections for the a_np and a_nn scattering lengths are shown in column 3 of Tables I and II, respectively. While the correction to the nn scattering length is rather small, the correction to the np scattering length moves the a_np values obtained from the nd breakup reaction away from the free np scattering result by ≈ 1 fm. The corrections given in Tables I and II are for point geometry; i.e., they do not include the finite geometry of the experimental setup and the associated energy smearing. Such sizeable corrections, especially for a_np, if true, would cast doubt on the accuracy of previous results obtained from the nd breakup reaction. Evidence that this is not the case is shown in Table III, which contains corrections of comparable magnitude but opposite sign (compare column 3 in Tables I and II and column 5 in Table III). Therefore, the effective corrections, given by their sum, are nearly negligible. The final corrected values for a_np and a_nn are given in column 4 of Tables I and II, respectively. Clearly, the MMI's do not explain the different values for a_nn obtained in the measurements of [24] and [25]. As a side remark, one should mention a shortfall of the np ^1S_0 scattering length used in the CD Bonn potential: it was fitted to the experimental value of a_np obtained from free n-p scattering without taking the MMI into account.

V. SUMMARY AND CONCLUSIONS
We performed an extensive study of the effects induced by the magnetic moment interactions of nucleons in the 3N continuum. For elastic Nd scattering we found that only the vector analyzing powers show significant changes when the MMI's are included. For the nd system the MMI's increase the discrepancy between calculations and data for A_y(θ) in the region of the A_y(θ) maximum, while for the pd system they reduce the discrepancy. The effects for iT_11(θ) are of comparable relative magnitude.

Figure 7: FSI cross sections for the configurations of ref. [25]. Descriptions of curves are the same as in Fig. 6. Note the overlapping results for the pp-np and nn-np MMI's in the np FSI peak.
Representations of Sensory Signals and Abstract Categories in Brain Networks

Many recent advances in artificial intelligence (AI) are rooted in visual neuroscience. However, ideas from more complicated paradigms like decision-making are less used. Although automated decision-making systems are ubiquitous (driverless cars, pilot support systems, medical diagnosis algorithms, etc.), achieving human-level performance in decision-making tasks is still a challenge. At the same time, these tasks that are hard for AI are easy for humans. Thus, understanding human brain dynamics during these tasks and modeling them using deep neural networks could improve AI performance. Here we modelled some of the complex neural interactions during a sensorimotor decision-making task. We investigated how brain dynamics flexibly represented and distinguished between sensory processing and categorization in two sensory domains: motion direction and color. We found that neural representations changed depending on context. We also trained deep recurrent neural networks to perform the same tasks as the animals. By comparing brain dynamics with network predictions, we found that computations in different brain areas also changed flexibly depending on context. Color computations appeared to rely more on sensory processing, while motion computations relied more on abstract categories. Our results shed light on the biological basis of categorization and on differences in selectivity and computations between brain areas.

A Flexible Decision Making Task
We reanalyzed data previously published in (Siegel et al., 2015) using Representational Similarity Analysis (RSA; Kriegeskorte et al., 2008) and deep recurrent neural networks (RNNs). Monkeys performed the task shown in Fig. 1. They categorized the motion direction and color of centrally presented, colored random dot stimuli (Fig. 1A). Before stimulus onset, a central cue indicated which feature to categorize. Monkeys indicated their choice with a leftward or rightward saccade and held central fixation throughout each trial until their response. Monkeys were free to respond any time up to 3 s past stimulus onset. We analyzed data from the epoch after stimulus onset until the average response latency (1 s to 1.27 s; t=0 corresponds to cue onset). Stimuli systematically covered the motion direction and color space between opposite motion directions (up and down) and opposite colors (red and green; Fig. 1B). There were 7 possible stimulus motion directions and 7 possible colors. In total, there were 42 stimulus conditions. Depending on the task cued at the beginning of each trial, the animals categorized either the motion direction (up vs. down) or the color (red vs. green) of the stimulus. We recorded LFP data from 6 cortical areas, shown in Fig. 1C.

Figure 1: The task and recordings. (A) Monkeys indicated their choice with a leftward or rightward saccade and held central fixation throughout each trial until their response. Monkeys were required to respond within 3 seconds after stimulus onset. For each trial, we analyzed the data from stimulus onset to the average response latency (1 s to 1.27 s). (B) Stimuli systematically covered the motion direction and color space between opposite motion directions (up and down) and opposite colors (red and green). All stimuli were 100% coherent, iso-speed, iso-luminant, and iso-saturated. (C) Schematic display of the recorded brain regions. See also (Siegel et al., 2015) for more details.
Differences in Representations Between Brain Areas
To understand neural representations in different brain areas during this flexible decision task, we used two approaches. We computed (1) the similarity of the neural representation in a brain area with the geometry of the sensory or category domain represented (which we call domain selectivity; motion vs color vs motion categories vs color categories), and (2) the similarity of the neural computation performed by a brain area with predictions from two deep RNNs: one trained to distinguish categories (like the behavioural task) and the other to process visual information (this we call computation selectivity). The assumption here was that both kinds of computations should take place in different brain areas in order to perform the behavioural task; i.e., categorization also required sensory processing. The two approaches we used are distinct. Being selective to a sensory domain (domain selectivity) is not the same as performing computations like sensory processing and abstract categorization (computation selectivity). Domain selectivity refers to representation content only, while computation selectivity characterizes how these representations are manipulated and compared to each other to find their similarities and differences. Also, sensory processing requires integrating sensory inputs, while abstract categorization requires combining these integrated inputs with prior knowledge about learned categories. All these computations take time. Thus, understanding which computations each area performs requires analyzing temporal information in brain dynamics. Although distinct, domain and computation selectivity should give similar results. We found this below.

Domain Selectivity
We first considered the selectivity of each brain area to motion direction and color (Fig. 1B). To understand what kind of representations (motion direction vs color, sensory processing vs categorization) were encoded in each brain area, we computed the dissimilarity between brain Representational Dissimilarity Matrices (RDMs) on the one hand and sensory/category dissimilarity matrices (SDMs/CDMs) on the other. Brain RDMs were obtained using LFP recordings from each brain area. We followed (Kriegeskorte et al., 2008) and used the dissimilarity between dissimilarity matrices (called deviation) as the metric to compare brain RDMs and SDMs/CDMs. These deviations are shown in the panels of Figure 2. There are 6 panels (one for each of the 6 brain areas). Each panel has 4 bars (the deviation of each brain RDM from the color SDM, the motion SDM, the color CDM, and the motion CDM). Interestingly, for most brain areas domain selectivity depended not only on the stimulus but also on the domain of categorization. It switched between the two domains depending on the task (motion direction or color categorization). This is a surprising result, not previously shown to the best of our knowledge. Also, related work in the literature usually focuses on sensory perception only and does not normally involve flexible switching between sensory domains, contrary to the paradigm considered here.
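The deviation metric just described can be sketched in a few lines. The version below computes RDMs with correlation distance and compares them through a rank correlation of their upper-triangular entries; the exact distance and significance test follow (Kriegeskorte et al., 2008), so the array shapes and helper names here are illustrative assumptions rather than our analysis code.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns: np.ndarray) -> np.ndarray:
    """Condensed RDM from (n_conditions, n_features) data, using the
    correlation distance between condition patterns."""
    return pdist(patterns, metric="correlation")

def deviation(rdm_a: np.ndarray, rdm_b: np.ndarray) -> float:
    """Dissimilarity between two RDMs: 1 minus the rank correlation of
    their (condensed) entries; smaller means more similar geometry."""
    rho, _ = spearmanr(rdm_a, rdm_b)
    return 1.0 - rho

# Example: compare a brain RDM against a model (sensory-domain) RDM.
rng = np.random.default_rng(1)
brain = rng.normal(size=(42, 64))   # 42 stimulus conditions x 64 LFP features
model = rng.normal(size=(42, 8))    # model feature space (e.g., motion domain)
print(deviation(rdm(brain), rdm(model)))
```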
Figure 2: Deviations between RDMs and SDMs/CDMs. (A) Motion categorization task. Each panel depicts deviations between the RDM of a brain area and the SDM ("color", "motion"; 1st and 2nd bars from left) or CDM ("color category", "motion category"; 3rd and 4th bars), respectively. Error bars denote standard errors. All deviations were significant at the p < 0.0001 level, with the exception of those marked "n.s." (not significant; fixed-effects category-index randomization test, see Methods and (Kriegeskorte et al., 2008)). (B) Same results for the color categorization task. Note that deviation is based on correlation distance; thus smaller bars indicate greater similarity between RDMs and SDMs/CDMs. Asterisks above each bar denote the significance level of the corresponding partial correlations. See (Pinotsis et al., 2019) for more details.

V4 showed a preference towards the color domain in both tasks and towards motion categories in the motion task. MT was more selective for the motion category domain in both tasks. FEF exhibited selectivity for motion in the motion task and color in the color task. PFC selectivity was for the motion domain in the motion task and the color domain in the color task. Finally, IT seemed to prefer color in both tasks and color categories in the color task. See (Pinotsis et al., 2019) for more details. We then confirmed the above results using deep neural networks, turning to computation selectivity. This was defined based on the similarity of brain activity with predictions from RNNs performing either sensory processing or categorization.

Computation Selectivity
To understand this, we built deep RNNs. Although they comprised six LSTM layers (the same number of layers as the cortical network from which we recorded LFP responses), we used them only for simulating brain computation, not as precise descriptions of anatomy. We considered two variants of the same RNN: one trained to perform sensory processing and the other abstract categorization (the sensory and category RNN, respectively). We assumed that sensory processing would be based on low-level visual features, while categorization would be based on information that the animal had learned after being trained to perform the task. We then compared the RNN predictions to neural activity. We concluded that the computation a brain area performed was similar to that of the RNN whose predictions were more similar to (had the smallest deviations from) and significantly correlated with brain activity. We trained the networks using LFPs as inputs and labels corresponding to different sensory stimuli or categories as outputs (depending on whether the RNN was processing sensory information or categorizing); due to space limitations, we do not include further details here. We used RSA again and compared brain and network RDMs. Results are presented in Figures 3 and 4.

Figure 3: Deviations between brain and network RDMs for motion processing and categorization. Bars in each panel depict deviations between the RDM of a brain area and each layer in a deep RNN performing motion processing and categorization. There are six pairs of bars, equal to the number of layers. The left bar in each pair corresponds to deep RNN predictions when the network performs sensory processing, while the right bar corresponds to predictions during categorization. Error bars denote standard errors. All deviations were significant at the p < 0.0001 level, with the exception of those marked "n.s." (Kriegeskorte et al., 2008). Asterisks above each bar denote the significance level of the corresponding partial correlations.

Figure 4: Deviations between brain and network RDMs for color processing and categorization. This is similar to Figure 3, except that the deep RNN has learned to process and categorize color as opposed to motion direction stimuli.
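As a concrete illustration of the category-RNN variant described above, the sketch below stacks six LSTM layers over LFP feature sequences and reads out category labels from the final time step. The layer width, sequence shape, and readout are illustrative assumptions; the paper does not specify them here, and the sensory variant would differ only in predicting stimulus identities (e.g., the 7 motion directions) instead of the two categories.

```python
import torch
import torch.nn as nn

class CategoryRNN(nn.Module):
    """Six stacked LSTM layers over LFP sequences (category variant);
    widths and readout are illustrative assumptions."""
    def __init__(self, n_features=64, hidden=128, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=6, batch_first=True)
        self.readout = nn.Linear(hidden, n_classes)  # e.g., up vs down category

    def forward(self, x):                # x: (batch, time, n_features) LFPs
        out, _ = self.lstm(x)
        return self.readout(out[:, -1])  # logits from the final time step

model = CategoryRNN()
lfp = torch.randn(8, 100, 64)            # 8 trials, 100 time bins, 64 channels
logits = model(lfp)                      # (8, 2): motion-category logits
```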
The results of Figures 3 and 4 confirmed those of Figure 2: V4 showed preference towards sensory processing in the color task and motion categorization in the motion task. MT was more selective for categorization during both the motion and color tasks. FEF showed clear preference for sensory processing during both tasks. PFC seemed to prefer sensory processing in the motion task and categorization in the color task. Finally, IT seemed to prefer sensory processing in both tasks, which also coincided with its domain selectivity. See (Pinotsis et al., 2019) for more details.

Conclusions

Our results fit well with earlier results by (Mante et al., 2013). We found that sensory information reaches PFC. Gating of sensory input is absent, and filtering out of irrelevant (sensory) information by earlier brain areas did not occur. Also, (Mante et al., 2013) found that PFC responses during the motion and colour tasks occupy different parts of state space, and that the corresponding trajectories are well separated along the axis of context (task). This can explain the flexible domain selectivity switching between tasks we found here. All in all, we found that representations changed flexibly depending on context (motion vs color task) and level of abstraction (sensory processing vs categorization). The motion task seemed to rely more on categorization, while the color task seemed to be driven by sensory computations. These results are in accord with earlier findings by (Brincat et al., 2018). In that paper, coding in most areas was found to reflect a mixture of sensory and categorical effects. Similarly, we found significant similarities between brain RDMs and RDMs from neural networks that perform both sensory processing and abstract categorization. In the same work, categories arose gradually across the hierarchy. Our analysis, based on deep recurrent neural networks, revealed that gradual emergence is driven by sensory color and more abstract motion direction categorization. Overall, our analysis sheds light on the biological basis of categorization and on differences in selectivity and computations among different brain areas. It paves the way for constructing neural networks that can replicate brain dynamics underlying complex sensorimotor decision making tasks. Elucidating such differences can be important for building automated systems for intelligent decision making in multidimensional domains, like driverless cars, pilot support systems, medical diagnosis algorithms etc. We hope our work can help make progress in this direction.
Neural Zero-Inflated Quality Estimation Model For Automatic Speech Recognition System

The performances of automatic speech recognition (ASR) systems are usually evaluated by the metric word error rate (WER) when manually transcribed data are provided, which are, however, expensive to obtain in real scenarios. In addition, the empirical distribution of WER for most ASR systems usually tends to put a significant mass near zero, making it difficult to simulate with a single continuous distribution. In order to address these two issues of ASR quality estimation (QE), we propose a novel neural zero-inflated model to predict the WER of the ASR result without transcripts. We design a neural zero-inflated Beta regression on top of a bidirectional transformer language model conditional on speech features (speech-BERT). We adopt the pre-training strategy of token-level masked language modeling for speech-BERT as well, and further fine-tune with our zero-inflated layer for the mixture of discrete and continuous outputs. The experimental results show that our approach achieves better performance on WER prediction in the metrics of Pearson correlation and MAE, compared with most existing quality estimation algorithms for ASR or machine translation.

INTRODUCTION

Automatic Speech Recognition (ASR) has made remarkable improvement since the advances of deep learning [1] and powerful computational resources [2]. However, current ASR systems are still not perfect because of the constraints of objective physical conditions, such as the variability of different microphones or background noises. Thus, quality estimation (QE) is a practical desideratum for developing and deploying speech language technology, enabling users and researchers to judge the overall performance of the output, detect bad cases, refine algorithms, and choose among competing systems in a specific target environment. This paper focuses on the situation where golden references are not available for estimating a reliable word error rate (WER). In the research direction of ASR QE without transcripts, a two-stage framework including feature extraction and WER prediction has long been the standard. Classical pioneering works mainly rely on hand-crafted features [3] and utilize them to build linear-regression-based algorithms, including an aggregation method with extremely randomized trees [4], the SVM-based TranscRater [5], and e-WER [6]. In this work, instead of heavily relying on manual labor, we propose to derive the feature representations from a pre-trained conditional bidirectional language model, speech-BERT, which aims to predict the relationships between the raw fbank features and utterances by analyzing them holistically. The training data required for speech-BERT is exactly the same as that for conventional ASR, without any additional human annotations. Subsequently, during the WER prediction stage, we analyze the empirical distribution of WER for most ASR systems (in Fig. 1), and find that WER values tend to concentrate near 0, while the non-zero values approximately follow a Beta or Gaussian distribution between 0 and 1. Therefore, during the fine-tuning procedure, we introduce a neural zero-inflated regression layer on top of the speech-BERT, fitting the target distribution more appropriately.
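Since WER is the quantity the model predicts, it may help to recall how it is computed. The following minimal sketch derives sentence-level WER from word-level edit distance; normalizing by the reference length is one common convention and an assumption of this illustration, not necessarily the exact scorer used by the authors.

```python
# Minimal Levenshtein-based WER between a reference transcript and an
# ASR hypothesis: (substitutions + deletions + insertions) / #ref words.
def wer(reference, hypothesis):
    r, h = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between first i ref words and first j hyp words
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / max(len(r), 1)

print(wer("the cat sat", "the cat sat"))    # 0.0 -> the zero-inflated mass
print(wer("the cat sat", "a cat sat on"))   # > 0 -> the continuous part
```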
In summary, this paper makes the following main contributions. i) We propose a bidirectional language model conditional on speech features, which aims to improve the feature representations for ASR downstream tasks. A bonus experiment shows that tying the parameters of speech-BERT and speech-Transformer can accelerate convergence during training. ii) We introduce a neural zero-inflated Beta regression layer, particularly fitting the empirical distribution of WER. For the gradient back-propagation of the neural Beta regression layer, we design an efficient pre-computation method. iii) Our experimental results demonstrate that our ASR quality estimation model can achieve state-of-the-art performance under fair comparison.

RELATED WORKS

Transformer [7] has been extensively explored in natural language processing [8,9], and has become popular in speech [10,11]. The motivation of our speech-BERT comes from the success of BERT [8], which has demonstrated the importance of bidirectional pre-training for language representations and reduced the need for many heavily-engineered task-specific architectures. We also adopt the loss function of the masked language model p(x_mask | x_unmask) as our training criterion, where x represents all tokens/utterances in one sentence. However, the major difference is that speech-BERT is conceptually a conditional language model, in order to capture the subtle correlations between speech features and utterances as well as the syntactic information. In order to build the conditional masked language model p(x_mask | x_unmask, s), where s is the speech features corresponding to x, we have to discard the single transformer encoder architecture of BERT, since it is difficult to consume two sequences of different modalities (it is doable if using XLM [9]). Instead, we modify the speech-Transformer [10] by changing its auto-regressive decoder to a paralleled memory encoder, resulting in an encoder-memory encoder architecture, where the speech and text domains can be separately controlled by two different encoders. The memory means the outputs of the speech encoder by consuming the spectrogram inputs. When the feature representations are ready, the quality estimation task is typically reduced to either a regression or classification problem, given the type of predicted values. In machine translation (MT) quality estimation [12], a similar real-valued metric, translation error rate (TER), between 0 and 1 is the target of the model, and is predicted at sentence level. The transformer-based predictor-estimator framework [13] established a state-of-the-art record in the WMT 2018 QE competition; it restricts the output within the expected interval by applying a sigmoid function before the regression. However, a standard regression model is probably not suitable to fit the WER of ASR systems. Due to the subjectivity and non-uniqueness of the translation task, it is relatively easy to produce a significant gap between machine and human translations. As aforementioned, we observe that the distribution of WER is empirically more zero-concentrated than that of TER, making a straightforward linear regression easily biased. We propose a neural zero-inflated regression layer, inspired by statistical inflated distributions [14,15], which is capable of simulating a mixed continuous-discrete distribution. For a random variable y following such a distribution, one typical representation is given by a weighted mixture of two distributions:

p(y) = \lambda \sum_{y_i \in Y} p_{y_i} \mathbb{I}_{y=y_i} + (1 - \lambda) f(y),

where Y is a finite set, f is a continuous density, and \mathbb{I}_{y=y_i} is an indicator function whose value equals 1 if y equals y_i.
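To make the mixture above concrete, here is a hedged sketch of the zero-inflated density for the WER case (Y = {0}); the mean/precision parameterization of the Beta component anticipates the regression layer described below, and the specific numbers are purely illustrative.

```python
# Zero-inflated Bernoulli-Beta: p(y=0) = lambda_ (a point mass), and for
# y in (0,1), density (1-lambda_) * Beta(y; a, b) with a = mu*phi,
# b = (1-mu)*phi (mean/precision parameterization).
from scipy.stats import beta

def zero_inflated_beta_pdf(y, lambda_, mu, phi):
    if y == 0.0:
        return lambda_                       # discrete probability mass at zero
    a, b = mu * phi, (1.0 - mu) * phi
    return (1.0 - lambda_) * beta.pdf(y, a, b)

# illustrative numbers: 60% perfect transcripts, mean WER 0.2 otherwise
print(zero_inflated_beta_pdf(0.0, 0.6, 0.2, 10.0))
print(zero_inflated_beta_pdf(0.15, 0.6, 0.2, 10.0))
```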
In our case, where y denotes WER, we have Y = {0} and p_0 = 1. In particular, we recommend assuming the continuous component f(y) to be a Beta distribution for ASR-QE. Therefore, λ simply represents the probability of the event that y takes the value 0 or not, resulting in a mixture of a Bernoulli and a Beta distribution. Additionally, we use one classification neural network to simulate the Bernoulli variable and a regression neural network to simulate the continuous variable, thus resulting in a differentiable deep neural architecture that can be fine-tuned together with the parameters of speech-BERT. In this way, we can divide the ASR-QE modeling into hierarchical multi-task learning, where the first step is to decide whether the ASR output is perfect or not, and the second step is to regress the WER value only for the imperfect ones.

Speech-BERT

The backbone structure of speech-BERT originates from speech-Transformer [10] by adapting the transformer decoder to a memory encoder (see Fig. 2). To achieve this goal, we need two simple modifications. First, we randomly change 15% of the utterances in the transcription at each training step. We introduce a new special token "[mask]", analogous to standard BERT, and substitute it for the tokens requiring masking. Notice that in practice the 15% of utterances that require prediction during pre-training include 12% masked, 1.5% substituted and 1.5% unchanged. Secondly, we also remove the future mask matrix in the self-attention of the decoder, which can be concisely written as a unified formulation:

Attention(Q_x, K_x, V_x) = softmax(Q_x K_x^T / \sqrt{d_k} + I_ST \cdot M) V_x,

where the indicator I_ST equals 1 if the model architecture is speech-Transformer. The K_x, Q_x, V_x are the output keys, queries, and values from the previous layer of the decoder or memory encoder. M is a triangular matrix where M_ij = -∞ if i < j. In the case of the decoder, the self-attention represents the information of all positions in the decoder up to and including the current position. In the case of the memory encoder, it represents the information of all positions except the masked positions. Other details are similar to the standard transformer [7]. The advantage of using the unified formulation is that it allows us to straightforwardly implement multi-task learning in a weights-tying architecture via altering the mask matrix, resulting in the following loss:

L(θ) = L_MLM(θ) + λ_ST \cdot L_ASR(θ),

where the model parameters θ are shared across speech-BERT and speech-Transformer. The extra ASR loss also differentiates our model from standard BERT, whose additional loss is designed for the next sentence prediction task. In the experiments on multi-task learning, we set λ_ST = 0.15 to keep the two different losses at a consistent scale.
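The unified self-attention formulation above can be sketched as follows; this is an illustrative single-head implementation in PyTorch, with the scaling factor assumed to be the standard 1/sqrt(d_k) of [7].

```python
# The same code serves the speech-Transformer decoder (I_ST = 1, causal
# mask on) and the speech-BERT memory encoder (I_ST = 0, mask off).
import math
import torch

def unified_self_attention(q, k, v, is_speech_transformer):
    # q, k, v: (batch, time, dim)
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)
    if is_speech_transformer:
        t = scores.size(-1)
        # M_ij = -inf for i < j, i.e., future positions are masked out
        future = torch.triu(torch.ones(t, t, dtype=torch.bool), diagonal=1)
        scores = scores.masked_fill(future, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

x = torch.randn(2, 10, 64)
decoder_out = unified_self_attention(x, x, x, True)   # causal (ASR decoding)
memory_out = unified_self_attention(x, x, x, False)   # bidirectional (MLM)
```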
Neural Zero-Inflated Regression Layer

The speech-BERT is able to unambiguously output a sequence of feature representations {f_t}, t = 1, ..., T, corresponding to every single utterance in the transcription. Theoretically, we could use the feature representation of an arbitrarily selected token for many downstream tasks, like "[CLS]" in standard BERT, since the self-attention mechanism has successfully integrated all syntactic information into every feature, albeit in different ways. Intuitively, it is reasonable to use another feature fusion layer to encode the sequence of features together. Thus, we use one Bi-LSTM [16] layer to re-encode the features and output a single final encoder state as the feature h for the quality estimation task. Referring to Eq. (1), we can first define a binary classifier to indicate whether the ASR result is flawless or not, i.e., following the Bernoulli distribution Bern(λ). For the subsequent regression model, it is no longer necessary to predict the case of zero WER, due to the existence of the above classifier. This fact naturally advocates the choice of Beta regression, because the Beta distribution is not defined at zero. For statistical distributions, the most important statistics are usually the first two moments, i.e., mean and variance. In our proposal, we mainly model the mean µ, which is the actual target of our final prediction, and derive the variance of the Beta distribution:

Var(y) = µ(1 - µ) / (1 + φ),

where φ is a hyper-parameter that can be interpreted as a precision parameter, which can be estimated from the training data. The parameterized density function is expressed as follows:

f(y; µ, φ) = [Γ(φ) / (Γ(µφ) Γ((1 - µ)φ))] y^{µφ-1} (1 - y)^{(1-µ)φ-1}.

Combining Eqs. (4, 5, 6), the training objective with the neural zero-inflated Beta regression layer is to maximize the log-likelihood of the proposed distribution of WER:

max_θ [ \mathbb{I}_{WER_x=0} log λ + \mathbb{I}_{WER_x>0} log(1 - λ) ] + \mathbb{I}_{WER_x>0} log f(WER_x).

This is a hierarchical loss for two consecutive sub-problems, but it can be optimized simultaneously: the first term requires the whole fine-tuning dataset, while the second term is only fed with the data of inaccurate transcriptions. With this loss function, we use the expected prediction during inference, i.e., p(WER > 0) · y_pred.

Discussion

If Y has K > 1 elements, Eq. (4) can easily be generalized to support a categorical distribution with K + 1 classes via softmax. Using a more succinct parameterization than Eq. (1), the output K + 1 classes actually represent the probabilities 1 - λ and λ p_{y_i}, i = 1, ..., K, where λ is absorbed into p_{y_i} as a single parameter.

Gradient Pre-Computation

A crucial issue for the Beta regression layer is that the gradient computation of log f(y) is not straightforward, since direct auto-differentiation with respect to the training objective is obliged to calculate the gradient of a compound Gamma function Γ(g_µ(x, s; θ_µ)), where g_µ simply denotes the computational graph or function with the inputs x, s and the output µ. Instead, we utilize a gradient pre-computation trick such that the back-propagation becomes less cumbersome, by introducing a surrogate objective that is equivalent to the log-likelihood log f(y) for the purpose of optimization:

\tilde{L}_p = sg(φ(y* - µ*)) · g_µ(x, s; θ_µ),

where y* = log(y / (1 - y)), µ* = ψ(µφ) - ψ((1 - µ)φ), ψ is the digamma function, and sg(·) denotes the stop-gradient operation. The equivalence is essentially in the sense of gradient computation; in other words, the stochastic gradient optimization will remain the same, because with some algebra one can derive the identical relation ∂\tilde{L}_p/∂θ_µ = ∂ log f(y)/∂θ_µ. In the new objective, we have successfully circumvented the direct gradient back-propagation with respect to Γ(g_µ(·)), since the complicated term y* - µ* merely involves forward digamma function computation under a stop-gradient operation, while the term g_µ can readily and efficiently contribute to the back-propagation because it just consists of common operations in deep neural networks.
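A hedged PyTorch sketch of the pre-computation trick: the score factor φ(y* - µ*) is evaluated in the forward pass and detached, so back-propagation only traverses g_µ; names and shapes are illustrative.

```python
import torch

def beta_surrogate_objective(y, mu, phi):
    # y, mu in (0,1); mu = g_mu(x, s; theta_mu) carries the autograd graph
    y_star = torch.log(y / (1.0 - y))                        # logit(y)
    mu_star = torch.digamma(mu * phi) - torch.digamma((1.0 - mu) * phi)
    score = (phi * (y_star - mu_star)).detach()              # stop-gradient
    # d/d(theta) of score*mu equals d log f(y; mu, phi) / d(theta)
    return score * mu

mu = torch.sigmoid(torch.randn(4, requires_grad=True))      # stand-in for g_mu
y = torch.tensor([0.1, 0.2, 0.05, 0.3])
loss = -beta_surrogate_objective(y, mu, torch.tensor(8.0)).mean()
loss.backward()                                              # gradients flow only through mu
```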
EXPERIMENTS

In order to validate the effectiveness of our approach, the quality estimation model for ASR was evaluated by two popular measures, Pearson correlation (larger is better) and mean absolute error (MAE, smaller is better).

Experimental Settings

We conduct our experiments on two types of data. One is a large Mandarin speech recognition corpus containing 20,000 hours of training data with about 20 million sentences, which is used for speech-BERT pre-training. We evaluate the performance of pre-training via the prediction accuracy on masked tokens. The other is a small speech recognition quality estimation dataset of 240 hours, which never appears in the pre-training dataset. The speech recognition system whose quality we want to estimate is an in-house ASR engine based on Kaldi. The WER computed from the ASR results and the ground truth transcripts is the target we predict with our model. Correspondingly, we have two test sets for the quality estimation model, in-domain and out-of-domain, each including 3000 sentences. The acoustic features used for our implemented model are 80-dimensional log-mel filter-bank (fbank) energies computed on a 25 ms window with a 10 ms shift. We stack the consecutive frames within a context window of 4 to produce 320-dimensional features for the computational efficiency of the speech encoder. The speech-BERT is trained on 8 Tesla V-100 GPUs for about 10 days until convergence. The quality estimation model is fine-tuned on 4 Tesla V-100 GPUs for several hours.

We train the speech-BERT model with three different loss functions and summarize the results in Table 1. Basically, we observe that the jointly trained model can achieve performance comparable to the two separately trained tasks. We also visualize the attention between the speech encoder and the text decoder or memory encoder in Fig. 3. The attention weights are averaged over 8 heads, and the overall patterns between joint training and separate training are relatively similar. We prefer to adopt the simultaneously pre-trained model for our downstream quality estimation task, since we hypothesize that the additional supervision in multi-task learning may incorporate more syntactic information into the hidden representations.

Fine-Tuning Results

For the quality estimation model, we first explore the advantage of the zero-inflated model and Beta regression by varying the prior distribution p(y) of WER. The performances of five different last layers are evaluated on the out-of-domain test set and shown in Table 2. Notice that i) we cannot simply apply Beta regression to the last layer without zero-inflation, since zero WER would violate the support of the Beta distribution. ii) The linear regression does not necessarily imply a pure Gaussian distribution, since the output still has to conform with the interval [0, 1]; thus, the mean of the Gaussian distribution cannot be arbitrarily large but is passed through a sigmoid function in advance. iii) As described in ii), the logistic regression differs from linear regression merely in the loss function (cross-entropy vs. mean squared loss). iv) The precision parameter of the Beta regression is a hyper-parameter estimated from the training data satisfying WER > 0. We employ maximum likelihood estimation with another common parameterization φ = a + b, where µ = a / (a + b).
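As a side note, the precision could alternatively be estimated from the first two moments of the nonzero WERs via Var(y) = µ(1 - µ)/(1 + φ); the sketch below shows this moment-based variant, which is simpler than, and not identical to, the maximum likelihood procedure used here.

```python
# Moment-based estimate of the Beta precision phi on the WER > 0 subset:
# Var(y) = mu*(1-mu)/(1+phi)  =>  phi = mu*(1-mu)/Var(y) - 1
import numpy as np

def estimate_phi(wers):
    y = np.asarray([w for w in wers if w > 0.0])
    mu, var = y.mean(), y.var()
    return mu * (1.0 - mu) / var - 1.0

train_wers = [0.0, 0.0, 0.12, 0.3, 0.08, 0.0, 0.22, 0.05]  # toy data
print(estimate_phi(train_wers))
```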
The second experiment, conducted on our in-domain test dataset, compares our ASR-QE model as an integrated pipeline with the state-of-the-art quality estimation model QEBrain [13] from machine translation. Notice that for a fair comparison, we modify the text encoder of QEBrain to exactly the same speech encoder as ours, and its last layer to a zero-inflated regression one. In addition to the previous metrics, we also introduce F1-OK/BAD to evaluate whether the recognition result is acceptable or not, which is prevalent in word-level quality estimation of machine translation. The overall results are illustrated in Table 3, where we label the acceptable recognition results with WER <= 0.14. It shows that speech-BERT outperforms QEBrain in all aspects. Furthermore, we perform a detailed analysis of the Pearson correlation with respect to different sentence lengths in Fig. 4. We simply use two linear regressions to fit the trend of performance decrease as the sentence length grows. QEBrain demonstrates better performance when the sentence is shorter, but speech-BERT has a stable performance across all length ranges. This finding makes sense because longer sentences are likely to have lower or even zero WER, which can be better dealt with by the zero-inflated Beta regression layer.

CONCLUSIONS

In this study, we first proposed a deep architecture, speech-BERT, which is seamlessly connected to speech-Transformer. The key purpose is to pre-train the model on a large-scale ASR dataset, so that the last layer of the whole architecture can be directly used as downstream features without any manual labor. Meanwhile, we designed a neural zero-inflated Beta regression layer, which practically coheres with the empirical distribution of WER. The main intuition is to regress a variable defined as a mixture of discrete and continuous distributions. With the elaborated gradient pre-computation method, the loss function can still be efficiently optimized. However, we also notice that the disadvantage of our approach is the heavy model built upon speech-Transformer, even though no autoregressive decoding is performed during inference. Investigation of building the zero-inflated regression module on the Kaldi framework is left for future work.
Lifting the lid over the pearl: A histological insight

Epithelial pearls and keratin pearls are pathognomonic of squamous cell carcinoma. However, their histogenesis is not well understood. Only a handful of studies have been conducted in the past in this regard. This brief communication aims to understand the formation of these pearls with a few of our own experiences.

Cytokeratins (CKs) are classified by charge and molecular weight as follows:

                            Basic CK            Acidic CK
High molecular weight CK    1, 2, 3, 4, 5, 6    9, 10, 12, 13, 14, 15, 16, 17
Low molecular weight CK     7, 8                18, 19, 20

They are usually found in pairs, and different combinations have been reported in different epithelial cells. One such group of epithelia is stratified squamous epithelium, comprising both keratinized and non-keratinized types. These epithelial cells are known as keratinocytes, due to their ability to produce surface keratin. The difference between the two types lies in the ability of the cells to aggregate the CK filaments into thicker aggregates. [1] The process of keratinization is different from apoptosis. The cellular and extra-cellular cascades are not activated in keratinization, in contrast to apoptosis. Also, the dead superficial cells are intact, while apoptotic cells are fragmented. The pathologies associated with the process of cornification are hyperkeratinization, lack of keratinization, and dyskeratosis. Of all these, dyskeratosis is the least discussed and one of the most intriguing topics in pathology. [2] Dyskeratosis is defined as an event where an individual cell or a group of cells undergo maturation before reaching the surface. Keratin pearls and epithelial pearls are variants of dyskeratosis, found in the connective tissue in cases of squamous cell carcinoma. [2] The histogenesis of the two events is different and will be explained in subsequent paragraphs. CK13 and CK19 are expressed in the stratum spinosum and stratum basale respectively, while CK17 and CK16 expression is normally absent in the epithelium. Mikami et al. [3] reported that CK13 and CK19 disappear in reciprocation with the emergence of CK17 and CK16 in oral epithelial malignancies. Therefore, positive immunostaining for CK17/16 is critical in the diagnosis of neoplastic conditions of epithelial tissue.

EPITHELIAL PEARL FORMATION IN CARCINOMA IN SITU

Epithelial pearls are characteristic histological features of carcinoma in situ (CIS) and are characterized by a central dyskeratotic cell surrounded by whorls of epithelial cells. The histopathogenesis has been described by Al-Eryani et al. [4] They postulated that in CIS lesions the epithelial cells proliferate in the epithelial rete ridges, confined by the basement membrane, before invasion into the lamina propria. These rete ridges grow extensively and entrap the blood vessels in the connective tissue papillae, where these vessels become known as intra-epithelially entrapped blood vessels (IEBVs). Due to the continued proliferation of the epithelial cells, these entrapped blood vessels collapse secondary to the narrowing of the connective tissue space. As a consequence, erythrocytes from these ruptured IEBVs are extravasated into the epithelium. These extravasated erythrocytes evoke haemolysis-derived oxidative stresses coordinated with haemophagocytosis, which results in the epithelial cells undergoing keratinization with induction of CK17 expression. They emphasized that abundant IEBVs are a prerequisite for the formation of epithelial pearls in CIS.
KERATIN PEARL FORMATION IN CONNECTIVE TISSUE

Keratin pearls are whorl-shaped structures seen on histopathological examination of well-differentiated squamous cell carcinoma as concentric rings of keratin around a central core of keratin. It has been proposed that in an epithelial pearl, the malignant epithelial cells, as a result of loss of cohesion, get arranged in a concentric manner. [2] When these cells undergo keratinization, a keratin pearl is formed. The other school of thought states that a single malignant epithelial cell undergoes keratinization or degeneration, thus acting as a nidus around which other keratinized cells are arranged centrifugally in a circular fashion, making a whorl of keratin as seen in psammoma bodies.

We normally prepare cytological smears and cell blocks from the aspirate taken from the pathological lesions reporting to the department. To corroborate the above theory, we made cytological smears and cell blocks from the aspirate taken from metastatic lymph nodes. The cytological smears showed polygonal epithelial cells arranged individually and in groups [Figure 1a]. The remaining aspirate was centrifuged to make a cell block, and histopathological slides were made. The sections showed numerous keratin pearls in a fibrinous background [Figure 1b].

To further substantiate our claim, we made cell blocks from the aspirate of a few odontogenic keratocysts (OKCs). Keratin pearls have never been reported in OKC, but the histopathological slides of the cell block showed keratin pearl formation [Figure 2]. This reconfirms the hypothesis that a single cell acts as a nidus around which other cells are arranged in a concentric centrifugal pattern.

To conclude, cell blocks are good adjuncts to conventional aspiration cytology for obtaining better cellular and architectural details. Through this experiment, we have shown that keratin pearls are formed around a central nidus in a concentric pattern, irrespective of the type of pathology. However, more extensive studies on a larger sample size of aspirates need to be executed to further substantiate this hypothesis.

Financial support and sponsorship

Nil.
Measuring Text Readability by Lexical Relations Retrieved from WordNet

Current readability formulae have often been criticized for being unstable or not valid. They are mostly computed in regression analyses based on intuitively chosen variables and graded readings. This study explores the relation between text readability and the conceptual categories proposed in Prototype Theory. These categories form a hierarchy: basic level words like guitar represent the objects humans interact with most readily. They are acquired by children earlier than their superordinate words (or hypernyms) like stringed instrument and their subordinate words (or hyponyms) like acoustic guitar. Therefore, the readability of a text is presumably associated with the ratio of basic level words it contains. WordNet, a network of meaningfully related words, provides the best online open-source database for studying such lexical relations. Our preliminary studies show that a basic level word can be identified by its frequency of forming compounds (e.g., chair -> armchair) and by the average length difference from its hyponyms. We compared selected high school English textbook readings in terms of their basic level word ratios and their values calculated by several readability formulae. Basic level word ratios turned out to be the only measure positively correlated with the text levels.

Introduction

Reading is at the core of language education. Teachers now have access to a vast amount of texts extractable from the Internet, among other sources, but the materials thus found are rarely classified according to comprehension difficulty. It is not uncommon to see foreign language teachers using texts not compatible with the students' reading abilities. Traditional methods of measuring text readability typically rely on the counting of sentences, words, syllables, or characters. However, these formulae have been criticized for being unstable and incapable of providing deeper information about the text. Recently, the focus of readability formula formation has shifted to the search for meaningful predictors and stronger associations between the variables and comprehension difficulty. We start our research by assuming, in line with Rosch et al.'s Prototype Theory [1], that words form conceptual hierarchies in which words at different hierarchical levels pose different processing difficulties. This processing difficulty is presumably correlated with the reading difficulty of the text containing the words. Putting this logic to work, the measurement of text readability can be done by calculating the average hierarchical levels at which the words of a text fall. Our study comprises two stages. In the preliminary experiments, we utilized WordNet [2], an online lexical database of English, to identify basic level words. In the subsequent experiment, we compared selected readings in terms of their basic level word ratios and their values calculated by several readability formulae. Basic level word ratios turned out to be the only measure positively correlated with the text levels. The remainder of this paper is organized as follows: Section 2 reviews the common indices the traditional readability formulae are based on and the criticism they have received. In Section 3, we first review an approach that centers on ontology structure, and then propose our own ontology-based approach. Section 4 is about methodology: how to identify basic level words, and how to assess the validity of our method against other readability formulae.
Section 5 reports the results of the assessment and discusses the strengths and weaknesses of our approach. In this section, we also suggest what can be done in further research.

Literature Review

In this section we first summarize the indices of the traditional readability formulae and then give an account of the criticism these formulae face.

Indices of Readability - Vocabulary, Syntactic, and Semantic Complexity

The earliest work on readability measurement goes back to Thorndike [3], where word frequency in corpora is considered an important index. This is based on the assumption that the more frequently a word is used, the easier it should be. Followers of this logic have compiled word lists that include either often-used or seldom-used words, whose presence or absence is assumed to be able to determine vocabulary complexity, and thus text complexity. Vocabulary complexity is otherwise measured in terms of word length, e.g., in the Flesch formula [4] and the FOG formula [5]. This is based on another assumption: that the longer a word is, the more difficult it is to comprehend [6]. Many readability formulae presume a correlation between comprehension difficulty and syntactic complexity. For Dale and Chall [7], the Flesch formula [4], and the FOG index [5], syntactic complexity boils down to the average length of sentences in a text. Heilman, Collins-Thompson, Callan, and Eskenazi [8] also take morphological features as a readability index for morphosyntactically rich languages. Das and Roychoudhury's readability index [9] for Bangla has two variables: average sentence length and number of syllables per word. Flesch [4] and Cohen [10] take semantic factors into account by counting the abstract words of a text. Kintsch [11] focuses on propositional density and inferences. Wiener, Rubano, and Shilkret [12] propose a scale based on ten categories of semantic relations including, e.g., temporal ordering and causality. They show that the utterances of fourth-, sixth-, and eighth-grade children can be differentiated on their semantic density scale. Since 1920, more than fifty readability formulae have been proposed in the hope of providing tools to measure readability more accurately and efficaciously [13]. Nonetheless, it is not surprising to see criticism of these formulae, given that reading is a complex process.

Criticism of the Traditional Readability Formulae

One type of criticism questions the link between readability and word lists. Bailin and Grafstein [14] argue that the validity of such a link rests on the prerequisite that words in a language remain relatively stable. However, different socio-cultural groups have different core vocabularies, and rapid cultural change makes many words go out of fashion. The authors also question the validity of measuring vocabulary complexity by word length, showing that many mono- or bi-syllabic words are actually more unfamiliar than longer polysyllabic terms. These authors also point out the flaw of a simple equation between syntactic complexity and sentence length by giving the following sample sentences:

(1) I couldn't answer your e-mail. There was a power outage.
(2) I couldn't answer your e-mail because there was a power outage.

(2) is longer than (1), and is thus computed as more difficult, but the subordinator "because", which explicitly links the author's inability to e-mail to the power outage, actually aids comprehension. The longer passage is accordingly easier than the shorter one.
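For reference, the Flesch Reading Ease score mentioned above combines average sentence length and average syllables per word as RE = 206.835 - 1.015 (words/sentences) - 84.6 (syllables/words); the sketch below implements it with a rough vowel-group syllable counter, which is an assumption of this illustration rather than part of the original formula.

```python
import re

def count_syllables(word):
    # crude heuristic: count groups of consecutive vowel letters
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    n = max(1, len(words))
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

# sentence (1) from the discussion above; higher score = easier
print(flesch_reading_ease("I couldn't answer your e-mail. There was a power outage."))
```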
Hua and Wang [15] point out that researchers typically select, as criterion passages, standard graded texts whose readability has been agreed upon. They then try to sort out the factors that may affect the readability of these texts. Regression analyses are used to determine the independent variables and the parameters of the variables. However, the researchers have no proof of a cause-effect relation between the selected independent variables and the dependent variable, i.e., readability. Challenges to formula formation are also directed at the selection of criterion passages. Schriver [16] argues that readability formulae are inherently unreliable because they depend on criterion passages too short to reflect cohesiveness, too varied to support between-formula comparisons, and too text-oriented to account for the effects of lists, enumerated sequences and tables on text comprehension. The problems of the traditional readability formulae call for a re-examination of the correlation between the indices and the readability they are supposed to reflect.

An ontology-based method of retrieving information

Yan, Li, and Song [17] propose a domain-ontology method to rank documents on the generality (or specificity) scale. A document is more specific if it has broader/deeper Document Scope (DS) and/or tighter Document Cohesion (DC). DS refers to a collection of terms that are matched with the query in a specific domain. If the concepts thus matched are associated with one another more closely, then DC is tighter. The authors in their subsequent study [18] apply DS and DC to compute text readability in domain-specific documents and are able to achieve better prediction than the traditional readability formulae. In what follows we describe the approach we take in this study, which is similar in spirit to Yan et al.'s [18] method.

An Ontology-based Approach to the Study of Lexical Relations

In this small-scale study, we focus on the lexical complexity (or simplicity) of the words in a text and adopt Rosch et al.'s Prototype Theory [1].

Prototype Theory

According to Prototype Theory, our conceptual categorization exhibits a three-level hierarchy: basic levels, superordinate levels, and subordinate levels. Imagine an everyday conversation setting where a person says "Who owns this piano?"; the naming of the object with 'piano' will not strike us as noteworthy until the alternative "Who owns this string instrument?" is brought to our attention. Both terms are truth-conditionally adequate, but only the former is normally used. The word 'piano' conveys a basic level category, while 'string instrument' is a superordinate category. Suppose the piano in our example is of the large, expensive type, i.e., a grand piano; we expect a subordinate category word to be used, as in "Who owns this grand piano?", only when the differentiation between different types of pianos is necessary. The basic level is the privileged level in the hierarchy of categorical conceptualization. Developmentally, basic level words are acquired earlier by children than their superordinate and subordinate words. Conceptually, a basic level category represents the concepts humans interact with most readily. A picture of an apple is easy to draw, while drawing a fruit would be difficult, and drawing a crab apple requires expert knowledge.
Informatively, a basic level category contains a bundle of co-occurring features: an apple has reddish or greenish skin, white pulp, and a round shape, while it is hard to pinpoint the features of 'fruit', and for a layman, hardly any significant features can be added to 'crab apple'. Applying the hierarchical structure of conceptual categorization to lexical relations, we assume that a basic level word is easier for the reader than its superordinate and subordinate words, and that one text should be easier than another if it contains more basic level words.

WordNet - An Ontology-Based Lexical Database of English

WordNet [2] is a large online lexical database of English. The words are interlinked by means of conceptual-semantic and lexical relations. It can be used as a lexical ontology in computational linguistics. Its underlying design principle has much in common with the hierarchical structure proposed in Prototype Theory, illustrated in 3.2.1. In the vertical dimension, the hypernym/hyponym relationships among the nouns can be interpreted as hierarchical relations between conceptual categories. The direct hypernym of 'apple' is 'edible fruit'. One of the direct hyponyms of 'apple' is 'crab apple'. Note, however, that hypernyms and hyponyms are relative notions in WordNet. The word 'crab apple', for instance, is also a hypernym in relation to 'Siberian crab apple'. An ontological tree may well exceed three levels. No tags in WordNet tell us which nouns fall into the basic level category defined in Prototype Theory. In the next section we try to retrieve these nouns.

Experiment 1

We examined twenty basic level words identified by Rosch et al. [1], checking the word length and lexical complexity of these basic level words and of their direct hypernyms as well as direct hyponyms in WordNet [2]. A basic level word is assumed to have these features: (1) it is relatively short (containing fewer letters than its hypernyms/hyponyms on average); (2) its direct hyponyms have more synsets than its direct hypernyms; (3) it is morphologically simple. Notice that some entries in WordNet [2] contain more than one word. We assume that an item composed of two or more words is NOT a basic level word. A lexical entry composed of two or more words is defined as a COMPOUND in this study. The first word of a compound may or may not be a noun, and there may or may not be spaces or hyphens between the component words of a compound. (In the accompanying table, A refers to "single word" and B refers to "compound".)

The results confirm our assumption. First, the average word length (number of letters) of both the hypernyms and the hyponyms is much longer than that of the basic level words. Second, the hyponyms have many more synsets than the hypernyms. Third, in contrast to the basic level words, which are morphologically simple, their direct hypernyms and hyponyms are more complex. Many of the hypernyms are compounds. The hyponyms are even more complex: every basic level word (except 'peach') has at least one compounded hyponym.

Experiment 2

In this experiment, we examined the distribution of the compounds formed by the basic level words and by their hypernyms and hyponyms. We also randomly came up with five more words that seem to fall into the basic level category defined by Rosch et al. [1]. These basic level words (e.g., 'guitar') are boldfaced in each item set in Table 2. (Note: the symbol "#" stands for "number", and "Cpd" refers to "compound". The three dots indicate that the number of hyponyms is too many to count manually; the number is estimated to exceed one thousand.)
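The relations used in Experiments 1 and 2 can be retrieved programmatically; the following sketch uses NLTK's WordNet interface, where the sense key 'apple.n.01' and the use of NLTK instead of the original WordNet tools are assumptions of this illustration.

```python
from nltk.corpus import wordnet as wn

apple = wn.synset("apple.n.01")               # the fruit sense of 'apple'
print([s.name() for s in apple.hypernyms()])  # direct hypernyms, e.g. edible_fruit
print([s.name() for s in apple.hyponyms()])   # direct hyponyms, e.g. crab_apple

# hypernym/hyponym are relative notions: walk the full hyponym closure
full_hyponyms = list(apple.closure(lambda s: s.hyponyms()))
print(len(full_hyponyms))                     # hyponyms at all depths
```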
The most significant finding is that basic level words have the highest compound ratios. In comparison with their hypernyms and hyponyms, they are much more frequently used to form compound words. Although some hyponyms like 'grand piano' and 'crab apple' also have high compound ratios, they should not be taken as basic level items, because such compounds often contain the basic level words (e.g., 'Southern crab apple'), indicating that the ability to form compounds is actually inherited from the basic level words. Our data pose a challenge to Prototype Theory in that a subordinate word of a basic level word may act as a basic level word itself. The word 'card', a hyponym of 'paper', is of this type. With its high compound ratio of 25%, 'card' may also be deemed to be a basic level word. This fact raises another question as to whether a superordinate word may also act as a basic level word itself. Many of the basic level words in our list have three or more levels of hyponyms. It seems that what is cognitively basic may not be low in the ontological tree. A closer look at the distribution of the compounds across the hyponymous levels reveals another interesting pattern. Basic level words have the ability to permeate through two to three levels of hyponyms in forming compounds. By contrast, words at the superordinate levels do not have such ability, and their compounds mostly occur at the direct hyponymous level.

Experiment 3

The goal of this experiment is to show that whether a word belongs to the basic level affects its readability. This in turn affects the readability of a text and should be considered a criterion in measuring text readability. An easy text presumably contains more basic level words than a difficult one. Put in fractional terms, the proportion of basic level words in an easy text is supposed to be higher than that of a more difficult text. To achieve this goal, we need independent readability samples to be compared with our prediction. As readability is a subjective judgment that may vary from one person to another, such independent samples are extremely difficult, if at all possible, to obtain. In this study, we resorted to a pragmatic practice by selecting the readings of English textbooks for senior high school students in Taiwan. Three textbooks from Sanmin Publishing Co., each used in the first semester of a different school year, were selected. We tried to choose the same type of text, so that text type would not act as a confound. Furthermore, since we do not yet have the facilities to run a large-scale experiment, we limited the scope to a two-hundred-word text at each level. Accordingly, the first two hundred words of the first reading subjectively judged as narrative were extracted from each textbook (Appendix 1). All the nouns occurring in these texts, except proper names and pronouns, were searched for in WordNet [2]. Considering the fact that for a word with more than one sense, the distribution of hyponyms differs from one sense to another, we searched for the hyponyms of each word in the particular sense occurring in the selected readings. We know that this practice, if used in a large-scale study, is applicable only if sense tagging is available, and we hope that it will be available in the near future.
Based on the results of the two preliminary experiments, we assume that basic level words have at least the following two characteristics: (1) they have a strong ability to form compounded hyponyms; (2) their word length is shorter than the average word length of their direct hyponyms. These characteristics can be further simplified into the Filter Condition to pick out basic level words: (1) compound ratio of the full hyponym set ≥ 25%; (2) average word length of the direct hyponyms minus target word length ≥ 4. Note in passing that the second criterion differs fundamentally from the commonly used word-length indices: it measures a length difference relative to the hyponyms, not absolute word length.

Results and Discussion

The three selected readings contain sixty nouns in total, of which twenty-one conform to the proposed Filter Condition for basic level words. They are given in Table 3 below. A comprehensive list of all the sixty nouns is given in Appendix 2 at the end of this paper. Note in passing that the level numbers refer to the presumed difficulty levels of the selected readings. Level 1 is presumably the easiest; Level 3, the hardest. These numbers should not be taken as a ratio measurement. Level 3, for example, is not assumed to be three times harder than Level 1. We intend these numbers to stand for ordinal relations. In order to measure the text difficulty, basic level word ratios of the selected texts were computed. Table 4 shows the statistics. Diagrammatically, it is clear in Figure 1 that the basic level word ratios decrease as the difficulty levels of the selected readings increase. The text from Level 1 has the highest basic level word ratio; the text from Level 3 has the lowest. This finding conforms to the levels of these textbooks, and demonstrates the usefulness of the basic level word concept in the measurement of readability. Table 5 shows the readability scores of the selected readings measured by several readability formulae. Figure 2 displays the overall tendency computed by these formulae: Level 1 is the easiest, while Level 2 and Level 3 are at about the same difficulty level. The readability formulae seem unable to decipher the difference between the texts of Level 2 and Level 3, while our basic level word ratio can easily show their different difficulty levels. This paper is just the first step towards measuring readability by lexical relations retrieved from WordNet [2]. Twenty-five percent of the twenty basic level words defined by Rosch et al. [1] are NOT identified by our Filter Condition (e.g., 'truck', 'shirt', 'socks'). Among the identified basic level words in the three selected texts, some look rather dubious to us (e.g., 'barometer', 'technology'). The Filter Condition proposed in this study certainly leaves room to be fine-tuned and improved in at least two respects. First, the two criteria of compound ratios and word length difference have been used as sufficient conditions. We will explore the possibility of weighting these criteria in our subsequent research. Second, in addition to the lexical relations proposed in this study, there are presumably other lexical relations between basic level words and their hypernyms/hyponyms that are retrievable via WordNet [2]. Doubts can also be raised as to whether all basic level words are equally readable or easy. Can it be that some basic level words are in fact more difficult than others, and that some hypernyms/hyponyms of certain basic level words are actually easier than certain basic level words?
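A minimal sketch of the Filter Condition, assuming NLTK's WordNet as a stand-in backend; compounds are detected via underscores in lemma names, and the target word length is taken from the first lemma, both simplifying assumptions of this illustration.

```python
from nltk.corpus import wordnet as wn

def is_basic_level(synset):
    """Filter Condition: full-hyponym compound ratio >= 25% and
    (avg direct-hyponym word length) - (target word length) >= 4."""
    full = list(synset.closure(lambda s: s.hyponyms()))
    direct = synset.hyponyms()
    if not full or not direct:
        return False
    lemmas = [l for s in full for l in s.lemma_names()]
    compound_ratio = sum("_" in l for l in lemmas) / len(lemmas)
    target_len = len(synset.lemma_names()[0])
    direct_lens = [len(l) for s in direct for l in s.lemma_names()]
    avg_direct = sum(direct_lens) / len(direct_lens)
    return compound_ratio >= 0.25 and (avg_direct - target_len) >= 4

print(is_basic_level(wn.synset("piano.n.01")))
```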
We thank our reviewers for raising the following questions, and will put them on the agenda of our subsequent study: (1) The examined words in this study are all nouns. Can we find relationships between verbs, adjectives, and even adverbs like the hypernym/hyponym relationships between the basic level "nouns"? The tentative answer is yes and no. Take the example of the verb 'run'. It has hypernyms in WordNet ('speed', 'travel rapidly', etc.). It also has a subordinate lexical relation called 'troponym', which is similar to the hyponym relation of nouns. Admittedly, English verbs do not form compounds as often as English nouns, but other lexical relations may exist between the verbs, and the relations are likely to be retrievable. (2) Whether the relations between the lexical units in WordNet are correlated with word frequency. We hope we will be able to answer this question in a study of larger scale. Laying out the groundwork for further research, we aim to tackle the following issues too. All traditional readability formulae implicitly suppose an isomorphic relation between form and meaning, as if each word had the same meaning no matter where it occurs. We acknowledge that one of the biggest challenges, and one of the most badly needed techniques, in measuring readability is to disambiguate the various senses of a word in text, since the same word may have highly divergent readability in different senses. Another tacit assumption made by the traditional readability formulae is that the units of all lexical items are single words. This assumption overlooks many compounds and fixed expressions and affects the validity of these formulae. Although our research has provided the study of readability a brand new perspective and has offered exciting prospects, our challenges are still many and the road is still long.
Abstract. Total atmospheric OH reactivities (kOH) have been measured as reciprocal OH lifetimes by a newly developed instrument at a rural site in the densely populated Pearl River Delta (PRD) in Southern China in summer 2006. The deployed technique, LP-LIF, uses laser flash photolysis (LP) for artificial OH generation and laser-induced fluorescence (LIF) to measure the time-dependent OH decay in samples of ambient air. The reactivities observed at PRD covered a range from 10 s−1 to 120 s−1, indicating a large load of chemical reactants. On average, kOH exhibited a pronounced diurnal profile with a mean maximum value of 50 s−1 at daybreak and a mean minimum value of 20 s−1 at noon. The comparison of reactivities calculated from measured trace gases with measured kOH reveals a missing reactivity of about a factor of 2 at day and night. The reactivity explained by measured trace gases was dominated by anthropogenic pollutants (e.g., CO, NOx, light alkenes and aromatic hydrocarbons) at night, while it was strongly influenced by local, biogenic emissions of isoprene during the day. Box model calculations initialized by measured parameters reproduce the observed OH reactivity well and suggest that the missing reactivity is contributed by unmeasured, secondary chemistry products (mainly aldehydes and ketones) that were photochemically formed by hydrocarbon oxidation. Overall, kOH was dominated by organic compounds, which had a maximum contribution of 85% in the afternoon. The paper demonstrates the usefulness of direct reactivity measurements, emphasizes the need for direct measurements of oxygenated organic compounds in atmospheric chemistry studies, and discusses uncertainties in the modelling of OVOC reactivities.

Introduction

The hydroxyl radical (OH) is the primary oxidant in the troposphere. It reacts with most atmospheric trace gases and thereby controls their rates of removal from the atmosphere (Ehhalt, 1999). In many cases, oxidation of primary pollutants by OH leads to formation of hydroperoxy (HO2) and organic peroxy radicals (RO2, R = organic group), which are important intermediates in the photochemical formation of ozone and organic aerosols. A good understanding of tropospheric OH and its related chemistry is therefore indispensable for reliable prediction of the atmospheric self-cleansing and the formation of secondary atmospheric pollutants (Brasseur et al., 2003). Tropospheric OH is produced primarily by a few relatively well-known processes, of which the UV photolysis of ozone is the most important one (Matsumi et al., 2002). Other relevant processes include the photolysis of nitrous acid and the ozonolysis of alkenes. OH exhibits a high reactivity towards many atmospheric trace components such as carbon monoxide (CO), nitrogen oxides (NO, NO2) and volatile organic compounds (VOCs), whose oxidation proceeds through peroxy (RO2) and alkoxy (RO) radical intermediates. Here, RO represents short-lived alkoxy radicals and R'O indicates carbonyl compounds (aldehydes, ketones). OH loss can also occur by recombination reactions, which ultimately remove radicals from the atmosphere. In this class of reactions, the association between OH and NO2 is the most important example. Atmospheric OH is short-lived (<1 s), reaching a steady state of production and loss within a few seconds.
The steady-state OH concentration can be written as

[OH] = P_OH / kOH.

In this equation, P_OH is the total production rate of OH from primary production (e.g., R2) and radical recycling reactions (e.g., R9), while kOH represents the pseudo-first-order rate coefficient of OH loss in ambient air. kOH is also called the total OH reactivity and is equivalent to the reciprocal atmospheric OH lifetime, 1/τOH:

kOH = Σ_i kOH+Xi [Xi].

Here, [Xi] represents the concentration of a reactive component (CO, NOx, VOCs etc.) in ambient air, kOH+Xi denotes the corresponding bimolecular reaction rate constant, and kOH+Xi [Xi] the reactivity of Xi. A major uncertainty in atmospheric chemistry results from the incomplete knowledge of the number and abundance of reactive components that are present in the atmosphere. Besides well-known pollutants like CO and NOx, a large number of probably more than 10^5 different VOCs exists in the troposphere (Goldstein and Galbally, 2007), but less than one hundred VOCs are measured routinely in field campaigns. The measured VOC species are thought to represent the major organic reactivity. Recent field experiments, however, demonstrate that a considerable fraction of organic components missed by current measurement techniques may have significant influence on atmospheric photochemistry (Lewis et al., 2000; Di Carlo et al., 2004; Sadanaga et al., 2005; Holzinger et al., 2005; Goldstein and Galbally, 2007; Heald et al., 2008; Mao et al., 2009a). Thus, incomplete VOC measurements and unknown atmospheric components can introduce considerable uncertainty in model predictions, for example of atmospheric OH (Poppe et al., 1994; McKeen et al., 1997). A step towards solving this problem was made by the development of perturbation techniques that introduce artificially generated OH into samples of ambient air and measure the total OH reactivity directly. Some field instruments measure kOH as the inverse of the atmospheric OH lifetime, either in a reaction flow tube with a movable OH injector (Kovacs and Brune, 2001; Mao et al., 2009a; Ingham et al., 2009) or by using a laser pump-and-probe technique (Calpini et al., 1999; Sadanaga et al., 2004b; Hofzumahaus et al., 2009). Another concept involves the comparative kOH measurement in a flow cell against the known OH reactivity of an added organic reagent (Sinha et al., 2008). Using these various techniques, kOH values have been observed between 1 s−1 in clean air and 200 s−1 in extremely polluted air in the atmospheric boundary layer (Table 1). Measurements of kOH provide valuable information and can be useful in atmospheric chemistry studies in different ways. First, experimental kOH data can be compared with the chemical reactivity that would be expected by calculation from individual trace gas measurements (kOH^calc), if all atmospheric OH reactants were completely measured and if each contribution kOH+Xi [Xi] were correctly determined. The ratio kOH / kOH^calc allows quantification of the amount of reactivity that remains unmeasured or unidentified in field experiments. Missing reactivity has been reported for different environments, including marine, rural and urban sites, with measured-to-calculated reactivity ratios as high as 3 (Table 1). Second, kOH data can be used to estimate the atmospheric VOC reactivity, if measured data for non-VOC compounds (Yi = CO, NOx etc.) are simultaneously available:

kOH(VOC) = kOH − Σ_i kOH+Yi [Yi].
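To illustrate the kOH^calc summation above, here is a sketch with a handful of illustrative rate constants (approximate ~298 K, 1 atm values; an evaluated kinetics database should be consulted for real analyses).

```python
# kOH^calc = sum_i k_OH+Xi * [Xi], in s-1.
# Rate constants in cm3 molecule-1 s-1, illustrative only.
K_OH = {"CO": 2.4e-13, "CH4": 6.4e-15, "NO2": 1.1e-11, "isoprene": 1.0e-10}

def number_density(ppb, m_air=2.46e19):
    """Convert a mixing ratio (ppb) to molecules cm-3 at ~298 K, 1 atm."""
    return ppb * 1e-9 * m_air

def k_oh_calc(mixing_ratios_ppb):
    return sum(K_OH[x] * number_density(c) for x, c in mixing_ratios_ppb.items())

# illustrative polluted-rural mix; result is on the order of 10 s-1
print(k_oh_calc({"CO": 500, "CH4": 1800, "NO2": 10, "isoprene": 3}))
```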
Shirley et al. (2006) and Ren et al. (2005, 2006b) have used such k_OH(VOC) data together with typical VOC speciation patterns as constraints for model predictions of OH, when direct VOC measurements were not completely available. Third, OH reactivity measurements are useful to evaluate experimentally the chemical OH budget (Martinez et al., 2003; Ren et al., 2003b; Shirley et al., 2006; Hofzumahaus et al., 2009). Possible unidentified OH production processes can be quantified by comparison of the experimental OH loss rate, k_OH × [OH], calculated from measured OH and k_OH, with P_OH data derived from measured OH precursors. Such a study was carried out by Hofzumahaus et al. (2009) for data from a field campaign in the Pearl River Delta (PRD) in China in summer 2006. By comparing the total OH loss and production rates, a significant missing OH source was discovered, which sustained high levels of OH in the order of 10^7 radicals per cm^3 at conditions of low NO (<1 ppb) and high OH reactivities (~20 s^-1). Finally, k_OH data can be used to test the capability of photochemistry models to simulate atmospheric OH loss rates.

In this paper, OH reactivities from the aforementioned PRD field campaign are presented. Observed reactivities are compared to k_OH^calc calculations from individual trace gas measurements and to simulated reactivities k_OH^model from a photochemical box model. The paper presents the chemical speciation of the reactants that contribute most to the OH loss rate at PRD and investigates how well modelled, yet unmeasured, secondary chemical products explain the missing reactivity observed during the campaign.

Field site and campaign

PRD is located in Southern China in Guangdong province and represents one of the major industrialized regions in Asia, with more than 20 million inhabitants. PRD includes the megacities of Hong Kong, Shenzhen and Guangzhou, and many fast-developing medium and small sized cities that are linked by a dense traffic network (Zhang et al., 2008b). Owing to the fast growing economy and urbanization, air pollution has greatly increased in this region during the last decade (Richter et al., 2005; Shao et al., 2006; Tie et al., 2006; Chan and Yao, 2008; Zhang et al., 2007, 2008a,b). Within the "Program of Regional Integrated Experiments of Air Quality over the Pearl River Delta", the photochemistry field campaign PRIDE-PRD2006 was conducted in summer 2006, aiming to investigate gas-phase chemistry and aerosol processes in a rural environment near Guangzhou city (Zhang et al., 2010, in preparation; Hua et al., 2008). The campaign was performed from 3 to 30 July 2006 at Backgarden (23.5° N, 113.03° E), a regional background site about 60 km northwest of Guangzhou. Backgarden is located near a water reservoir and is surrounded by farmland (peanuts, lichees, trees). It experienced little local emissions from traffic, but biomass burning was occasionally observed on nearby agricultural fields. Temperature and relative humidity were generally high, 28-36 °C and 60-90%, respectively, as is expected for a subtropical region during the rainy season. Extended rain fall occurred during two periods under the influence of the typhoons Bilis (15-18 July) and Kaemi (26-29 July). At the field site, local wind speeds were generally low, with values mostly less than 2 m/s (Fig. 1). Such low wind speeds are typical in the inland of PRD during the summer season and favor accumulation of air pollution (Chan and Yao, 2008).
Measurement instruments were operated at Backgarden to characterize the local atmosphere with respect to its gas-phase composition (Hua et al., 2008; Hofzumahaus et al., 2009) and aerosol abundance and properties (Liu et al., 2008a; Garland et al., 2008; Xiao et al., 2009; Li et al., 2010; Rose et al., 2010). Mixing ratios of VOCs, CO, SO2, O3, H2O2, NOx and photolysis frequencies were measured at 10 m above the ground on top of a hotel building that was exclusively used by the measurement team. Additional measurements of radical concentrations of OH and HO2, atmospheric OH reactivities, mixing ratios of HONO and meteorological data were performed nearby at a height of 7 m on top of two stacked sea containers. Atmospheric chemical species considered in this work are listed in Tables 2 and 3.

Measurement of k_OH

Total atmospheric OH reactivities were measured as the inverse chemical OH lifetimes by a newly developed instrument, which is based on a pump-probe concept similar to the one explored by Calpini et al. (1999) and developed for field measurement by Sadanaga et al. (2004b). Laser flash photolysis (LP) of ozone is used to produce OH in a sample of ambient air, and laser-induced fluorescence (LIF) is applied to monitor the time-dependent OH decay. A short description of the LP-LIF instrument is given below, while technical details will be reported elsewhere (Lou et al., manuscript in preparation).

For measurement, ambient air is sampled through an 8 m long inlet line (Silcosteel, 8 mm I.D.) and is passed through a laminar flow tube at a rate of 20 litre/min at atmospheric pressure. During the PRD campaign, the temperature of the sampling line and flow tube was kept at 40 °C, slightly above ambient temperature, to avoid possible condensation of water vapor in the instrument, which was housed in an air-conditioned field container. The flow tube has a total length of 80 cm and an internal diameter of 40 mm. A pulsed laser beam (266 nm, 10-20 mJ, FWHM 10 ns) from a frequency-quadrupled Nd:YAG laser (Big Sky, CFR200) is expanded to a diameter of 30 mm and is passed longitudinally through the centre of the flow tube, thereby generating OH radicals by flash photolysis of ozone (reactions R1, R2). The initial OH concentration was always less than 5 × 10^9 cm^-3 during the field campaign. The OH radicals react subsequently with the trace gases in the carrier flow on a time scale of tens to hundreds of milliseconds. About 50 cm downstream of the tube inlet, part of the air flow (3 litre/min) is diverted through an inlet nozzle into a low-pressure cell for OH detection by LIF. The detection cell is essentially of the same type as for ambient OH radical measurement (Holland et al., 2003). The probe laser radiation (308 nm) comes from a tunable, frequency-doubled 8.5 kHz pulsed dye laser (New Laser Generation, Tintura) which is shared between the OH reactivity instrument and the instrument for measurement of atmospheric HOx concentrations. With this setup, the time-dependent chemical OH decay is probed by LIF while the flowing air passes the inlet of the detection cell. The OH fluorescence signal is collected in a time-resolved mode by a multichannel scaler (Becker and Hickl, PM400A) and is stored in an array of time bins of 1 ms width over a time period of 2 s. Decay curves are accumulated (signal-averaged) over successive photolysis pulses at a rate of 0.5 Hz to obtain a good signal-to-noise ratio. Examples of OH decay curves measured during the PRD field campaign 2006 are shown in Fig. 2.
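A quick plausibility check of the sampling geometry: the quoted flow rate, tube diameter and the 50 cm distance to the LIF pick-off determine the available OH observation time in a plug-flow approximation. The short sketch below uses only numbers stated above; the plug-flow assumption itself is a simplification.

```python
import math

# Plug-flow estimate of the transit time in the reactivity flow tube,
# using the dimensions quoted in the text (length 80 cm, I.D. 40 mm,
# 20 litre/min, LIF pick-off ~50 cm downstream of the inlet).
flow_lpm = 20.0                       # volumetric flow, litre/min
flow_cm3_s = flow_lpm * 1000.0 / 60   # -> cm^3/s
radius_cm = 4.0 / 2                   # tube I.D. 40 mm -> radius 2 cm
area_cm2 = math.pi * radius_cm**2

velocity = flow_cm3_s / area_cm2      # mean linear velocity, cm/s
t_to_lif = 50.0 / velocity            # transit time to the LIF nozzle, s

print(f"mean velocity = {velocity:.1f} cm/s, "
      f"transit time to LIF nozzle = {t_to_lif:.1f} s")
# ~26.5 cm/s and ~1.9 s, consistent with the 2 s recording window of
# the multichannel scaler described above.
```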
OH reactivities (reciprocal OH lifetimes) are obtained by non-linear least-squares fits (CURVEFIT IDL routine, Research Systems, Inc.) of the exponential decay curves. The precision of the derived k_OH values is between 4-10% (1σ). The integration time for the k_OH measurements was typically 1-3 min.

In clean synthetic air (Messer Griesheim Austria, 99.999%) with admixed water vapor (1%) and ozone (50 ppb, see below), zero decays resulting from OH wall loss were observed. At PRD, the zero decays had rate coefficients of 1.4 s^-1, determined in regular zero-air measurements. Gaseous components contributed very little to the zero decay rates, i.e., 0.05 s^-1 came from analyzed contaminants (VOCs, NOx), 0.1 s^-1 from the added ozone and less than 0.04 s^-1 from OH self-reaction (OH + OH). The corresponding total gas-phase reactivity (<0.2 s^-1) was smaller than the uncertainty of the zero-decay measurements and was neglected in the zero-decay correction of ambient air measurements.

Ozone from an ozone generator was added to the gas samples when they contained less than 10 ppb O3. This was the case during zero-decay measurements and during air measurements at nighttime when ambient ozone dropped to low values. On these occasions, a controlled flow of 0.4 l/min of synthetic air was ozonized and mixed into the main flow (20 l/min), resulting in a mixing ratio of about 50 ppb O3 entering the flow tube. The marginal dilution (2%) of the main flow was corrected in the evaluation of the measured OH reactivities. The change of reactivity which may be caused by the reaction of air components with O3 was neglected, because the expected depletion of reactants (e.g., NO, NO2, isoprene, or alkenes) by O3 is less than 0.1% over the measured lifetimes of OH.

The accuracy of the instrument was tested before and after the campaign using air mixtures (e.g., CO in synthetic air) of known reactivities in the range of 0-190 s^-1. The measurements were found to be linear for reactivities up to 60 s^-1 and were accurate within 10% (Hofzumahaus et al., 2009). At k_OH larger than 60 s^-1, an increasing negative bias was noted in the measurement data, which was caused by the non-exponential curvature of the OH decay curves in the first 20 ms (Fig. 2). The deviations from the true reactivity were, for example, -11% at 80 s^-1 and -18% at 100 s^-1. This non-linearity affects less than 2% of the data points measured at PRD and was corrected using a parametrization based on results of the test measurements (Lou et al., manuscript in preparation). Note that the curvature responsible for the non-linearity is independent of the chemical conditions. It partly comes from the rise of the OH fluorescence signal when OH propagates initially into the LIF detection cell after the laser flash. Furthermore, the curvature may be influenced by a non-homogeneous spatial distribution of OH in the flow tube near the inlet nozzle, which deflects some of the photolysis laser radiation.
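The decay analysis itself is a standard non-linear fit; the paper used an IDL CURVEFIT routine. A minimal Python equivalent of the monoexponential fit with zero-decay (wall-loss) subtraction, applied here to a synthetic trace rather than measured data, could look like this:

```python
import numpy as np
from scipy.optimize import curve_fit

# Monoexponential fit of an OH decay, S(t) = A * exp(-k * t) + B,
# analogous to the CURVEFIT analysis described in the text.
def decay(t, amp, k, bkg):
    return amp * np.exp(-k * t) + bkg

t = np.arange(0.0, 2.0, 1e-3)                  # 1 ms bins over 2 s
rng = np.random.default_rng(0)
true_k = 30.0                                  # s^-1, synthetic example
signal = decay(t, 1000.0, true_k, 2.0) + rng.normal(0.0, 5.0, t.size)

popt, pcov = curve_fit(decay, t, signal, p0=(signal[0], 10.0, 0.0))
k_fit = popt[1]
k_err = np.sqrt(np.diag(pcov))[1]

k_zero = 1.4            # s^-1, OH wall loss from zero-air measurements
k_oh = k_fit - k_zero   # atmospheric OH reactivity
print(f"fitted decay = {k_fit:.1f} +/- {k_err:.1f} s^-1, "
      f"k_OH = {k_oh:.1f} s^-1")
```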
Recycling of HO2 to OH (reaction R9) can slow down the observed OH decay in the reactor if the reaction rate of HO2 and NO approaches the OH decay rate (Kovacs et al., 2003). This is generally the case after most OH has initially reacted with CO and VOCs, provided that sufficient NO is available. As a result, the observed k_OH becomes systematically smaller than the true k_OH. The data correction for the NO-dependent effect can be large in measurement systems that inject not only OH into the reactor, but also HO2. This is the case in flow-tube instruments with movable OH injectors, which generate OH and HO2 as co-products by 185 nm photolysis of water vapor (Kovacs et al., 2003; Shirley et al., 2006; Ingham et al., 2009). For example, a correction factor of 1.4 at 5 ppb NO has been reported by Kovacs et al. (2003).

The LP-LIF technique is much less affected by NO, since very little HO2 is initially generated. Sadanaga et al. (2004b) noted that HO2 which is produced by 266 nm laser photolysis of ambient formaldehyde (HCHO) can be neglected in their instrument. In our case, the estimated laser-generated HO2 concentration is 4 × 10^6 cm^-3 at 20 ppb HCHO and 10 mJ laser energy. This radical concentration is an upper limit, as tropospheric HCHO mixing ratios were less than 20 ppb at PRD (Sect. 6.2). Given an initial HO2/OH ratio in the order of 10^-3 in our instrument, the conversion of laser-generated HO2 to OH is negligible, and only radical recycling following OH consumption (R3-R9) may become relevant. The error caused by recycling reaches at most 5% at 5 ppb NO at PRD conditions. At higher NO (up to 30 ppb), biexponential behaviour of the OH decays became noticeable, which can be explained by radical recycling. In this case, a biexponential fit was applied to the measured OH decay curves. The faster of the two fitted decay rate coefficients was used as an estimator of the true k_OH. The validity of this approximation was confirmed by numerical simulations and laboratory experiments, demonstrating that the error of this approach is less than 10% for PRD conditions (Lou et al., manuscript in preparation).

The measurements in the flow tube were performed at an elevated fixed temperature, which was up to 12 °C higher than ambient temperature. This caused a small negative bias in the measured k_OH for two reasons (cf. Eq. 2). First, the number densities of the reactants were reduced in the flow tube by -0.3% K^-1 according to the ideal gas law. Second, the OH rate coefficients of the major reactants (see Results and Discussion) have activation energies which are either zero (e.g., CO, HCHO) or negative (e.g., NO, NO2, alkenes, aromatics, other aldehydes). Considering the relative composition of measured and modelled species (cf. Sect. 4), the kinetic temperature dependence of k_OH is estimated to be about -0.2% K^-1. Combining both temperature effects, the atmospheric reactivity is expected to be up to 6% larger than the k_OH measured at the experimental conditions during the campaign.
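A sketch of the biexponential treatment used at high NO, again on synthetic data (the amplitudes and rate values are invented for illustration); the final line also reproduces the ~6% upper bound of the temperature correction derived above:

```python
import numpy as np
from scipy.optimize import curve_fit

# Biexponential fit used at high NO, where HO2 -> OH recycling
# superimposes a slow component on the OH decay. The faster of the
# two fitted rate coefficients estimates the true k_OH.
def biexp(t, a1, k1, a2, k2):
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

t = np.arange(0.0, 2.0, 1e-3)
rng = np.random.default_rng(1)
trace = biexp(t, 800.0, 40.0, 200.0, 4.0) + rng.normal(0.0, 5.0, t.size)

popt, _ = curve_fit(biexp, t, trace, p0=(500.0, 20.0, 100.0, 1.0))
k_fast = max(popt[1], popt[3])   # faster component ~ k_OH
print(f"k_OH estimate = {k_fast:.1f} s^-1")

# Combined temperature bias of about -0.5 %/K (-0.3 %/K density,
# -0.2 %/K kinetics); for a flow tube up to 12 K warmer than ambient
# air this gives the ~6 % maximum correction quoted in the text:
print(f"max. temperature correction ~ {0.5 * 12:.0f} %")
```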
Trace gas measurements

OH concentrations were measured by LIF spectroscopy (Holland et al., 2003; Hofzumahaus et al., 2009). Ambient air was sampled by gas expansion into a low-pressure (3.5 hPa) fluorescence cell, where the OH radicals were electronically excited by tunable, pulsed UV laser radiation at a wavelength of 308 nm. The resulting OH resonance fluorescence was detected by time-delayed gated photon counting. The measurement system was calibrated using quantitative photolysis of water vapor in synthetic air at 185 nm as an OH source. For the PRD campaign, the limit of detection was (0.5-1)×10^6 cm^-3 at 5 min integration time and the accuracy was 20% (1σ).

NOx and O3 were measured by commercial instruments (Takegawa et al., 2006). NO was detected by NO-O3 chemiluminescence (Thermo Electron, Model 42CTL), while NO2 was first converted to NO in a photolytical reactor (Droplet Measurement Technologies, Model BLC). The instruments were calibrated using NO standard gas mixtures and gas-phase titration for NO2. The 1-min detection limits for NO and NO2 were 50 pptv and 170 pptv, respectively, and the corresponding accuracies were 7% and 13%. Ozone was measured using an ultraviolet (UV) absorption instrument (Thermo Electron, Model 49C) with a 1σ precision of 0.3 ppb and an accuracy of 5%.

CO measurements were obtained by a NDIR gas analyzer (Thermo Electron, Model 48C) with an integration time of 1 minute. Air samples were dried before measurement in order to avoid interference from water vapour. The overall precision and accuracy were estimated to be 4 ppb and 20 ppb, respectively, at a CO mixing ratio of 400 ppb.

HONO was measured by a commercial instrument (QUMA, Wuppertal) using the LOPAP technique developed by Kleffmann et al. (2006). The instrument had a detection limit of 7 pptv at a time resolution of 5 min, and the accuracy of calibration was estimated to be 10% (1σ) (X. Li, personal communication).

Measurements of alkanes, alkenes and aromatic compounds were performed using an automated gas chromatograph (Agilent, Model 6890 GC) equipped with dual columns and dual flame ionization detectors (Wang et al., 2008). Two sorbent traps were packed with different molecular sieves for different ranges of VOCs. A porous-layer open tubular (PLOT) Al2O3/KCl column (Hewlett Packard) separated C3-C6 hydrocarbons, and a DB-1 column (J&W) C6-C12 hydrocarbons. Calibration was performed by injecting various amounts of a gas standard mixture containing the 51 target species with concentrations in the range between 3 ppb and 15 ppb (Spectra Gases, Branchburg, NJ, USA). The accuracy for most of the measured VOCs, estimated by comparing two traceable gas standards (68 C2-C11 NMHCs, Scott Marrin Inc.; 57 C2-C12 NMHCs, Spectra Gases Inc., USA), is within 10%. The time resolution was one hour, the precisions mostly 1-3% and the detection limits 6-70 pptv.

Photolysis frequencies were calculated from solar actinic-flux spectra measured with a spectroradiometer (Meteorologie Consult) (Bohn et al., 2008). The radiometer was calibrated with a PTB-traceable irradiance standard before and after the campaign. The accuracy of derived photolysis frequencies is estimated to be about 10% at solar zenith angles smaller than 80°.

Model calculations

A zero-dimensional chemical model was used to calculate concentrations of radicals and photochemical products of nitrogen and carbon compounds. The underlying chemical mechanism (Karl et al., 2006) has been used before by Rohrer and Berresheim (2006) and Hofzumahaus et al. (2009).
It is based on the Regional Atmospheric Chemical Mechanism (RACM) (Stockwell et al., 1997), which was upgraded with the isoprene degradation scheme by Karl et al. (2006). The latter scheme is a modified version (26 reactions) of the mechanism by Geiger et al. (2003), who prepared a condensed version of the Mainz Isoprene Mechanism (MIM, 44 reactions) (Pöschl et al., 2000). The isoprene chemistry by Karl et al. (2006) directly replaces the complete isoprene scheme of RACM and differs from the preceding mechanisms (Stockwell et al., 1997; Pöschl et al., 2000; Geiger et al., 2003) by treating MVK (methyl vinyl ketone) and MACR (methacrolein) as separate species rather than lumping them into a single species. Furthermore, it introduces CAR4 as a substitute for the group of C4 and C5 hydroxy carbonyl compounds plus 3-methyl furane. The isoprene degradation scheme is fully documented by Karl et al. (2006) in Table 4 of their paper.

In this work, the model calculations were constrained to measurements of O3, HONO, NO, NO2, CO, CH4, C3-C12 VOCs, photolysis frequencies, water vapor, ambient temperature and pressure (Table 2). Concentrations of ethane and ethene were set to fixed values of 1.5 ppb and 3 ppb, respectively, estimated from a few canister samples. The H2 mixing ratio was assumed to be 550 ppb. The model was operated in a time-dependent mode with 5-min time resolution. First, a 5-min input dataset was generated by interpolation from measurements which had different time resolutions. During the model run, all measured input data were kept constant in each five-minute interval, and the calculated species concentrations were used as initial condition for the next 5-min time step. Each model run started with 2 days spin-up time to reach steady-state conditions for long-lived species. Additional loss by deposition, with a corresponding lifetime of 24 h for calculated species, was assumed to avoid build-up of unrealistic amounts of secondary products. Some additional model runs were performed as sensitivity studies. First, OH was constrained to match the observations. Second, the assumed value of the deposition lifetime for calculated species was varied. Details and results of the sensitivity runs are given below (Sect. 6.3).
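The bookkeeping of this time-stepping scheme, holding measured constraints fixed over 5-min intervals and applying a first-order deposition loss with a 24 h lifetime to calculated species, can be illustrated with a toy single-species version. The generic product P, its production rate and its OH rate constant below are invented placeholders; this is not the RACM mechanism.

```python
import numpy as np

# Toy illustration of the box-model time stepping described above:
# constraints are held constant over 5-min intervals, and a calculated
# species is integrated with chemical production/loss plus first-order
# deposition (24 h lifetime). All rates are invented for illustration.
DT = 300.0                  # model time step, s (5 min)
TAU_DEP = 24.0 * 3600.0     # deposition lifetime, s
K_OH_P = 1.0e-11            # assumed OH + P rate constant, cm^3 s^-1

def step(p, production, oh):
    """Advance [P] (cm^-3) over one 5-min interval (simple Euler step)."""
    loss = K_OH_P * oh + 1.0 / TAU_DEP   # first-order loss frequency, s^-1
    return p + (production - loss * p) * DT

# Hypothetical constraints interpolated to 5-min resolution (2-day
# spin-up, as in the text): a day/night OH cycle and constant production.
hours = np.arange(0.0, 48.0, DT / 3600.0)
oh = 5e6 * np.clip(np.sin(np.pi * (hours % 24) / 12.0), 0.0, None)
production = 2e6 * np.ones_like(hours)   # molecules cm^-3 s^-1

p = 0.0
for prod_i, oh_i in zip(production, oh):
    p = step(p, prod_i, oh_i)

print(f"[P] after spin-up = {p:.2e} cm^-3")
```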
Measurement results

k_OH measurements are presented in Fig. 3 together with the measured concentrations of four selected trace gases (CO, NO2, propene, isoprene) that react with OH. The time series of k_OH has two major breaks, which were caused by rainy weather conditions during typhoon Bilis on 15-18 July and an electrical power blackout at Backgarden on 22 July. The OH reactivity data exhibit diurnal patterns which are most clearly pronounced during the sunny period from 19 to 25 July, with daily k_OH minima between 10 s^-1 and 30 s^-1 at noontime and peak values up to 120 s^-1 at night. The temporal pattern is highly correlated with variations of anthropogenically emitted pollutants such as CO, NO2 and propene (Fig. 3, lower panels). The pollutants accumulated in the shallow boundary layer at night and were depleted by photochemical degradation and vertical mixing within the rising boundary layer during daytime. Isoprene, which is emitted mostly by plants, behaved differently and reached maximum values in the late afternoon, as expected from its temperature-dependent emission rate. The observed isoprene (up to 5 ppb, equivalent to an OH reactivity of up to 12 s^-1) made a significant contribution to k_OH, but the variations of isoprene are not clearly discernible in k_OH, which was dominated by the diurnal cycle of anthropogenic pollutants.

On two days (24-25 July) the k_OH data appear more scattered than on other days. The rather large short-term variability cannot be explained by instrumental noise, but was likely caused by local emissions in the surrounding neighborhood. Smoke plumes from small fires were observed on these days on nearby agricultural fields, where harvest residues were burnt during daytime. During the nights of 23-24 and 24-25 July, a smell was noticeable that came presumably from waste combustion. The observation of combustion coincides with a strong enhancement of the measured aerosol mass concentration (factor 2-3 for PM1 and PM10), along with a change of aerosol optical properties which are indicative of smoldering fires (Garland et al., 2008). Finally, before midnight from 25 to 26 July, k_OH dropped sharply by a factor of 3, coinciding with the onset of rain. The rain fall in the wake of typhoon Kaemi continued throughout 26 July and the reactivity remained at 20 s^-1.

In order to test how well the measured k_OH can be explained by the measured set of trace gases, OH reactivities were calculated according to Eq. (2) using the compounds listed in Table 2. The corresponding rate coefficients were taken from the RACM model and were applied for the instrumental conditions of the k_OH measurements (1 atm, 313 K). The resulting k_OH^calc data correlate well with the measurements, but are lower on average by a factor of 2 (Fig. 4). The discrepancy is even larger on 25 and 26 July and reaches a factor of 3-4 before sunrise, when the combustion smell was noticeable. In general, the differences between measured and calculated reactivities are significantly larger than the experimental error of k_OH (10%) and the estimated total uncertainty of k_OH^calc (20%), suggesting an additional missing reactivity. The measured k_OH data and their mean diurnal variation are displayed in Fig. 5 (upper panel), showing a minimum mean value of 20 s^-1 at local noon and a maximum mean value of 50 s^-1 at daybreak. The relative contributions of measured CO, NOx and hydrocarbons are shown in the lower panel of Fig. 5. Interestingly, their cumulated contribution (black squared symbols) is nearly independent of time and explains half of the measured OH reactivity. The largest fraction of the explained reactivity comes from the group of measured VOCs, which are further analyzed in Fig. 6.
Here, the dominating speciated VOC reactivities are displayed versus time, normalized to the total reactivity of all measured hydrocarbons. Isoprene was seemingly the most important measured OH reactant during the daytime, with a contribution of up to 70% of the total hydrocarbon reactivity, whereas simple alkenes (propene, butenes, pentenes) and aromatic compounds (styrene, toluene, xylenes and trimethylbenzenes) were dominant at night. The question arises which atmospheric components were responsible for the missing 50% of OH reactivity that is not explained by measured CO, NOx and hydrocarbons. Other compounds that were measured (O3, HONO, SO2, H2O2 and CH3OOH), but not included in k_OH^calc, cannot explain the discrepancy, as their cumulated contribution is on average less than 1 s^-1 and therefore negligible. The missing reactivity must be due to unmeasured primary reactants which were emitted directly by anthropogenic or biogenic sources, or to unmeasured secondary chemical species that have been formed photochemically in the atmosphere. These possibilities are discussed below.

Model results

In order to estimate the possible contribution of unmeasured secondary pollutants to the missing OH reactivity, a photochemical box model was used (Sect. 4). Total OH reactivities (k_OH^model) were calculated from measured trace gases (Table 2) and all model-generated products, assuming instrumental conditions (1 atm, 313 K) for the OH rate coefficients. The resulting model-derived reactivities are higher than k_OH^calc by a factor of 1.4-2.6. They exhibit a similarly good correlation to measured k_OH as k_OH^calc, but are in much better absolute agreement with the observations (Fig. 4). The model results agree well with the observed reactivities during daytime (except around noon on 24 July), but still underestimate the observations at night. The general trend can be seen more clearly in Fig. 7 (lower panel), where the mean diurnal profile of the modelled-to-measured k_OH ratio is displayed. From 08:00 to 17:00 the model and measurement results agree within ±10%. At night, however, the model underpredicts the measured reactivity systematically, with the largest discrepancy of 30% at sunrise.

Atmospheric k_OH variability

The OH reactivities observed at Backgarden (10-120 s^-1) are among the highest ever measured in field campaigns (Table 1). While clean air can have a reactivity as low as ~1 s^-1 in the remote marine boundary layer (Brauers et al., 2001), the highest k_OH values have been measured in polluted megacities, with reported values of 10-200 s^-1 in Mexico City (Shirley et al., 2006) and 10-100 s^-1 in Tokyo (Sadanaga et al., 2005; Yoshino et al., 2006; Chatani et al., 2009) and New York City (Ren et al., 2003a, 2006a). Maximum values in Mexico City and New York City were observed during the morning and evening rush hours, while minimum reactivities with mean values of 20-25 s^-1 were measured in the afternoon at pollution levels of typically 20 ppb NOx. High reactivities reaching about 60 s^-1 were also observed in the tropical rainforests of Suriname (Sinha et al., 2008) and Borneo (Ingham et al., 2009), where natural emissions of isoprene and possibly other biogenic VOCs dominated the OH reactivity.

The situation at the rural site in PRD is rather complex, as it was strongly influenced by both anthropogenic and biogenic emissions. At night, the mean OH reactivity had values between 35 s^-1 and 50 s^-1, which were dominated by anthropogenic pollutants (Figs. 5 and 6).
In the afternoon, the mean reactivity was about 20 s^-1, which is similar to the observed values in New York and Mexico City. However, unlike in these cities, the afternoon values at PRD were biogenically influenced by several ppb of isoprene at relatively low NOx (1-2 ppb). As mentioned above, about half of the observed reactivity at PRD can be explained by measured CO, NOx and hydrocarbons, of which isoprene, light alkenes and aromatics make the largest contributions to the explained reactivity.

The high CO and NOx concentrations at night were probably caused by local combustion of coal for cooking and heating purposes, and presumably burning of waste and biomass. Furthermore, advected traffic emissions were strongly enhanced at night (Garland et al., 2008), as a result of local traffic regulations banning heavy diesel trucks during daytime from 7:00 to 21:00 (Bradsher, 2007).

VOCs can have many sources in PRD (Liu et al., 2008b). The possible origin of light monoalkenes is car traffic and biomass burning, and aromatic compounds may come from traffic exhaust, painting and industrial use of solvents. Isoprene is predominantly a biogenic tracer, but can also be a product of biomass burning and car exhaust (Liu et al., 2008b).

Missing reactivity

The missing reactivity identified in Figs. 4 and 5 may come from unmeasured primary reactants emitted by anthropogenic or biogenic sources, or from unmeasured chemical species that were photochemically produced in the atmosphere. The good agreement between the modelled (base case) and measured reactivities during the time from 8:00 to 17:00 (Fig. 4 and Fig. 7, lower panel) suggests that photochemically formed products can explain the missing OH reactivity at daytime. The chemical partitioning of the modelled reactivity indicates that formaldehyde (HCHO), acet- and higher aldehydes (ALD), and oxygenated isoprene products (OISO) are probably the most important OH reactants among the unmeasured photochemical products (Fig. 7, upper panel). These oxidation products contribute 30-40%, and other oxygenated compounds (ketones, dicarbonyl compounds, alcohols, hydroperoxides, nitrates, etc.) 10-20%, of the total OH reactivity.
Overall, organic species dominated k_OH and explain up to 85% (15:00 CNST) of the measured OH reactivity at daytime. It is noteworthy that isoprene (17%) and its oxidation products OISO (24%) played a prominent role in the afternoon, accounting for about 40% of the total reactivity. The major modelled oxygenated VOCs (OVOCs) have mean daytime concentrations of about 12 ppb HCHO, 3.8 ppb ALD, 1.1 ppb MVK, 0.6 ppb MACR, 0.9 ppb CAR4, and 0.44 ppb β-hydroxy hydroperoxide (ISHP, from isoprene). These compounds sum up to about 70% of the OVOC reactivity, while the remainder comes from small contributions of other model species. It is difficult to test the validity of the modelled concentrations rigorously, since in-situ measurements of OVOCs are missing, but the plausibility for some components can be checked. Formaldehyde, for example, may be compared to remote-sensing data that were measured during the field campaign by multi-axis differential optical absorption spectroscopy (MAX-DOAS). Assuming a well-mixed atmospheric boundary layer, an average mixing ratio of 11 ± 7 ppb HCHO was determined from differential slant column densities on 9 cloud-free days, with higher mixing ratios (8-38 ppb) in the morning and lower values (3-12 ppb) in the afternoon (X. Li, personal communication). Ambient midday levels of aldehydes were also measured in PRD in summer 2003 by Feng et al. (2005), who observed mean values of 10 ppb HCHO and 5-6 ppb CH3CHO in downtown and semi-rural areas of Guangzhou. Our modelled aldehyde levels are of similar magnitude as the reported observations, though it must be noted that the modelled values have significant uncertainties (see Sect. 6.3). It can further be noted that the value of the modelled MVK+MACR to isoprene ratio (1.2) is in the range (1.1-1.4) that was measured during daytime by proton-transfer-reaction mass spectrometry (PTR-MS) in a rural area outside of Guangzhou in 2008 (M. Shao, personal communication). This comparison, however, is uncertain, since the ratio of MVK+MACR to isoprene is variable (typically 0.4 to 2.4) depending on the photochemical age of the air mass (e.g., Karl et al., 2009; Shao et al., 2009).

A large contribution of aldehydes and other OVOCs to atmospheric reactivity was also reported in other studies in a variety of environments (Shao et al., 2009; Mao et al., 2010b; Steiner et al., 2008; Emmerson et al., 2007; Yoshino et al., 2006; Lewis et al., 2005). In Central California, US, Steiner et al. (2008) compared VOC reactivities from the regional Community Multiscale Air Quality model (CMAQ) with calculated reactivities for urban and agricultural measurement stations. In general, aldehydes and other OVOCs were found to contribute 30-50% of the modelled urban VOC reactivity (up to 30 s^-1), while OVOCs accounted for up to 90% of the modelled reactivity in agricultural regions. In large cities (Houston, New York City, Mexico City), total k_OH was found to be dominated by anthropogenic hydrocarbons, CO and NOx, while OVOCs contributed between 11-24% during summertime (Mao et al., 2010b). In another urban study, Yoshino et al. (2006) analyzed OH reactivities measured in Tokyo in different seasons.
In summer, measured hydrocarbons and OVOCs explained 26% and 14% of the total reactivity, respectively. A missing reactivity of 25% was ascribed to unmeasured OVOCs, summing up to an estimated total OVOC fraction of 39%. During wintertime, hydrocarbons accounted for 25%, OVOCs for 3-4%, and unmeasured compounds for 5% of the total reactivity. These numbers indicate a much lower photochemical activity and lower OVOC production in winter (Yoshino et al., 2006). Detailed speciated measurements of VOCs and OVOCs were obtained during summertime in Beijing by Shao et al. (2009), who analyzed the partitioning of the organic reactivity and its impact on photochemical ozone formation. The most abundant OVOCs were formaldehyde, acetaldehyde, acetone, methanol and ethanol, which contributed in total about 40% of the calculated organic reactivity. The measured diurnal pattern of acetaldehyde revealed that aldehydes were mostly photochemically formed (Shao et al., 2009). An essential role of OVOCs was also demonstrated in the TORCH campaign in south-east England in summer 2003, where aldehydes were not only an important sink of OH, but also a significant photolytical source of HOx (Emmerson et al., 2007). Even in relatively clean environments, OVOCs have been found to play a significant role. Lewis et al. (2005) report observations of organic species at a remote observatory (Mace Head) on the North Atlantic coast of Ireland, where acetone, methanol and acetaldehyde contributed up to 80% of the calculated organic reactivity. Their data analysis demonstrates that a large fraction of OVOCs were formed by secondary chemistry in the background atmosphere. Airborne measurements of atmospheric reactants over the relatively clean Pacific Ocean have shown a contribution of about 20% of OVOCs to the loss rate of OH (Mao et al., 2009a). Direct k_OH measurements in the lower 2 km, however, indicate a missing reactivity of more than a factor of 2, suggesting that some highly reactive VOCs had not been measured (Mao et al., 2009a).

In the present study, the box model systematically underestimates the measured OH reactivity at night and in the early morning by as much as 30% on average (Figs. 4 and 5). The discrepancies may be caused by systematic model errors (see below) or indicate unmeasured reactive emissions that are not captured by the model. Unknown emissions are the most likely cause of the large differences that occur during the nights of 24 and 25 July, when a combustion smell was clearly noticeable at the measurement site. As the burning material and the combustion conditions are not known, it is difficult to specify the emitted pollutants. A multitude of reactants can be emitted by combustion of organic materials, including various OVOCs, semivolatile organic compounds (SVOC), polyaromatic hydrocarbons (PAHs), and halogen- and nitrogen-containing organic species (Andreae and Merlet, 2001; Lemieux et al., 2004), none of which were measured in this field study.

Model uncertainty

Given a missing reactivity of a factor of 2, it is surprising how well the RACM model reproduces the observed OH reactivity during the daytime (Fig. 7, lower panel).
The model produces the right amount of OVOCs and other oxidation products to fill the gap between the calculated and measured reactivity. It is an interesting question whether the good agreement during the daytime is just fortuitous and whether the systematic underestimation of k_OH by the model at night is significant.

The error of the model is at least as large as that of k_OH^calc (±20%), which is determined by the measured air components and the corresponding rate constants. Additional uncertainty is caused by the reactivity of the modelled oxidation products. There are model uncertainties related to the loss and production of the calculated species. One source of error is the simplified treatment of the product loss by transport, assuming a fixed lifetime of 24 h for deposition of all calculated species in the model. This lifetime is equivalent to an assumed deposition velocity of 1.2 cm/s and a boundary layer height of 1000 m (τ_dep = H / v_d = 1000 m / 0.012 m s^-1 ≈ 8.3 × 10^4 s ≈ 23 h). The simplified treatment ignores the variability of the atmospheric boundary layer, wet deposition by rain events, and differences in the deposition velocities of different species. In order to test the sensitivity of k_OH^model to the assumed deposition lifetime, alternate lifetime values of 12 h and 48 h were used. The modified model yields time series of k_OH^model which look almost the same as for the base case (Fig. 8, lower panel). The oxidation product reactivities are 10-30% different, causing only marginal differences (5-15%) in the total OH reactivity. When the deposition loss is modulated by the diurnal variation of the boundary layer height (500-2000 m), derived from MAX-DOAS measurements (Li et al., 2010), similarly small deviations from the base case result are found (not shown here). These tests demonstrate that the treatment of dry deposition loss in this work contributes only a relatively small error to k_OH^model.

Another uncertainty is related to the inability of the model to reproduce afternoon OH concentrations, which were measured together with k_OH at Backgarden. Hofzumahaus et al. (2009) found that the same model is capable of describing measured OH at NO > 1 ppb in the morning, but underestimates the observed OH concentration of (1-2)×10^7 cm^-3 by a factor of 3-5 at NO levels of 0.1-0.2 ppb in the afternoon. This shortcoming influences the predicted concentrations of secondary products which are produced by OH oxidation of VOCs. In order to test the influence of OH on k_OH^model, additional sensitivity calculations were performed. The OH concentration was varied by a factor of 2, using twice and half of the modelled OH from the base run as model input for the sensitivity runs. The resulting time series of k_OH^model look again very similar to the base case (Fig. 8, upper panel). The relatively small sensitivity to OH variations can be explained by the fact that OH reactions are not only responsible for the production of OVOCs, but also contribute to their chemical removal, which partly cancels the influence of OH.

In order to be more realistic, the model was constrained to the measured OH concentrations in another test (Fig. 4, orange line). The modelled reactivity increases compared to the base case (Fig. 4, red line), improving the agreement with the measured nighttime reactivities. Daytime values of k_OH^model show relatively small changes on particular days (e.g., on 20, 21, 23 and 26 July) and remain in good agreement with the measurements. However, k_OH is considerably overpredicted by the model in the afternoon on several days, i.e., by a factor of 1.5 on 13, 14 and 25 July, and a factor of 2 on 24 July.
It was discussed by Hofzumahaus et al. (2009) that a hypothetical additional primary source of OH could be introduced in the model to match the calculated to the much higher observed OH values in the afternoon. However, because the OH to HO2 ratio is largely maintained, this would also increase the modelled HO2 by a large factor, meaning a strong overprediction of the observed HO2 concentrations. Likewise, OVOCs may be systematically overpredicted if the model is constrained to measured OH only. In order to match both the observed OH and HO2, Hofzumahaus et al. (2009) proposed an additional recycling mechanism. Two hypothetical reactions with an unknown reactant X were introduced into the model, RO2 + X → HO2 followed by HO2 + X → OH, both of similar rate as in the case of the corresponding NO reactions. With this modification, the observed OH and HO2 concentrations could be well reproduced, assuming a reactive equivalent of 0.85 ppb NO for X (Hofzumahaus et al., 2009). If this mechanism is applied in the present work, the modelled reactivity becomes larger than in the base case, but mostly smaller than in the case when only measured OH is used as an additional constraint (Fig. 4). The model result (RACM, plus additional recycling by X) is, with a few exceptions (see below), in generally good agreement with the measured k_OH. During daytime, the agreement is nearly as good as for the base case, and during nighttime the agreement is much improved compared to the base case results.

The strong overprediction of k_OH in the afternoon of 13, 14, 24 and 25 July remains even if additional recycling by X is considered in the model. Given the relatively weak sensitivity of k_OH^model to ground deposition (see results above), it is unlikely that a specific error in the deposition loss can explain the too large k_OH^model values on these particular days. In principle, heterogeneous loss of OVOC species on particles (not included in the model) could have played a pronounced role on 24 and 25 July, when the atmospheric aerosol load was a factor of 2-3 higher than on the other days of the campaign (Garland et al., 2008). For example, unexpectedly low concentrations of gaseous glyoxal have been observed in urban air (Volkamer et al., 2007), consistent with reactive uptake onto particles reported from laboratory experiments (Liggio et al., 2005; Kroll et al., 2005). Glyoxal, however, contributes only about 2% of the total gas-phase reactivity in the present model. In order to compensate for the overprediction of k_OH in the model, heterogeneous uptake would be required for formaldehyde, acetaldehyde, MVK, MACR etc., which together contribute the major OVOC reactivity in the present work. These carbonyl compounds are known to have low physical solubilities in liquid water (Sander et al., 2006; Iraci et al., 1999). Reactive uptake might play a role for acidic solutions, but published results remain controversial for the latter conditions (e.g., Noziere et al., 2006; Jang et al., 2005; Kroll et al., 2005). Since the enhanced aerosol load on 24 and 25 July contained a large fraction of soot (Garland et al., 2008), reactive uptake is not a likely explanation for the smaller-than-expected OVOC reactivity on these particular two days. It would also not explain the discrepancy between modelled and measured k_OH on the other two days (13-14 July), which did not exhibit elevated aerosol concentrations.
It is interesting that the four exceptional days (13, 14, 24 and 25 July) lay directly ahead of two typhoons and showed a meteorological situation that was different from all other days. During most of the campaign, wind came from the south and southeast at low wind speeds (Fig. 1). On 13-14 July and 24-25 July, the local wind slowed down to almost zero and turned into the opposite direction, coming then from the north. The four days exhibited the highest isoprene concentrations during the campaign, peaking in the afternoon at around 3-4 ppb (Fig. 3). The elevated isoprene concentrations and the very low wind speed point to the influence of a strong local isoprene source at close distance north of the field site. This is plausible, since there was an extended area with trees in the direct neighborhood north of the measurement building, whereas a drinking water reservoir with a fetch of about 1 km was located in the southern direction. It is likely that freshly emitted isoprene from the local source (north) was not significantly photochemically aged when it reached the sampling point of the k_OH instrument, but was mostly photochemically degraded downwind of the measurement site. The model, however, builds up isoprene oxidation products during daytime until the products reach steady state within a few hours, a condition which was likely not fulfilled at the measurement site on 13-14 July and 24-25 July. In contrast, on the other days with continuously prevailing southern wind, primary VOCs were probably photochemically aged when they reached Backgarden, as implicitly assumed by the model.

Figure 9 summarizes the measurement-model comparison, showing mean diurnal profiles for the whole period of 11-26 July. k_OH^model increases on average by about 30% compared to the base case (red line) when additional radical recycling by X is included in the model (green line). This leads to good agreement with measured k_OH at night, but to a strong overprediction by a factor of 1.5 in the afternoon. When the data for northern wind direction (13-14 July and 24-25 July) are excluded, the overprediction during daytime is removed. The remaining deviations of the model from measured k_OH lie on average within ±20% (blue line), which is not significant in view of the general model uncertainties discussed above.

Conclusions

Total atmospheric OH reactivities have been measured by a newly developed instrument at a rural site in the densely populated Pearl River Delta in Southern China in summer 2006. The deployed instrument uses laser flash photolysis of ozone to produce pulses of OH in samples of ambient air and applies laser-induced fluorescence to monitor the time-dependent decay of laser-generated OH. The experimental reactivities were derived as the inverse of chemical OH lifetimes with a total accuracy of about 10% at a time resolution of 1-3 min. In addition, a comprehensive set of atmospheric trace gases and meteorological parameters was measured.
The observed OH reactivities are among the highest ever reported by direct methods (Table 1), spanning a range from 10 s^-1 to 120 s^-1. On average, the reactivities exhibited a clear diurnal profile with a mean maximum value of 50 s^-1 at daybreak and a mean minimum value of 20 s^-1 at noon. The magnitude of these values and their diurnal variation are similar to what has been observed by other research groups in New York, Tokyo, or Mexico City, which are highly polluted, anthropogenically influenced locations. The measurement site in this study, however, was a green, rural site in the densely populated PRD. The OH reactivity was characterized by a background of anthropogenic pollutants at night and was dominated by local biogenic emissions of isoprene during the day.

The comparison of reactivities calculated from measured CO, NOx and hydrocarbons with measured k_OH has revealed a missing reactivity of unmeasured species of about a factor of 2 at day and night. Box model calculations suggest that the missing reactivity is mainly contributed by OVOCs that were photochemically formed by oxidation of hydrocarbons. The agreement between measured and modelled k_OH is surprisingly good in view of the simplified model assumptions. The model reproduces the observed, time-dependent variability of k_OH and agrees within 10% at daytime, but underpredicts the measured nighttime values by 30%. If the model is additionally constrained by the measured concentrations of OH, the agreement at night is improved, but daytime values become episodically overpredicted by as much as a factor of 2. This overprediction is found only when the wind came from northern directions and the measurement site was influenced by local isoprene emissions, which were probably not significantly photochemically degraded to OVOCs. If additional radical recycling is included in the model, as postulated by Hofzumahaus et al. (2009) in order to explain both the observed OH and HO2 concentrations, the model calculates an OH reactivity that is in generally good agreement with the measured k_OH when the prevailing wind came from southern directions. For the latter condition, the systematic differences between modelled and measured reactivities are in the range of ±20%, which is not significant in view of the general model uncertainties.
Based on the measured trace gases and model-calculated secondary products, k_OH was found to be dominated by VOCs and OVOCs, with a maximum total organic contribution of 85% in the afternoon. The major VOCs were light alkenes and aromatics at night and isoprene during the day. According to the model, aldehydes and oxidized isoprene products were the dominating OVOC species, having a larger share of the reactivity than the hydrocarbons during the daytime. The importance of OVOCs has also been recognized in other studies, which have analyzed atmospheric OH reactivities in marine, rural and urban air (see discussion above). In these environments, the OVOC reactivity was often of similar magnitude as that of hydrocarbons, if conditions were favoring photochemical oxidation of VOCs. This common result emphasizes the need for widespread OVOC measurements of high data quality; yet, at present, such measurements are not generally available on a routine basis (Apel et al., 2008). Improved measurement techniques for organic species and further measurements of atmospheric OH reactivities will be needed to make further progress in understanding tropospheric chemistry and its role in the photochemical formation of ozone and aerosols.

Fig. 1. Wind speed and direction at the Backgarden field site from 5 to 30 July. The ordinate scale refers to the north-south component of wind speed. The orientation of the black arrows denotes wind direction and the arrow length represents total wind speed. The horizontal blue double arrows show the time periods of the typhoons Bilis and Kaemi.

Fig. 2. OH decays measured in real time by LP-LIF in different samples of air at the Backgarden field site in PRD on 11 July 2006. Dots denote the measured time-dependent fluorescence signals recorded in time bins of 1 ms width, after an off-resonance background signal (ca. 1-2 rel. units) has been subtracted. The solid lines are exponential fits to the decay curves, yielding OH reactivities as reciprocal 1/e lifetimes (red line 17 s^-1; black line 30 s^-1; blue line 54 s^-1). The decay rate coefficients include the contribution of OH wall loss (1.4 s^-1) in the flow tube, which needs to be subtracted to obtain atmospheric OH reactivities.

Fig. 3. Total OH reactivity (a) and volume mixing ratios of CO (b), NO2 (c), isoprene (d) and propene (e) measured at the Backgarden site in PRD from 11 July to 26 July 2006. The right axes of panels (b)-(e) show the corresponding reactivities of the individual trace gases. Data gaps on 15-18 July and 22 July were caused by heavy rain during typhoon Bilis and an electrical power blackout, respectively. Vertical lines denote midnight.

Fig. 4. Comparison of measured, calculated and modelled k_OH data at 1 atm and 313 K. Dots: measured values. Blue line: k_OH^calc values calculated from measured trace gases (CO, NOx, hydrocarbons). Red line: model results, k_OH^model (RACM, base case). Green line: model results, k_OH^model (RACM, plus additional recycling by X). Orange line: base-case model using measured OH concentrations as an additional model constraint. Vertical lines denote midnight. Note the different y-axis scale in the lower panel.

Fig. 5.
Upper panel: diurnal variation of measured total OH reactivities at PRD for the days of 11-26 July 2006. Individual k_OH data are represented by gray dots and their half-hourly mean diurnal profile by the solid line. Lower panel: cumulative reactivities contributed by measured trace gases, normalized to the measured k_OH: CO (triangles), CO + NOx (open circles), CO + NOx + hydrocarbons (squares). The time of day is given in Chinese Standard Time (CNST = UTC+8 h), with solar noon at 12:30 CNST.

Fig. 7. Upper panel: cumulative OH reactivities of measured and modelled compounds, normalized to the modelled total OH reactivity. The data have been averaged for the days of 11-26 July 2006. Note: CO, NOx, alkanes, alkenes, aromatics, and isoprene were used as model constraints (Table 2). HCHO, ALD, OISO and Other are products calculated by the RACM model (base case). Lower panel: ratio of modelled to measured k_OH versus time of day. Red line: model base case. Green line: model plus additional radical recycling by X, all days. Blue line: model plus additional radical recycling by X, days with northerly wind (13/14/24/25 July) excluded.

Fig. 8. Sensitivity test of modelled OH reactivities. Upper panel: variation of the OH concentration in the model runs. The black line represents the RACM model (base case) as shown in Fig. 4. The blue and green lines represent model results prescribing twice and half of the OH concentrations calculated in the base case run, respectively. Lower panel: variation of the deposition lifetime of modelled species. Black line: 24 h lifetime (base case); blue line: 48 h lifetime; green line: 12 h lifetime. Vertical lines denote midnight.

Table 1. OH reactivities measured in the atmospheric boundary layer. (a) Range of measured data; measurements are ground-based unless otherwise noted. (b) Missing reactivity (MR) expressed by the ratio k_OH / k_OH^calc. (c) Measured species that have been used to calculate k_OH^calc. Abbreviations: S = inorganic compounds (O3, CO, NOx, etc.) plus hydrocarbons (including isoprene); F = formaldehyde; O = OVOCs other than formaldehyde; B = biogenic VOCs other than isoprene. (d) Median ± standard deviation. (e) Measurements aboard an aircraft. (f) Average value of two hours of measurements. (g) k_OH^calc from measured methane, isoprene and OVOCs.

Table 2. Atmospheric compounds used to calculate k_OH values. (l) Median value. (m) High-performance liquid chromatography.

Table 3. Volatile organic compounds measured by online GC.
Schisandrin B Alleviates Diabetic Cardiac Autonomic Neuropathy Induced by P2X7 Receptor in Superior Cervical Ganglion via NLRP3

Diabetic cardiovascular autonomic neuropathy (DCAN) is a common complication of diabetes mellitus which brings about high mortality, high morbidity, and a large economic burden to society. Compensatory tachycardia after myocardial ischemia caused by DCAN can increase myocardial injury and result in further damage to cardiac function. The inflammation induced by hyperglycemia can increase P2X7 receptor expression in the superior cervical ganglion (SCG), resulting in nerve damage. It has been shown that inhibiting the expression of the P2X7 receptor in the superior cervical ganglion can ameliorate the nociceptive signaling dysregulation induced by DCAN. However, an effective drug for decreasing P2X7 receptor expression has not been found. Schisandrin B is a traditional Chinese medicine which has anti-inflammatory and antioxidant effects. Whether Schisandrin B can decrease the expression of the P2X7 receptor in diabetic rats to protect the cardiovascular system was investigated in this study. After diabetic model rats were established, Schisandrin B and shRNA against the P2X7 receptor were given to different groups to verify the impact of Schisandrin B on the expression of the P2X7 receptor. Pathological blood pressure, heart rate, heart rate variability, and sympathetic nerve discharge were ameliorated after administration of Schisandrin B. Moreover, the upregulated protein levels of the P2X7 receptor, NLRP3 inflammasome, and interleukin-1β in diabetic rats were decreased after treatment, which indicates that Schisandrin B can alleviate the chronic inflammation caused by diabetes and decrease the expression level of P2X7 via NLRP3. These findings suggest that Schisandrin B can be a potential therapeutic agent for DCAN.

Introduction

Diabetic cardiovascular autonomic neuropathy (DCAN) is a serious and common complication of diabetes mellitus (DM), which may affect the prognosis of disorders of autonomic nerve fibers and cardiovascular damage [1]. Previous research has reported that the clinical manifestations of cardiovascular autonomic neuropathy include myocardial ischemia, exercise intolerance, resting tachycardia, and orthostatic hypotension [2,3]. The prevalence of DCAN is 2%-91% in type 1 DM and 25%-75% in patients with type 2 DM. Once DCAN is diagnosed, whether in type 1 or type 2 DM, the five-year mortality rate is 16%-50%, most of which can be attributed to sudden cardiac death [4]. Treating diabetic complications including DCAN places a tremendous economic burden on society [5], and the 5-year mortality rate is five times higher for patients with DCAN than for patients without it. To date, there is no efficient and systemic drug for the treatment of DCAN.

The pathogenesis of DCAN is related to the dysregulation of the superior cervical ganglion (SCG), which is the biggest sympathetic trunk ganglion and an important element of the autonomic nervous system [6,7]. There is a mutual connection between the cardiac sensory afferent neurons and the superior cervical sympathetic ganglion neurons [8]. The nociceptive signals of myocardial ischemia are transmitted to the SCG via afferent fibers of the sympathetic nerve, and the efferent signals from the central nervous system are conducted through the postganglionic fibers to modulate the function of the cardiovascular system.
Nociceptive signals of acute myocardial ischemia and hypoxia in patients with DCAN can enhance the excitability of the heart through the postganglionic sympathetic nerve, resulting in an increase of heart rate, blood pressure, and cardiac sympathetic nerve excitability, which may cause severe injury to patients with myocardial ischemia [4,9]. In addition, the dysfunction of the SCG in patients with DCAN may be related to high expression of some receptors which participate in the regulation of the cardiovascular system, and modulating the expression of these receptors may be a possible way to treat DCAN [10].

Both type 2 DM and nerve injury have been found to be related to chronic inflammatory responses, which can initiate the expression of a large number of P2 receptors [11]. The P2X receptors are a group of trimeric ligand-gated ion channels gated by ATP [12]. P2X receptors are present on many cell types; upon activation, the channel facilitates the flow of Na+, Ca2+, and K+ across the cell membrane and mediates a series of biological functions [13,14]. The P2X7 receptor is a kind of P2X receptor which is widely distributed in many organs, including the SCG. Because the SCG is a key part of the autonomic nervous and cardiovascular systems, and the postganglionic sympathetic nerve endings of the superior cervical sympathetic nerve are distributed in the heart and coronary vessels, overexpression of the P2X7 receptor in the SCG can disturb the function of the cardiovascular system [15]. Previous studies found that the P2X7 receptor is overexpressed in the SCG of rats with DCAN, also suggesting that the P2X7 receptor in the SCG mediates the pathological changes of the cardiac sympathetic postganglionic excitatory reflex induced by myocardial ischemia [16,17]. The detection of P2X7 receptor expression in the SCG may be used as a basis for the diagnosis of cardiovascular sympathetic neuropathy. Recently, investigators have examined the effect of brilliant blue G (an antagonist of the P2X7 receptor) on the SCG and found that this drug could inhibit the nociceptive signaling induced by myocardial ischemic injury and offer a cardioprotective action. Thus, it was hypothesized that other drugs might be selected to treat DCAN mediated by the P2X7 receptor of the SCG.

Schisandra chinensis is a widely used natural compound of traditional Chinese medicine which can inhibit proliferation of tumor tissue as well as prevent memory deficiency [18]. Schisandrin B is a bioactive compound of Schisandra chinensis, and recent studies have shown that Schisandrin B can provide antioxidant, anti-inflammatory, immunomodulatory, and cardioprotective effects; its biosafety has also been proven [18-21]. As an empirical traditional Chinese medicine, Schisandrin B may have some complex mechanisms of action which have not been elucidated, but recent studies have identified a related pathway: it may reduce the expression of P2X7 protein, which contributes to the activation of the NLRP3 inflammasome and the production of mature IL-1β, thereby alleviating neuroinflammation [22,23].

The existing treatment of DCAN is mainly focused on reducing blood glucose before adjuvant treatment of the neurological disorders. Previous studies have shown that the pathogenesis of autonomic neuropathy may involve oxidative stress and inflammatory response [9]. As mentioned before, Schisandrin B was reported to mitigate myocardial injury in cardiovascular disease through its anti-inflammatory and antioxidative effects.
It also possesses minimal toxicity and is metabolized slowly, which makes it worth studying for clinical application. However, there has been little evidence regarding the exact mechanism and impact of Schisandrin B on the cardiovascular system of patients with DCAN. In this study, given the anti-inflammatory and cardioprotective effects of Schisandrin B, we aimed to investigate whether Schisandrin B could alleviate the symptoms induced by DCAN. In addition, because of the role of the P2X7 receptor in the cardiovascular system, and because P2X7 has been further confirmed as a possible marker in the diagnosis of cardiovascular sympathetic neuropathy, it is of great importance to explore the pathway by which Schisandrin B, as a new treatment agent, affects the P2X7 receptor, so as to elucidate the possible effect of Schisandrin B on DCAN. Experimental Animals and Grouping. Male Sprague-Dawley rats (180 g-220 g) provided by the Laboratory Animal Center of the Medical College of Nanchang University were used for the studies. The animal experiments were reviewed and approved by the Animal Care Committee of Nanchang University. Rats in the control group (Ctrl, n = 8) were given water and a normal diet (53% carbohydrate, 23% protein, and 5% fat). High-fat and high-sugar food (77.8% normal diet, 10% sugar, 10% lard oil, 2% cholesterol, and 0.2% sodium cholate) was given to rats for 4 weeks to create the type 2 diabetic model, and then rats were given streptozotocin (STZ, 30 mg/kg) by intraperitoneal injection to induce diabetes [24]. Blood glucose was measured one week after the STZ injection to test whether rats had become diabetic. The criterion for type 2 diabetic rats was fasting blood glucose ≥ 7.8 mM or postprandial blood glucose ≥ 11.1 mM [25]. Hyperglycemic rats were divided into 5 groups (n = 8 in each group): DM group (type 2 diabetes mellitus), DM + P2X7 shRNA group, DM + NC shRNA group (negative control), DM + Schisandrin B group, and DM + PBS group. Rats in the DM + P2X7 shRNA and DM + NC shRNA groups were injected with P2X7 shRNA or NC shRNA into the SCG. The P2X7 shRNA and NC shRNA were provided by Biotransduction Lab Company of Wuhan (Wuhan, China) and were mixed with the Entranster™-in vivo transfection reagent (Engreen Company, Beijing, China) [26]. Following the procedure for the transfection reagent, each rat in the DM + P2X7 shRNA group was given 5 μg P2X7 shRNA with transfection reagent once only, on the first day after grouping. The same dose of scramble shRNA and transfection reagent was given to the DM + NC shRNA group. Rats in the Schisandrin B group were given 3.3 ml/kg Schisandrin B solution by intraperitoneal injection every day for the first two weeks (Figure 1(a)). The same volume of PBS was used in the DM + PBS group. The rats were sacrificed under anesthesia by intraperitoneal injection of sodium pentobarbital four weeks after grouping for subsequent studies. Measurement of Blood Pressure and Heart Rate. The Softron BP-2010 blood pressure meter was used to measure the heart rate and blood pressure. The heart rate, systolic blood pressure (SBP), mean blood pressure (MBP), and diastolic blood pressure (DBP) of the six groups of rats were displayed on the screen using the indirect tail-cuff method according to the instructions. Measurement of Electrocardiogram and Sympathetic Nerve Activity. Both the electrocardiogram (ECG) and sympathetic nerve discharge (SND) were measured using the RM6240B Data Acquisition and Analysis System for biological signals (Chengdu Instrument Factory, Chengdu, China).
3% sodium pentobarbital (3 ml/kg) was given by intraperitoneal injection to achieve general anaesthesia. Three electrodes connected to the RM6240B system were fixed under the skin of both upper limbs and the right lower limb. For measuring SND, the cervical sympathetic nerve was located and attached to silver electrodes connected to the RM6240B system. To reduce interference, the nerve was isolated from muscle and immersed in paraffin. The settings for measuring SND were as follows: recording sensitivity (25-50 μV), scanning speed (1.0 s/div), power gain (50 μV), time constant (0.001 s), and frequency filtering (20 kHz) [27]. Western Blotting. After measurement of SND, following the cervical sympathetic nerve to its cranial end, the SCG can be seen behind the bifurcation of the common carotid artery. The SCG was taken out by cutting the cephalic and centripetal nerves connected to it with ophthalmic scissors, and isolated SCGs were washed with PBS [28,29]. To extract P2X7 protein, the SCG sample was lysed by mechanical disruption in RIPA buffer on ice. The extract was centrifuged at 12,000 × g for 15 min at 4°C. Then, the supernatant was stored at -20°C for further study. For western blotting, 10 μg of protein was used for electrophoresis; samples were loaded on 10% SDS-polyacrylamide gels made with the PAGE Gel Quick Preparation Kit (10%; Yeasen Biotech. Co.) according to the instructions, and the blocking reagent used was more stable and took less time compared to nonfat dry milk. 2.6. Statistical Analysis. All data were presented as mean ± SD (standard deviation). Differences between groups were determined by analysis of variance (ANOVA) followed by LSD post hoc tests using SPSS 25.0 software (IBM, Chicago, IL, USA). A P value < 0.05 was considered statistically significant. Effect of Schisandrin B on Expression of P2X7 Protein in SCG. To determine whether the use of Schisandrin B can influence the expression of the P2X7 receptor in the SCG, we assessed the levels of P2X7 protein by western blotting (Figure 1(b)). The results were normalized to their individual β-actin internal control. Compared with the control group, the integrated optical density (IOD) of P2X7 protein in the SCG of the DM group, DM + NC shRNA group, and DM + PBS group was significantly higher (P < 0.05; Figure 1(c)). However, the differences among the control group, DM + P2X7 shRNA group, and DM + Schisandrin B group were not statistically significant (P > 0.05; Figure 1(c)), and the IOD of P2X7 protein in the DM + Schisandrin B group was significantly less than that in the DM group, which indicated that Schisandrin B treatment could lead to a reduction of P2X7 expression in the SCG. 3.2. Changes of Sympathetic Nerve Activity. Cervical sympathetic nerve discharge of DM rats was significantly strengthened and became more frequent compared to the control group (Figure 2). The SND in DM rats treated with Schisandrin B or shRNA was significantly diminished and became regular compared to that in DM rats treated with NC shRNA or PBS. The results suggested that Schisandrin B could counteract the excitability caused by DCAN and also improve sympathetic function.
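All of the group comparisons reported in this paper use the ANOVA-plus-LSD procedure described under Statistical Analysis. Below is a minimal sketch of that comparison in Python, assuming the per-group measurements are plain arrays; the values are invented placeholders, and the unprotected pairwise t-tests are only an approximation of Fisher's LSD (which pools the ANOVA error term):

```python
# Minimal sketch of one-way ANOVA followed by LSD-style pairwise tests.
# Group values are illustrative placeholders, not data from this study.
from itertools import combinations
import numpy as np
from scipy import stats

groups = {
    "Ctrl": np.array([391, 402, 388, 395, 399, 405, 390, 398], float),
    "DM": np.array([452, 461, 448, 458, 470, 455, 463, 449], float),
    "DM+SchB": np.array([401, 410, 396, 405, 399, 412, 403, 408], float),
}

f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Fisher's LSD logic: run pairwise comparisons only after a significant ANOVA
if p_anova < 0.05:
    for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
        t, p = stats.ttest_ind(a, b)
        print(f"{name_a} vs {name_b}: t = {t:.2f}, p = {p:.4f}")
```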
Changes of Heart Rate and Blood Pressure. The effects of Schisandrin B on the blood pressure and heart rate of diabetic rats are displayed in Table 1. In the DM, DM + NC shRNA, and DM + PBS groups, the heart rate (HR), systolic blood pressure (SBP), and mean blood pressure (MBP) were increased in comparison to the control group (P < 0.05), indicating compensatory hypertension after myocardial ischemia in the DM rats. The elevated HR, SBP, and MBP declined after using P2X7 shRNA and Schisandrin B (P < 0.05), yet the diastolic pressure did not change significantly (P > 0.05). The results demonstrated that Schisandrin B could ameliorate the dysregulation of the cardiovascular system. Effect of Schisandrin B on Heart Rate Variability. The effects of Schisandrin B on heart rate variability (HRV) in DM rats are demonstrated in Table 2. The total power (TP), very-low frequency (VLF), low frequency (LF), and high frequency (HF) in the DM group, DM + NC shRNA group, and DM + PBS group were reduced significantly in comparison to those in the control group (P < 0.01), whereas the LF/HF ratio was increased. Because LF represents sympathetic activity and HF represents parasympathetic activity, the increase of the LF/HF ratio indicates that both sympathetic and parasympathetic activity were inhibited and that the inhibition of parasympathetic activity was stronger than that of sympathetic activity. The decreased TP, VLF, LF, and HF and the increased LF/HF ratio were alleviated in diabetic rats after treatment with P2X7 shRNA and Schisandrin B (P < 0.01), indicating that Schisandrin B improved the pathological sympathetic and parasympathetic activity. In addition, no significant difference was shown among the DM + NC shRNA, DM + PBS, and DM groups, nor among the DM + Schisandrin B, DM + P2X7 shRNA, and control groups (P > 0.05). Double-Label Immunofluorescence Staining Detected Expression of P2X7 Receptor and GS in SCG. GS, as a marker of SGCs, was detected together with the P2X7 receptor by double-label immunofluorescence staining. The GS and P2X7 staining surrounded the neurons rather than colocalizing with them, which indicated that the P2X7 receptor was expressed in the SGCs of the SCG (Figure 3). Effect of Schisandrin B on Expression Level of IL-1β in SCG. Western blot results showed that the protein level of the inflammatory factor IL-1β in the DM group, DM + NC shRNA group, and DM + PBS group was upregulated compared with that in the control group (P < 0.05). However, administration of P2X7 shRNA or Schisandrin B reduced the expression of IL-1β protein relative to the DM group (P < 0.01, Figure 4). Data are mean ± SD from six independent experiments in each group. *P < 0.05, **P < 0.01 compared with the control group; ##P < 0.01 compared with the DM group. Discussion DCAN is a cardiovascular disease induced by diabetes, and its pathogenesis may be related to P2X receptors. Nociceptive stimulation, nerve injury, and chronic hyperglycemia lead to the release of large amounts of ATP from nerve endings and stressed nerve cells. Inflammatory mediators such as ATP initiate the expression of P2X receptors, which are involved in nociceptive signal transmission and the innervation of the cardiac autonomic nerves. It is reported that abnormal expression of the P2X7 receptor in the SCG can induce cardiac autonomic neuropathy by initiating nerve inflammation. Injury of the cardiac autonomic nerves and nociceptive signals from the ischemic heart caused by DCAN result in increased excitability of the heart and abnormal changes in blood pressure, heart rate, and sympathetic activity, causing more severe injury in patients with DCAN [4,30].
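As a concrete illustration of the frequency-domain HRV indices reported above (TP, VLF, LF, HF, and the LF/HF ratio), the sketch below resamples an R-R tachogram and integrates a Welch power spectrum over frequency bands. The band edges are typical values quoted for rats and are assumptions here, not the settings used in this study:

```python
# Sketch of frequency-domain HRV (TP, VLF, LF, HF, LF/HF) from R-R intervals.
# Band edges are assumed typical rat values; rr_s would come from recorded ECG.
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def hrv_bands(rr_s, fs=10.0,
              bands={"VLF": (0.0, 0.2), "LF": (0.2, 0.75), "HF": (0.75, 2.5)}):
    t = np.cumsum(rr_s)                       # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / fs)   # evenly resample the tachogram
    rr_even = interp1d(t, rr_s, kind="cubic")(grid)
    f, pxx = welch(rr_even - rr_even.mean(), fs=fs,
                   nperseg=min(1024, len(rr_even)))
    power = {}
    for name, (lo, hi) in bands.items():
        sel = (f >= lo) & (f < hi)
        power[name] = np.trapz(pxx[sel], f[sel])  # integrate PSD over the band
    power["TP"] = sum(power.values())
    power["LF/HF"] = power["LF"] / power["HF"]
    return power

# Synthetic rat R-R series (~330 bpm) purely for demonstration
rr = np.abs(0.18 + 0.01 * np.random.randn(3000))
print(hrv_bands(rr))
```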
In this experiment, high expression of the P2X7 receptor at the protein level was observed in rats with DM, and the overexpression of the P2X7 receptor was downregulated by the administration of P2X7 shRNA or Schisandrin B. The elevated HR, BP, and SND of rats with high P2X7 expression were also mitigated after treatment with P2X7 shRNA and Schisandrin B. In line with other studies, we found that P2X7 receptor activation induced by DM may participate in the pathogenesis of the cardiovascular complications of diabetes mellitus [15,31]. HRV evaluation is a general method to reflect the status of the cardiac autonomic nervous system as well as injury of the sympathetic and parasympathetic nerves. Decreased HRV in diabetic rats suggests impairment of the autonomic nervous system and an increased risk of adverse cardiac events [32]. Autonomic fibers that innervate the cardiovascular system are damaged by diabetes, and the decreased sympathetic and parasympathetic nerve activity leads to reduced HRV, which is used as an indicator of DCAN. In this study, most of the HRV parameters of diabetic rats decreased as a result of P2X7 receptor activation, illustrating injury of both the sympathetic and parasympathetic nerves. However, LF/HF increased in diabetic rats, which suggested that the balance between sympathetic and parasympathetic nerve activity was broken, initiating more severe damage to the heart [33]. After treating diabetic rats with P2X7 shRNA and Schisandrin B, the abnormal TP, VLF, LF, HF, and LF/HF were alleviated, suggesting that the P2X7 receptor mediates the pathogenesis of diabetic autonomic neuropathy and that administration of Schisandrin B may relieve DCAN by inhibiting P2X7 receptor expression. Schisandrin B, as a traditional Chinese medicine, has not been included in the therapy of DCAN, and most previous research focused on its antioxidant and antiviral effects [34]. Scientists found that Schisandrin B can reduce inflammation by suppressing P2X7 receptor expression in acute lung injury [35]. In this study, P2X7 shRNA was used for reducing P2X7 protein expression at the SCG as a contrast to the effect of Schisandrin B. Western blotting results showed that the increased P2X7 receptor expression in diabetic rats was inhibited by treatment with P2X7 shRNA and Schisandrin B, which implied that Schisandrin B could reduce P2X7 receptor expression and thereby play a role in the modulation of the cardiovascular system. The elevated sympathetic activity, heart rate, and blood pressure and the abnormal traces in the ECG induced by overexpression of the P2X7 receptor were improved after Schisandrin B administration, suggesting that Schisandrin B can mitigate the damage to autonomic nerves and provide a cardioprotective action. Chronic inflammation induced by diabetes is related to the secretion of proinflammatory cytokines like IL-1β, which exacerbates nerve damage, induces microvascular complications of diabetes, and disturbs the regulation of the cardiovascular system [36]. Because overexpression of the P2X7 receptor promotes production of these inflammatory factors [37], we speculated that Schisandrin B could alleviate inflammation by reducing P2X7 protein expression [12]. Increased production of IL-1β was observed in diabetic rats with high P2X7 expression, and western blot results showed that the increased inflammatory cytokines in diabetic rats were mitigated after treatment with P2X7 shRNA and Schisandrin B, suggesting that the P2X7 receptor triggers inflammation in diabetic rats.
To further investigate the relationship between the P2X7 receptor and inflammation, the P2X7-NLRP3-IL-1β pathway was studied, which has been proved to be related to chronic hyperglycemia in preclinical studies [37,38]. K+ efflux induced by extracellular ATP occurs through a purinergic P2X7-dependent channel, resulting in the assembly and activation of the NLRP3 inflammasome [22]. Activation of the NLRP3 inflammasome leads to the maturation and release of IL-1β, and our data showed that the expression of the NLRP3 inflammasome and IL-1β increased in diabetic rats. Furthermore, treatment with P2X7 shRNA and Schisandrin B led to a reduction of these cytokines, which is consistent with previous studies showing that Schisandrin B can reduce inflammation by inhibiting activation of NLRP3 [37]. Compared with previous research involving the effect of long noncoding RNA or scramble siRNA on DCAN, studies of Schisandrin B have more clinical significance because Schisandrin B is given by intraperitoneal injection rather than intrathecal injection [15]. Previous research on Schisandrin B mostly focused on its antioxidant and anticancer effects [39], and our study was the first to apply Schisandrin B to the treatment of cardiovascular disease. Studies on the molecular pathway of its anti-inflammatory function are mostly related to NF-κB and Nrf2, but few are associated with the P2X7 receptor [40,41]. Our study found that the ability of the P2X7 receptor to mediate NLRP3 inflammasome involvement identifies a potentially central role for purinergic receptors in the link between Schisandrin B and inflammation in neural tissues. DCAN, a prevalent and dangerous disease, has no specific drug at present. Based on the therapeutic effect of Schisandrin B in this experiment, Schisandrin B can be considered as an option for the treatment of DCAN. Finally, Schisandrin B treatment could inhibit the overexpression of the P2X7 receptor at the SCG in rats with DCAN and alleviate the nerve injury induced by inflammation, offering a cardioprotective action for patients with DCAN. This is the first time that Schisandrin B has been used in the study of DCAN, and future application of Schisandrin B in the treatment of DCAN as a supplemental regimen needs validation through further study. Data Availability All data generated or analyzed during this study are available from the corresponding author on reasonable request. Ethical Approval All experiments were conducted in accordance with the animal ethics committee of Nanchang University and followed the IASP guidelines on animal pain research. Conflicts of Interest The authors declare that they have no conflicts of interest.
DeepPurple: Estimating Sentence Semantic Similarity using N-gram Regression Models and Web Snippets We estimate the semantic similarity between two sentences using regression models with features: 1) n-gram hit rates (lexical matches) between sentences, 2) lexical semantic similarity between non-matching words, and 3) sentence length. Lexical semantic similarity is computed via co-occurrence counts on a corpus harvested from the web using a modified mutual information metric. State-of-the-art results are obtained for semantic similarity computation at the word level; however, the fusion of this information at the sentence level provides only moderate improvement on Task 6 of SemEval'12. Despite the simple features used, regression models provide good performance, especially for shorter sentences, reaching a correlation of 0.62 on the SemEval test set. Introduction Recently, there has been significant research activity in the area of semantic similarity estimation, motivated both by the abundance of relevant web data and by linguistic resources for this task. Algorithms for computing semantic textual similarity (STS) are relevant for a variety of applications, including information extraction (Szpektor and Dagan, 2008), question answering (Harabagiu and Hickl, 2006) and machine translation (Mirkin et al., 2009). Word- or term-level STS (a special case of sentence-level STS) has also been successfully applied to the problems of grammar induction (Meng and Siu, 2002) and affective text categorization (Malandrakis et al., 2011). In this work, we build on previous research on word-level semantic similarity estimation to design and implement a system for sentence-level STS for Task 6 of the SemEval'12 campaign. Semantic similarity between words can be regarded as the graded semantic equivalence at the lexeme level and is tightly related to the tasks of word sense discovery and disambiguation (Agirre and Edmonds, 2007). Metrics of word semantic similarity can be divided into: (i) knowledge-based metrics (Miller, 1990; Budanitsky and Hirst, 2006) and (ii) corpus-based metrics (Baroni and Lenci, 2010; Iosif and Potamianos, 2010). When more complex structures, such as phrases and sentences, are considered, it is much harder to estimate semantic equivalence due to the non-compositional nature of sentence-level semantics and the exponential explosion of possible interpretations. STS is closely related to the problems of paraphrasing, which is bidirectional and based on semantic equivalence (Madnani and Dorr, 2010), and textual entailment, which is directional and based on relations between semantics (Dagan et al., 2006). Related methods incorporate measurements of similarity at various levels: lexical (Malakasiotis and Androutsopoulos, 2007), syntactic (Malakasiotis, 2009; Zanzotto et al., 2009), and semantic (Rinaldi et al., 2003; Bos and Markert, 2005). Measures from machine translation evaluation are often used to evaluate lexical level approaches (Finch et al., 2005; Perez and Alfonseca, 2005), including BLEU (Papineni et al., 2002), a metric based on word n-gram hit rates. Motivated by BLEU, we use n-gram hit rates and word-level semantic similarity scores as features in a linear regression model to estimate sentence-level semantic similarity. We also propose sigmoid scaling of similarity scores and sentence-length-dependent modeling. The models are evaluated on the SemEval'12 sentence similarity task. Semantic similarity between words In this section, two different metrics of word similarity are presented.
The first is a language-agnostic, corpus-based metric requiring no knowledge resources, while the second metric relies on WordNet. Corpus-based metric: Given a corpus, the semantic similarity between two words, w_i and w_j, is estimated as their pointwise mutual information (Church and Hanks, 1990): I(i,j) = log [ p̂(i,j) / (p̂(i) p̂(j)) ], where p̂(i) and p̂(j) are the occurrence probabilities of w_i and w_j, respectively, while the probability of their co-occurrence is denoted by p̂(i,j). These probabilities are computed according to maximum likelihood estimation. The assumption of this metric is that co-occurrence implies semantic similarity. During the past decade the web has been used for estimating the required probabilities (Turney, 2001; Bollegala et al., 2007) by querying web search engines and retrieving the number of hits required to estimate the frequency of individual words and their co-occurrence. However, these approaches have failed to obtain state-of-the-art results (Bollegala et al., 2007), unless "expensive" conjunctive AND queries are used for harvesting a corpus and then using this corpus to estimate similarity scores (Iosif and Potamianos, 2010). Recently, a scalable approach for harvesting a corpus has been proposed in which web snippets are downloaded using individual queries for each word (Iosif and Potamianos, 2012b); the scalability of this approach was demonstrated there for a 10K vocabulary, and here we extend it to the full 60K WordNet vocabulary. Semantic similarity can then be estimated using the I(i,j) metric and within-snippet word co-occurrence frequencies. Under the maximum sense similarity assumption (Resnik, 1995), the semantic similarity of two words can be estimated as the minimum pairwise similarity of their senses; although words often co-occur with their closest senses, word occurrences correspond to all senses, i.e., the denominator of I(i,j) is overestimated, causing a large underestimation error for similarities between polysemous words. Under this assumption, it is relatively easy to show that a (more) lexically-balanced corpus (such as the one created above) can significantly reduce the semantic similarity estimation error of the mutual information metric I(i,j). This is also experimentally verified in (Iosif and Potamianos, 2012c). In addition, one can modify the mutual information metric to further reduce the estimation error (for the theoretical foundation behind this see (Iosif and Potamianos, 2012a)). Specifically, one may introduce exponential weights α in order to reduce the contribution of p̂(i) and p̂(j) in the similarity metric. The modified metric I_a(i,j) is defined as: I_a(i,j) = min{ log [ p̂(i,j) / (p̂(i)^α p̂(j)) ], log [ p̂(i,j) / (p̂(i) p̂(j)^α) ] }. (1) The weight α was estimated on the corpus of (Iosif and Potamianos, 2012b) in order to maximize word sense coverage in the semantic neighborhood of each word. The I_a(i,j) metric using the estimated value of α = 0.8 was shown to significantly outperform I(i,j) and to achieve state-of-the-art results on standard semantic similarity datasets (Rubenstein and Goodenough, 1965; Miller and Charles, 1998; Finkelstein et al., 2002). For more details see (Iosif and Potamianos, 2012a). WordNet-based metrics: For comparison purposes, we evaluated various similarity metrics on the task of word similarity computation on three standard datasets (same as above). The best results were obtained by the Vector metric (Patwardhan and Pedersen, 2006), which exploits the lexical information that is included in the WordNet glosses. This metric was incorporated into our proposed approach. All metrics were computed using the WordNet::Similarity module (Pedersen, 2005).
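To make the corpus-based metric concrete, the following sketch computes I(i,j) and the weighted variant from raw co-occurrence counts. The counts are invented for illustration, and the min-of-two-weightings form follows the reconstruction of Eq. (1) above:

```python
# Illustrative computation of the PMI metric I(i, j) and the exponentially
# weighted variant from within-snippet co-occurrence counts. All counts are
# made up; alpha = 0.8 follows the value quoted in the text.
import math

def pmi(n_i, n_j, n_ij, n_total):
    p_i, p_j, p_ij = n_i / n_total, n_j / n_total, n_ij / n_total
    return math.log(p_ij / (p_i * p_j))

def pmi_alpha(n_i, n_j, n_ij, n_total, alpha=0.8):
    # Weight one marginal at a time and keep the minimum of the two scores
    p_i, p_j, p_ij = n_i / n_total, n_j / n_total, n_ij / n_total
    return min(math.log(p_ij / (p_i**alpha * p_j)),
               math.log(p_ij / (p_i * p_j**alpha)))

# e.g. two nouns in a snippet corpus of one million co-occurrence windows
print(pmi(5000, 1200, 300, 1_000_000))        # plain PMI
print(pmi_alpha(5000, 1200, 300, 1_000_000))  # weighted variant
```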
N-gram Regression Models Inspired by BLEU (Papineni et al., 2002), we propose a simple regression model that combines evidence from two sources: the number of n-gram matches and the degree of similarity between non-matching words between two sentences. In order to incorporate a word semantic similarity metric into BLEU, we apply the following two-pass process: first, lexical hits are identified and counted, and then the semantic similarity between n-grams not matched during the first pass is estimated. All word similarity metrics used are peak-to-peak normalized in the [0,1] range, so they serve as a "degree-of-match". The semantic similarity scores from word pairs are summed together (just like n-gram hits) to obtain a BLEU-like semantic similarity score. The main problem here is one of alignment, since we need to compare each non-matched n-gram from the hypothesis with an n-gram from the reference. We use a simple approach: we iterate over the hypothesis n-grams, left-to-right, and compare each with the most similar non-matched n-gram in the reference. This modification to BLEU is only applied to 1-grams, since semantic similarity scores for bigrams (or higher) were not available. Thus, our list of features consists of the hit rates obtained by BLEU (for 1-, 2-, 3-, and 4-grams) and the total semantic similarity (SS) score for 1-grams. These features are then combined using a multiple linear regression model: D̂_L = a_0 + Σ_{n=1..4} a_n B_n + a_5 M_1, (2) where D̂_L is the estimated similarity, B_n is the BLEU hit rate for n-grams, M_1 is the total semantic similarity score (SS) for non-matching 1-grams, and a_n are the trainable parameters of the model. Motivated by evidence of cognitive scaling of semantic similarity scores (Iosif and Potamianos, 2010), we propose the use of a sigmoid function to scale the D̂_L sentence similarities. We have also observed in the SemEval data that the way humans rate sentence similarity is very much dependent on sentence length. To capture the effect of length and cognitive scaling we propose next two modifications to the linear regression model. In the sigmoid fusion scheme, the linearly estimated similarity D̂_L is passed through a sigmoid scaling function, where we assume that sentence length l (the average length of each sentence pair, in words) acts as a scaling factor for the linearly estimated similarity. The hierarchical fusion scheme is actually a collection of (overlapping) linear regression models, each matching a range of sentence lengths. For example, the first model D_L1 is trained with sentences of length up to l_1, i.e., l ≤ l_1; the second model D_L2 up to length l_2, etc. During testing, sentences with length l ∈ [1, l_1] are decoded with D_L1, sentences with length l ∈ (l_1, l_2] with model D_L2, etc. Each of these partial models is a linear fusion model as shown in (2). In this work, we use four models with l_1 = 10, l_2 = 20, l_3 = 30, l_4 = ∞.
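A minimal sketch of the fusion machinery: ordinary least squares for the linear model of Eq. (2) and the length-bucketed hierarchical variant. The feature matrix, targets, and lengths are random placeholders; in practice B_1..B_4 and M_1 would come from the BLEU-style matching described above:

```python
# Sketch of the n-gram regression fusion (Eq. 2) plus the hierarchical,
# length-bucketed variant. All data here are placeholders.
import numpy as np

def fit_linear(X, y):
    X1 = np.hstack([np.ones((len(X), 1)), X])       # prepend bias term a0
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return coef

def predict(coef, X):
    return np.hstack([np.ones((len(X), 1)), X]) @ coef

def fit_hierarchical(X, y, lengths, cuts=(10, 20, 30, np.inf)):
    models, lo = [], 0
    for hi in cuts:                                  # one model per length range
        mask = (lengths > lo) & (lengths <= hi)
        models.append((hi, fit_linear(X[mask], y[mask])))
        lo = hi
    return models

def predict_hierarchical(models, x_row, length):
    for hi, coef in models:                          # pick model by sentence length
        if length <= hi:
            return predict(coef, x_row[None, :])[0]

rng = np.random.default_rng(0)
X = rng.random((200, 5))                             # columns: B1..B4, M1
y = X @ np.array([0.5, 0.2, 0.1, 0.1, 0.3]) + 0.1
lengths = rng.integers(3, 40, 200)
models = fit_hierarchical(X, y, lengths)
print(predict_hierarchical(models, X[0], lengths[0]))
```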
Experimental Procedure and Results Initially, all sentences are pre-processed by the CoreNLP (Finkel et al., 2005; Toutanova et al., 2003) suite of tools, a process that includes named entity recognition, normalization, part-of-speech tagging, lemmatization and stemming. The exact type of pre-processing used depends on the metric used. For the plain lexical BLEU, we use lemmatization, stemming (of lemmas) and remove all non-content words, keeping only nouns, adjectives, verbs and adverbs. For computing semantic similarity scores, we don't use stemming and keep only nouns, since similarity scores were available only for noun words. For the computation of semantic similarity we have created a dictionary containing all the single-word nouns included in WordNet (approx. 60K) and then downloaded snippets of the 500 top-ranked documents for each word by formulating single-word queries and submitting them to the Yahoo! search engine. Next, results are reported in terms of correlation between the automatically computed scores and the ground truth, for each of the corpora in Task 6 of SemEval'12 (paraphrase, video, europarl, WordNet, news). Overall correlation ("Ovrl"), computed on the join of the datasets, as well as average ("Mean") correlation across all tasks, is also reported. Training is performed on a subset of the first three corpora and testing on all five corpora. Baseline BLEU: The first set of results in Table 1 shows the correlation performance of the plain BLEU hit rates (per training data set and overall/average). The best performing hit rate is the one calculated using unigrams. Semantic Similarity BLEU (Purple): The performance of the modified version of BLEU that incorporates various word-level similarity metrics is shown in Table 2. Here the BLEU hits (exact matches) are summed together with the normalized similarity scores (approximate matches) to obtain a single B_1 + M_1 (Purple) score. As we can see, there are definite benefits to using the modified version, particularly with regard to mean correlation. Overall the best performers, when taking into account both mean and overall correlation, are the WordNet-based and I_a metrics, with the I_a metric winning by a slight margin, earning a place in the final models. Regression models (DeepPurple): Next, the performance of the various regression models (fusion schemes) is investigated. Each regression model is evaluated by performing 10-fold cross-validation on the SemEval training set. Correlation performance is shown in Table 3 both with and without semantic similarity. The baseline in this case is the Purple metric (corresponding to no fusion). Clearly the use of regression models significantly improves performance compared to the 1-gram BLEU and Purple baselines for almost all datasets, and especially for the combined dataset (overall). Among the fusion schemes, the hierarchical models perform best. Following fusion, the performance gain from incorporating semantic similarity (SS) is much smaller. Finally, Table 4 shows the correlation performance of our submissions on the official SemEval test set. Conclusions We have shown that: 1) a regression model that combines counts of exact and approximate n-gram matches provides good performance for sentence similarity computation (especially for short and medium length sentences), 2) the non-linear scaling of hit-rates with respect to sentence length improves performance, 3) incorporating word semantic similarity scores (soft-match) into the model can improve performance, and 4) web snippet corpus creation and the modified mutual information metric is a language-agnostic approach that can (at least) match the semantic similarity performance of the best resource-based metrics for this task.
Future work should involve the extension of this approach to model larger lexical chunks, the incorporation of compositional models of meaning, and, in general, phrase-level modeling of semantic similarity, in order to compete with MT-based systems trained on massive external parallel corpora.
Relationship between dosimetric leaf gap and dose calculation errors for high definition multi-leaf collimators in radiotherapy Background and purpose Dosimetric leaf gap (DLG) is a parameter to model the round-leaf-end effect of multi-leaf collimators (MLC) that is important for treatment planning dose calculations in radiotherapy. In this study we investigated the relationship between the DLG values and the dose calculation errors for a high-definition MLC. Materials and methods Three sets of experiments were conducted: (1) physical DLG measurements using the sweeping-gap technique, (2) DLG adjustment based on spine radiosurgery plan measurements, and (3) DLG verification using films and ion-chambers (IC). All experiments were conducted on a Varian Edge machine equipped with an HD120 MLC for 6X, 6XFFF, and 10XFFF (FFF: flattening filter free). The Analytical Anisotropic Algorithm was used for all dose calculations. Results The measured physical DLGs were 0.39 mm, 0.27 mm, and 0.42 mm for 6X, 6XFFF, and 10XFFF respectively. The calculated doses were lower by 4.2% (6X), 3.7% (6XFFF), and 6.8% (10XFFF) than the measured, while the adjusted DLG values with minimum errors were 1.1 mm, 0.9 mm, and 1.5 mm. The IC measurement errors were < 1%, and the film gamma pass rates (3%/3 mm) were greater than 97% for the spine plans. Conclusions The calculated doses were systematically lower than the measured doses with the physical DLG values. It was necessary to increase the DLG values to minimize the dose calculation uncertainty. The optimal DLG values may be specific to individual MLCs and beams; thus, careful evaluation and verification are warranted. Introduction The multi-leaf collimator (MLC) is an important component of a modern linear accelerator, delivering intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) with individual leaves sweeping across treatment fields. Accurate modeling is of great importance for modern radiation therapy, especially for intracranial stereotactic radiosurgery (SRS) and extracranial stereotactic body radiation therapy (SBRT), where an ablative dose is delivered in a single or in only a few treatment sessions. Due to the nature of small SRS and SBRT target volumes, micro- or high-definition (HD) MLCs, i.e. MLCs with fine leaf width, are desired to improve the prescription isodose-to-target conformity [1][2][3]. American Association of Physicists in Medicine (AAPM) Task Group (TG) Report 72 describes different types of MLC designs, physical properties, and quality assurance (QA) recommendations [4]. AAPM TG Report 142 recommends routine MLC quality assurance tasks [5]. In addition to MLC commissioning and QA prior to clinical use, MLCs need to be properly modeled in the treatment planning system (TPS) for accurate dose calculation. Despite the complex MLC designs, beam quality variations, and intensity changes across the fields, TPSs usually take a simple approach with a small number of parameters for modeling, including the dosimetric leaf gap (DLG) and mean transmission factor. The mean transmission factor is the percentage of radiation passing through and between the MLC leaves. The DLG takes into account the difference between the nominal leaf positions and the radiological leaf positions to incorporate the round-leaf-end effect in dose calculations. It also incorporates the minimal physical gap between leaves to prevent collision. For DLG measurements, the sweeping gap technique [6] is the most widely used in clinics.
However, there have been reports in which the measured DLG values for an HD MLC were found to be clinically unacceptable [7,8]. Further, the cause of the discrepancy is unknown to date [8]. In this study, we present our experimental results to quantify the discrepancy and to find the relationship between the DLG values and the dose calculation errors of an HD MLC mounted on a radiosurgery treatment unit. In this report, the term "error" is loosely defined as the difference between the calculated and measured doses. Linear accelerator, MLC, and dose calculation algorithm All experiments were conducted on a Varian Edge machine equipped with an HD120 MLC (Varian Medical System, Palo Alto, CA). The machine has two flattening-filter-free (FFF) 6X and 10X photon modes in addition to a conventional flattened 6X. Their respective maximum dose rates are 1400, 2400, and 600 monitor units (MU) per minute. The HD120 MLC has 120 leaves. The central 64 leaves (32 leaf pairs) and the outer 56 leaves (28 leaf pairs) have projected leaf widths of 2.5 mm and 5.0 mm at a source-axis distance of 100 cm, respectively. Thus, the resulting maximal field height is 22 cm (= 28 × 0.5 + 32 × 0.25 = 14 + 8 cm). The maximum field size is 22 × 40 cm² for static fields and 22 × 32 cm² for intensity-modulated fields. Other related parameters are listed in the supplementary material Table S1. All dose distributions were calculated in the Eclipse TPS (Varian Medical Systems Inc., Palo Alto, CA) using the Analytical Anisotropic Algorithm (AAA, v13.6.23). Physical DLG measurement via sweeping-gap dynamic MLCs A plastic cube phantom with a size of 15 × 15 × 15 cm³ (Reinstein EZ-Cube Phantom, Radiation Products Design, Inc., Albertville, MN) was set up on the treatment table with a source-to-surface distance (SSD) of 100 cm. A 0.6 cc Farmer ionization chamber (PTW TN30006-0379, Freiburg, Germany) was placed perpendicular to the leaf-traveling direction in the phantom at a depth of 2.5 cm. The chamber sensitive volume was 23.6 mm long with a 6.1 mm diameter. The dose conversion factor N_D,w (Co-60) was 5.433 Gy/C, calibrated at an ADCL (Accredited Dosimetry Calibration Laboratory) in 2015. For each energy, the phantom was exposed to radiation beams with the MLC open, closed, and with dynamic sweeping gaps of 2, 4, 6, 10, 14, 16, and 20 mm [6]. For all beams, the jaws were set to 10 × 10 cm² and 100 MU were delivered. The vendor-provided MLC plans had sixteen control points, which was sufficient for proper off-axis leaf position correction [9]. The travel length was 120 mm for all dynamic sweeping gaps. Then, the net charge without the transmission radiation at sweeping gap g, Q_net(g), was calculated as: Q_net(g) = Q(g) − Q_T (120 − g)/120, (1) where Q(g) is the total charge collected for the beam with gap g, and the second term is the transmitted charge collected while the detector was blocked by the MLC leaves. The mean charge from radiation transmitted through the MLC leaves, Q_T, was measured as: Q_T = (Q_closed,bankA + Q_closed,bankB)/2, (2) where Q_closed,bankA/B are respectively the collected charges with the MLC leaves closed by bank A and bank B. The Q_net(g) values were then plotted and fitted as a linear function of g, Q_net(g) = A·g + B, and the DLG was taken as the magnitude of the fitted line's intersection with the horizontal g axis; i.e., DLG = B/A. (3)
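A sketch of the sweeping-gap DLG extraction just described: each reading is corrected for MLC transmission per Eq. (1), Q_net(g) is fitted linearly, and the magnitude of the g-axis intercept is taken as the DLG. The charges below are illustrative, not measured data:

```python
# Sweeping-gap DLG extraction sketch: transmission-correct each reading,
# fit Q_net(g) linearly, and report the g-axis intercept magnitude as DLG.
# All charge values are invented for illustration.
import numpy as np

gaps = np.array([2, 4, 6, 10, 14, 16, 20], float)     # sweeping gaps (mm)
q_total = np.array([0.0235, 0.040, 0.0565, 0.090,
                    0.1235, 0.140, 0.1735])            # collected charge (nC)
q_closed_a, q_closed_b = 0.0042, 0.0040                # closed-bank charges (nC)
travel = 120.0                                         # leaf travel length (mm)

q_t = 0.5 * (q_closed_a + q_closed_b)                  # mean transmission charge
q_net = q_total - q_t * (travel - gaps) / travel       # Eq. (1) correction

slope, intercept = np.polyfit(gaps, q_net, 1)          # Q_net(g) = A*g + B
dlg = intercept / slope                                # Eq. (3): DLG = B/A
print(f"DLG = {dlg:.2f} mm")
```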
Optimal DLG determined with IC measurements for spine SBRT plans Five spine SBRT plans at the vertebrae C5, C2, C6, T1, and L4 were selected from previously treated patients. All plans were VMAT with two or three full arcs. Each plan was re-optimized with the use of 6X, 6XFFF, and 10XFFF for the same prescription dose of 16 Gy in one fraction. Thus, a total of fifteen plans were created and then mapped to a stereotactic QA phantom (PMMA, StereoPhan, Sun Nuclear, Melbourne, FL) using the AAA dose calculation algorithm. The phantom was repositioned on the treatment table according to 3D/3D automatic CBCT co-registration with the reference planning CT image. The doses at the isocenter for individual plans were measured using a 0.015 cc PinPoint ion-chamber of 5 mm length and 2 mm diameter (TN31006, PTW, Freiburg, Germany). The chamber calibration factor was obtained in the same way as in AAPM TG119, i.e., by correlating the measured charge of two opposing beams to the AAA-calculated dose. The optimal DLG value per energy was determined by analyzing the measured isocenter dose against the planned isocenter doses at four different DLG values. Validation of optimal DLG with EBT3 film dosimetry on spine plans To validate the optimal DLG values, transmission factors, and other MLC configurations in the Eclipse TPS, the 15 spine plans (5 sites × 3 energies) were delivered to a polystyrene phantom for 2D dose distribution measurements using 8″ × 10″ radio-chromic films (Gafchromic EBT3, Radiation Product Design Inc., Albertville, MN) [10]. Films were scanned using a flatbed document scanner in positive film transmission mode with 75 DPI, RGB per pixel, and 16 bits per color. The pixel values in the green and red channels were converted to optical density as OD = −log10(pixel value/65535). The OD values were then converted to doses using calibration curves (see Supplementary material for the details of the film calibration). The average of the red and green channel doses was used as the final film dose, D_film = (D_red + D_green)/2. The post-irradiation colorization time was kept greater than 12 h for all films, and one calibration film was exposed in each measurement session to minimize dosimetry uncertainty. Gamma analysis with 3% dose difference (DD) and 3 mm distance to agreement (DTA) was used to measure the similarity between the film and plan dose distributions [11]. Validation - IC measurements on lung and liver SBRT and brain SRS plans In addition to the spine plans, the machine is used for other types of stereotactic treatments. In order to validate the chosen DLG values, ten lung and liver SBRT and five brain SRS plans were randomly selected from the previously treated patient database, and the delivery errors were measured using the stereotactic QA phantom and the PinPoint ion-chamber (TN31006, PTW, Freiburg, Germany). For all plans, the chamber volume was contoured as a cylinder (2 mm diameter × 5 mm height) and the mean dose of the volume was compared to the corresponding measured dose. All plans were VMAT, and their plan properties are presented in the Supplementary material Table S2. Validation - IC measurements on AAPM TG119 IMRT plans To be complete, we also measured errors on five regular IMRT plans using the data sets (CT, contour set, and dose criteria) from AAPM Task Group 119 [12]. They consisted of prostate, head and neck, multi-target, and hard and easy C-shape plans with 7-9 static-gantry IMRT beams. The plans were mapped to the stereotactic QA phantom (PMMA, StereoPhan, Sun Nuclear, Melbourne, FL), and the errors in seven dose regions were measured using the PinPoint ion-chamber. As suggested by the task group report, the chamber was calibrated for dosimetry using the TPS doses of AP/PA square beams.
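For reference, below is a brute-force sketch of the global 2D gamma comparison (3%/3 mm) used for the film analysis; real film QA software adds sub-pixel interpolation, dose thresholds, and proper border handling (np.roll wraps at the edges here), all omitted for brevity:

```python
# Brute-force global 2D gamma (3%/3 mm) sketch. Both dose maps share a
# common grid with pixel spacing dx (mm). Toy data only.
import numpy as np

def gamma_pass_rate(ref, meas, dx=1.0, dd=0.03, dta=3.0):
    norm = dd * ref.max()                      # global dose-difference criterion
    r = int(np.ceil(dta / dx))                 # search radius in pixels
    gamma2 = np.full(ref.shape, np.inf)
    for dy_ in range(-r, r + 1):
        for dx_ in range(-r, r + 1):
            dist2 = (dy_ * dx) ** 2 + (dx_ * dx) ** 2
            if dist2 > dta ** 2:
                continue
            shifted = np.roll(np.roll(meas, dy_, axis=0), dx_, axis=1)
            g2 = ((shifted - ref) / norm) ** 2 + dist2 / dta ** 2
            gamma2 = np.minimum(gamma2, g2)
    return 100.0 * np.mean(np.sqrt(gamma2) <= 1.0)

ref = np.random.rand(64, 64) * 16              # toy plan dose map (Gy)
meas = ref * (1 + 0.01 * np.random.randn(64, 64))  # "film" with 1% noise
print(f"gamma pass rate: {gamma_pass_rate(ref, meas):.1f}%")
```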
Dose error dependency on sweeping gap and jaw size As Kielar et al. [8] noted, the cause was yet unknown as to why the TPS doses were lower in plan QA measurements for the HD120 MLC when the measured physical DLG values were used for dose calculations. In order to better understand the source of the errors, we repeated the sweeping gap MLC measurements for 6XFFF with gaps extended up to 100 mm. A solid water phantom with a calibrated Farmer chamber (0.6 cc) was used with settings of 10 × 10 cm² field size, 95 cm SSD, and 5 cm depth. The corresponding doses were calculated in Eclipse with the final DLG value of 0.9 mm for comparison. The same set of measurements was repeated with different jaw sizes of 5 × 5, 10 × 10, 15 × 15, and 20 × 20 cm² to understand the dose error dependency on the jaw sizes. Results - Physical DLG measurements The fitted lines for 6X, 6XFFF, and 10XFFF had the same slope of 0.0083 nC/mm, indicating the same net charge increase per unit gap increase. The estimated net charges with zero MLC gap (the line intersections with the Q_net axis) were 3.2 pC, 2.2 pC, and 3.5 pC for 6X, 6XFFF, and 10XFFF respectively (Fig. 1b). The calculated DLG values (the line intersections with the g axis) were 0.39 mm, 0.27 mm, and 0.42 mm respectively for 6X, 6XFFF, and 10XFFF. The measured transmission factors of the three energies were 1.1%, 0.9%, and 1.1%. DLG adjustment - IC measurements on spine plans The measured doses for the spine SBRT plans were higher by 4.2% (6X), 3.7% (6XFFF), and 6.8% (10XFFF) than the corresponding plan doses when the physical DLG values were used for dose calculations (Table 1, Fig. 2). The errors decreased as DLG increased, at rates of 6.2%/mm for both 6X and 10XFFF and 5.9%/mm for 6XFFF. Based on the plots in Fig. 2, the optimal DLG values (the intersections with the DLG axis) were 1.1 mm for 6X, 0.9 mm for 6XFFF, and 1.5 mm for 10XFFF. The small remaining dose errors of −0.2% (6X), 0.0% (6XFFF), and −0.1% (10XFFF) confirmed the optimal DLG values. On the other hand, the intersections with the y axis (the estimated errors with zero DLG values) were 6.6% (6X), 5.3% (6XFFF), and 9.3% (10XFFF). The final DLG values, transmission factors, and other MLC-related parameters entered in our planning system are available in the Supplementary material Table S3.
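The linear error-versus-DLG behavior reported above lends itself to a simple zero-crossing fit. The sketch below mirrors the quoted 6X numbers (an error of roughly 6.6% at zero DLG falling at about 6.2%/mm, giving an optimum near 1.1 mm), with the individual measurement points invented for illustration:

```python
# Fit measured dose error vs. trial DLG and solve for the zero-error DLG.
# Points are illustrative but consistent with the quoted 6X behavior.
import numpy as np

dlg_trials = np.array([0.39, 0.7, 1.0, 1.3])   # trial DLG values (mm)
dose_error = np.array([4.2, 2.3, 0.4, -1.5])   # measured-vs-planned error (%)

slope, intercept = np.polyfit(dlg_trials, dose_error, 1)
dlg_opt = -intercept / slope                   # DLG where the fitted error is 0
print(f"optimal DLG ~ {dlg_opt:.2f} mm")       # ~1.1 mm for these points
```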
Validation - EBT3 film dosimetry on spine plans The gamma pass rates (γ < 1.0, 3%/3 mm) of the EBT3 film analysis ranged from 97% to 100% (Supplementary material Table S4). The lowest pass rates were 98% for 6X, 99% for 6XFFF, and 97% for 10XFFF. In the relative magnitude of film doses with respect to the corresponding plan doses, the film doses were overall slightly lower than the plan doses, except for the 1% higher SBRT1 case for 6XFFF. The largest difference was 5% for SBRT3 (10XFFF). The dose distributions and gamma maps of the best and worst cases are shown in Fig. 3. The SBRT1 (6XFFF) case had good agreement between the film and plan doses with a 100% gamma pass rate. The dose agreement in the sharp dose fall-off region from the planning target volume (PTV) to the spinal cord was very good (Fig. 3c). For the SBRT3 case, the gamma pass rate in the high dose region was 97%. The posterior region of the PTV failed, as shown in red in the gamma map (Fig. 3h). With the film dose scaled up by 5%, i.e. in a relative gamma analysis, the gamma pass rate became 100%. Validation - lung and liver SBRT and brain SRS plans The measurement errors for the ten lung and liver SBRT and five brain SRS plans were small, with a mean difference of −0.2% (Supplementary material Table S2). The largest error was −2.3% for a brain case (Brain 1). The error was likely due to its small PTV volume (0.3 cc). The two 10XFFF liver plans had small errors as well: −0.2% and −1.7%, respectively. Validation - IC measurements on AAPM TG119 IMRT plans The measurement errors for the AAPM TG119 IMRT plans were in the range of −1.8% to 0.8% (Supplementary material Table S5). The errors in the intermediate and high dose regions were all < 1.0%. The mean and standard deviation (SD) of the errors were −0.3% and 0.9% respectively. The corresponding 95% confidence limit (CL = |mean| + 1.96SD) was 2.0. Dose error dependency on sweeping gap and jaw size The dose discrepancy between the calculated and measured doses was a function of the gap, as shown in Fig. 4. The measured doses were lower than calculated for the small sweeping gaps, while they were larger for the large gaps. The zero-crossing occurred at a gap of 32 mm. On a relative scale (Fig. 4b), the differences were bigger for small gaps due to the small corresponding plan doses. At the 2 mm gap, the error reached −12.5%. With the larger gaps, the absolute error was larger, but the relative error was small (< 1%) because of the relatively larger doses. When the sweeping gap was small, the fraction of time the ion-chamber saw the radiation source was very small (1.6% of the total time for the 2 mm gap), and for most of the time the chamber was blocked by the MLC leaves. Therefore, the errors are expected to come mostly from uncertainties in the scatter dose estimation. Fig. 5 shows the measurement results with the same set of sweeping gaps, but with different jaw sizes of 5 × 5, 10 × 10, 15 × 15, and 20 × 20 cm². As shown, the errors were also affected by the jaw settings, with larger errors associated with smaller jaw openings. Discussion Measuring the DLG using sweeping gap MLC patterns is widely used in clinics. It is a convenient way of measuring the DLG using an ion-chamber with a simple phantom. There is no need for cumbersome film processing, which is often used for measuring the static radiation field offset (RFO, roughly half of the DLG) [8]. For convenience, the vendor also provides the MLC files and the procedure guideline for the sweeping gap DLG measurement. However, as clearly shown in this study, it was necessary to adjust the measured physical DLG values to reduce dose calculation errors for the system investigated. Using the originally measured DLG values (0.39 mm/6X, 0.27 mm/6XFFF, and 0.42 mm/10XFFF), the TPS underestimated the doses by 4.2% to 6.8% on average for spine SBRT plans (Fig. 2), which is not clinically acceptable. In order to reduce the errors, it was necessary to increase the DLG values by factors of 2.8, 3.3, and 3.6 for the respective energies (1.1 mm/6X, 0.9 mm/6XFFF, and 1.5 mm/10XFFF). It should be noted, however, that the magnitude of adjustment, as well as the absolute values in Figs. 4 and 5, may vary from case to case, since the measured DLG values can differ based on measurement settings such as field size, depth, and ion chamber [13,14]. In the study of Wasbo and Valen [14] with Millennium MLCs, the measured DLG values for 6X increased by approximately 0.1 mm (from 1.7 mm to 1.8 mm) when the measurement depth was changed from 5 cm to 15 cm and the field size from 6 × 6 cm² to 14 × 14 cm². Mullins et al. [13] also reported higher measured DLG values when measured with a smaller-volume (0.125 cc, 6.5 mm length) ion-chamber, primarily due to the chamber polarity effect.
These variations and uncertainties in measuring the physical DLG values support the idea of determining the final DLG values based on a set of clinical plan measurements. However, care must be taken, since any source of systematic uncertainty in the plan dose measurements and calculations will induce a systematic uncertainty in the final commissioned system. There are many potential sources of uncertainty, including, but not limited to, the volume-averaging effect from the finite chamber size, chamber calibration, and dose calculation grid resolution. Therefore, validating the final DLG values using independent dosimeters such as films and diode arrays is of great importance. As noted in the introduction section, similar approaches were reported in other papers. The measured and adjusted DLG values of three different treatment units with HD120 MLCs are summarized in the Supplementary material Table S6. The measured DLG values in the Wen et al. study [7] were 0.5 mm (6XFFF) and 0.6 mm (10XFFF) on the same machine type as that of this study. The DLG for 6X was not presented in that study. They increased the values to 0.7 mm (6XFFF) and 1.0 mm (10XFFF) to reduce plan measurement errors. On the other hand, Kielar et al. [8] delivered a series of step-and-shoot MLC patterns onto films, measured the radiation field offset (RFO), and then took twice that value as their measured DLG (0.5 mm) for the energies of 6X, 6XFFF, 10X, 10XFFF, and 15X. The measured value was then increased to 1.7 mm to keep their plan measurement errors within 2%. Interestingly, they reported that there was no difference in measured DLG values among the different energies; their TPS (Eclipse AAA v8.9) also allowed only one DLG value for all energies at the time. With regard to the final DLG values, our values were higher than those of the Wen et al. study but lower than that of the Kielar et al. study. In addition, Yao and Farr also proposed a method to determine optimal DLG values and tested it on four Varian machines and one Siemens machine [15]. Their method for the Varian machines also used a set of sliding window MLC patterns. However, unlike the conventional approach, the leaves "marched" at the same speed but at different positions, so that the gaps had a certain length of exposed tongues and grooves, denoted as "T&G extension" in the study. In order to determine the optimal DLG values, the measured doses were compared to the TPS doses for a set of MLC patterns with gaps ranging from 5 to 30 mm and T&G extensions ranging from 5 to 20 mm. Their reported optimal DLG for 6X was smaller (0.6 mm) than the values of other studies for an HD120 MLC equipped on a TrueBeam STx machine. No other energy was investigated in the study. There have been several DLG-related publications for Millennium 120 MLCs [9,[15][16][17]]. As expected, the reported values were larger than those of HD120 MLCs due to more scatter and transmission radiation through the round leaf ends, which have a smaller radius of curvature (8 cm) than that of the HD MLC (16 cm) [17]. Chauvet et al. proposed a method to first find a slit width (sweeping gap) that produces an equivalent uniform dose distribution between the planning system and the measured dose, and then to determine the combination of DLG and transmission factor (DLG, T) that generates such an optimal slit width in the planning system [16].
The approach, however, seems limited from a practical perspective because, as they reported, different (DLG, T) combinations produced the same optimal slit width, and the method may not be applicable for the FFF mode. In their study, the optimal slit width was found to be 6 mm and the (DLG, T) combinations were in the range of (1.7 mm, 1.6%) to (2.0 mm, 1.5%) for the 20X photon energy (CL23EX, CadPlan Helios v6.3.6, Varian Medical Systems, Palo Alto, CA). Yao and Farr [15] also reported optimal DLG values for three Millennium 120 MLCs: 2.3 mm, 2.3 mm, and 2.5 mm respectively, which were larger than those of the other aforementioned studies. There are various types of MLCs, and their designs and controls are complex [4]. A number of physical and dosimetric parameters are generally measured to characterize different types of MLCs, including, but not limited to, the leaf position/alignment accuracy, readout and radiation field congruence, static and dynamic leaf gap, tongue-and-groove effect, asymmetric radiation penumbra, leaf transmission and leakage, and leaf travel speed [6,18]. Further, the associated controller software may affect the dosimetric properties and delivery accuracy as well [9]. Proper modeling for accurate dose calculation is therefore required to accommodate those variations. However, the AAA photon dose calculation algorithm in Eclipse (version 13.6, Varian Medical System, Palo Alto, CA) requires only two parameters per energy: the dosimetric leaf gap (DLG) and the mean transmission factor through the closed leaves. The mean transmission factor takes into account both interleaf leakage and leaf transmission. According to the vendor documentation (Eclipse Photon and Electron Algorithms Reference Guide, version 13.7), the tongue-and-groove effect is also taken into account, but the groove effect is ignored because of the small expected error. There is no associated parameter to enter in the TPS for the tongue-and-groove effect. The MLC leaf ends are round in shape for better off-axis dosimetric characteristics. The AAA algorithm models the leaf ends as sharp edges, as opposed to rounded edges, but instead pulls the leaves back at each end by half of the DLG value in order to increase the effective MLC opening. Further, the use of a single DLG value is insufficient since the difference between radiological and nominal leaf positions differs at different off-axis leaf positions. This simple modeling approach may contribute errors to the calculated dose to some extent. The MLC transmission factor, DLG, and tongue-and-groove effect algorithm all affect the fluence map (Eclipse Photon and Electron Algorithms Reference Guide, ver. 13.7), and the final fluence map is used for dose calculation. The smallest fluence map resolution was 1 × 1 mm² when the dose calculation grid resolution was set to 1 mm in Eclipse, which might be another source of error for the HD120 MLC with its small 2.5 mm leaf width, in contrast to other regular MLCs with 5-10 mm leaf widths. For more details on the dose calculation and source modeling, see the vendor white paper by Torsti et al. [19]. As demonstrated in Figs. 4 and 5, the dose errors were dependent on the MLC sweeping gap size as well as the primary collimator opening (jaw size). In other words, there was no single optimal DLG value for all settings. With a larger DLG value, the overall plan dose increases because of the wider effective MLC opening. Therefore, based on Fig. 4, a larger DLG value will make smaller errors for plans with larger MLC gaps but larger errors for plans with smaller MLC gaps, and vice versa.
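A toy illustration of the leaf-retraction modeling just described: each leaf tip is pulled back by half the DLG before the aperture is rasterized onto the fluence grid, widening the effective opening. This mirrors the description above and is not Varian's actual implementation:

```python
# 1D fluence toy model showing how a DLG widens the effective MLC opening.
import numpy as np

def aperture_fluence(left_mm, right_mm, dlg_mm, grid_mm):
    # Retract each leaf tip by DLG/2, then rasterize the open aperture
    eff_left = left_mm - dlg_mm / 2.0
    eff_right = right_mm + dlg_mm / 2.0
    return ((grid_mm >= eff_left) & (grid_mm <= eff_right)).astype(float)

grid = np.arange(-20.0, 20.0, 0.5)          # mm, 0.5 mm fluence resolution
width_no_dlg = aperture_fluence(-5.0, 5.0, 0.0, grid).sum() * 0.5
width_dlg = aperture_fluence(-5.0, 5.0, 1.1, grid).sum() * 0.5
print(f"effective opening: {width_no_dlg:.1f} mm -> {width_dlg:.1f} mm")
```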
For the five spine SBRT plans in this study, the mean gaps were in the range of 14.6 mm to 22.9 mm (Supplementary material Fig. S2). The corresponding maximum measurement error was 1.6% with the final DLG values. Because the machine was mainly for the cranial and extra-cranial radiosurgery program in this study, we tuned the DLG values based on the radiosurgery plans using a micro ionization chamber. However, the resultant DLG values also produced good 2D film dose measurement results for the spine plans as well as small IC measurement errors for regular IMRT plans. The measurement 95% confidence limit (CL = |mean| + 1.96SD) of the AAPM TG119 plans was 2.0%, compared to the median 4.4% CL of the ten participating institutions in the TG119 report [12]. It is not presented in this report, but we also measured doses with static rectangular MLC fields of sizes from 4 × 4 cm² to 30 × 20 cm². The maximum difference between calculated and measured doses was only 0.6%. Therefore, the method of adjusting the DLG values based on a set of clinical treatment plans is a viable option for determining DLG values. Further, the linearity between DLG and dose error (Fig. 2) may allow one to find the final DLG values via interpolation based on two measurement points. In summary, the use of the physical DLG values, measured by the conventional sweeping gap MLC patterns, produced lower calculated doses than expected. It was necessary to increase the measured DLG values to minimize the discrepancy. The optimal DLG values were also dependent on the plan characteristics, including the MLC gap statistics and jaw sizes. The extensive validation work presented in this study suggests that determining the DLG values based on a set of clinical treatment plan measurements is a clinically viable method. Conflict of interest None of the authors has a conflict of interest. Appendix A. Supplementary data Supplementary data associated with this article can be found, in the online version, at http://dx.doi.org/10.1016/j.phro.2018.01.003.
Qualitative study of Ebola screening at ports of entry to the UK
Introduction In response to the 2013-2016 West African outbreak of the Ebola virus disease (EVD), Public Health England introduced enhanced screening at major UK ports of entry. Our aim was to explore screeners' and screened travellers' perceptions of screening as part of an evaluation of the screening programme.
Methods We undertook qualitative focus groups and semistructured interviews with screeners and with travellers who had returned from affected countries before and after the introduction of screening in England. The study was conducted at two airports, one international rail terminal and one military airfield. Research topic guides explored perceptions of the purpose and implementation of the process, potential improvements and reactions to screening. The data were analysed using the framework method.
Results Twenty-four screeners participated in 4 focus groups (one for each port of entry) and 23 travellers participated in interviews. Three themes are presented: 'Context', 'Screeners' experience of the programme' and 'Screening purpose and experiences'. The programme was implemented rapidly, refined over time and adapted to individual ports. Screeners reported diverse experiences of screening, including negative impacts on their normal roles, difficult interactions with passengers and pressure to identify positive EVD cases.
Screening was considered unlikely to identify individuals with symptoms of EVD, and some participants suggested it was driven by political concerns rather than empirical evidence. The screening process was valued for its provision of information and reassurance.
Conclusion This qualitative study found that the UK EVD screening process was perceived to be acceptable to assess individual risk and provide information and advice to travellers. Future programmes should have clear objectives and streamlined processes to minimise disruption, tailored to the nature of the threat and developed with the needs of humanitarian workers as well as general travellers in mind.
Introduction
The 2013-2016 West African outbreak of Ebola virus disease (EVD) was the largest since the discovery of the virus in 1976, 1 with more cases and deaths than all other outbreaks combined. 2 UK healthcare and military volunteers supported the response primarily in Sierra Leone, with Guinea and Liberia also significantly affected. The West African EVD outbreak was declared a public health emergency of international concern by the WHO on 8 August 2014. Some direct flights to the UK from the region had been discontinued by airline operators before this date; all were discontinued on 27 August 2014. Public Health England (PHE) introduced enhanced screening for EVD on 14 October 2014 at major ports of entry in England as part of activities in response to the risk to public health. Prior to this date, people returning to England from EVD-affected countries were not routinely identified on entry to the UK. Travellers were advised to seek medical attention as soon as possible if they developed any symptoms compatible with EVD via information provided in the media, PHE's website, notices at ports of entry and travel companies. Advice from occupational health teams before departure and varied systems for supervision on return were also provided by some organisations (eg, non-governmental organisations). Enhanced screening was introduced to provide information to persons returning from EVD-affected countries, assess their health status and ensure they were aware of what actions to take if subsequently taken unwell. 3 4 Although the detection of large numbers of EVD cases was not anticipated, screening was introduced both to advise travellers and to provide a degree of reassurance for the UK public. 5

Key questions
What is already known?
► Public Health England introduced enhanced screening to assess the health status of persons returning from Ebola virus disease-affected countries and ensure they were aware of what actions to take if subsequently taken unwell.
What are the new findings?
► Differential views were elicited between the intended purpose of screening, the experience of those receiving screening and the outcomes achieved.
► The screening programme was viewed as acceptable to screeners and screened travellers; however, the experience and perceived needs of healthcare workers taking part in the humanitarian response can differ from those of general travellers.
What do the new findings imply?
► Future programmes should have clear objectives from the start and processes streamlined to minimise disruption, tailored to the nature of the threat and, where safe, developed with stakeholder experience in mind.
► Future programmes should consider the specific needs of healthcare workers taking part in the humanitarian response as well as general travellers.
Entry screening involved completion of a questionnaire assessing health and travel history and potential risk of EVD exposure via occupation and contact with infected individuals, 6 assignment of a risk category (0=low to 3=high) (table 1) and tympanic temperature measurement. Screeners provided advice to all travellers, including actions to take if symptoms developed. Category 0 travellers were provided with reassurance and advice to continue with their routine activities. Category 1 travellers were additionally advised to take their temperature if they felt unwell and to phone the National Health Service (NHS) telephone helpline if it was ≥37.5°C. If EVD signs or symptoms were observed or disclosed, travellers were assessed by a clinical screener who determined the need for referral to specialist care. 6 Category 2 travellers were provided with a monitoring kit and asked to record their temperature twice daily during the 21-day period since the last day in an EVD-affected country (used as a proxy measure for last exposure to risk) and to report if they felt unwell or had a temperature of ≥37.5°C. Category 3 travellers were also provided with a monitoring kit and asked to record their temperature twice daily, additionally having to report their temperature to PHE by midday. 7
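The cumulative structure of the categories can be restated schematically. In the Python sketch below, the 37.5°C threshold and 21-day window come from the text above, while the function and constant names are illustrative and not part of any PHE system.

```python
# Schematic restatement of the advice ladder described above; names are
# illustrative and not part of any PHE system.

FEVER_THRESHOLD_C = 37.5
MONITORING_DAYS = 21

def advice_for_category(category):
    """Return the cumulative actions attached to risk categories 0-3."""
    actions = ["Reassure and provide written and verbal information"]
    if category >= 1:
        actions.append(f"Take temperature if unwell; phone the NHS helpline "
                       f"if it is {FEVER_THRESHOLD_C} C or above")
    if category >= 2:
        actions.append(f"Use monitoring kit: record temperature twice daily "
                       f"for {MONITORING_DAYS} days; report if unwell")
    if category == 3:
        actions.append("Also report temperature readings to PHE by midday each day")
    return actions

for cat in range(4):
    print(cat, advice_for_category(cat))
```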
Two sources of information were used to identify eligible travellers: the Returning Healthcare Workers (HCWs) Scheme (RHWS) and Advanced Passenger Information (API). The RHWS involved organisations registering with PHE and providing details of deployed individuals' return dates. API includes the passenger's full name, date of birth, gender, nationality and passport number and is collected by airlines prior to travel, either when a flight is booked or automatically through passport details obtained at check-in. API enabled the pre-entry identification of most eligible travellers expected on each flight. Self-identification was relied on to detect eligible travellers not identified via the RHWS or API. As none of these systems was 100% effective, it was assumed that not all eligible travellers would be identified. It is important to understand how screening programmes are experienced by those delivering and receiving them to ensure that the design and implementation of similar initiatives in the future are acceptable and adhered to. The aim of this paper is to explore screeners' and screened travellers' perceptions of this EVD screening programme.
Methods
Research design
JMK undertook qualitative focus groups (May-August 2015) with screeners implementing the screening processes (eg, taking temperatures, administering questionnaires and undertaking clinical assessments). Focus groups are useful for conducting exploratory investigations into emerging areas of interest and provide a relatively efficient data collection method. Interactions between participants generate new information as participants build on the responses of others, and focus groups facilitate insights into the extent to which the experiences of screeners agree or diverge. JMK conducted semistructured one-to-one interviews (August-November 2015) with travellers who were referred for further investigation of febrile illness after returning to the UK from affected countries prior to the introduction of screening, and with screened travellers (hereafter referred to as prescreening and postscreening). CC conducted one interview. As travellers' places of residence were geographically widespread, one-to-one interviews were conducted by telephone. Additionally, the experiences of passengers were expected to be more diverse than those of the screeners (who were following a protocol); therefore, one-to-one interviews were appropriate. The evaluation was conducted at two airports (ports 1 and 3), one international rail terminal (port 2) and one military airfield (port 4), chosen to reflect different port sizes, screened traveller volumes and modes of travel. Port 2 could not use API and relied on travellers being asked, and reporting, whether they had been to an affected country. One focus group per port was conducted with screeners from the same port to enable participants to recall collective experiences.
Participant recruitment and consent
Invitations and information sheets were posted or emailed to potential participants. To achieve a sample with maximum variation in views, we used a purposeful sampling approach. Local screening managers organised focus groups with screeners and ensured diversity in relation to: experience of screening travellers within each category of risk, professional background and role within the process (eg, clinical and non-clinical roles) and gender. PHE and military databases of screened travellers were used to sample participants based on: port of entry, gender, category of risk and development of EVD signs and symptoms either at, or after, screening. Travellers who had returned from affected countries prior to the introduction of screening were selected using the same criteria from the PHE Health Protection (HP) Zone database. A sampling framework was devised based on the stated variables, and travellers were then selected at random using the 'sample' command in Stata V.13.1. Travellers who declined to participate or did not respond after a maximum of three invitations were replaced by someone randomly sampled with similar characteristics.
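The stratified draw just described can be pictured with a short sketch. The study used Stata's 'sample' command; the Python below only illustrates the same idea, and the record fields and example data are hypothetical.

```python
# Illustration of the stratified random selection described above, with
# hypothetical record fields and data (the study itself used Stata).
import random

travellers = [
    {"id": 1, "port": 1, "gender": "F", "category": 0, "symptoms": False},
    {"id": 2, "port": 1, "gender": "M", "category": 2, "symptoms": True},
    # ... remaining records from the screening databases
]

def draw(stratum, n, exclude=frozenset()):
    """Randomly draw up to n travellers matching a stratum, skipping ids
    already invited (used when replacing non-responders)."""
    pool = [t for t in travellers
            if t["id"] not in exclude
            and all(t[k] == v for k, v in stratum.items())]
    return random.sample(pool, min(n, len(pool)))

invited = draw({"port": 1, "category": 2}, n=2)
# A non-responder after three invitations is replaced from the same stratum:
replacement = draw({"port": 1, "category": 2}, n=1,
                   exclude={t["id"] for t in invited})
```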
In total, 73 travellers were invited to interview. Informed consent was obtained from all participants.
Data collection
The interview topic guide used with travellers who returned prescreening explored whether they felt informed about actions to take if they felt unwell. Topic guides for screeners and screened travellers similarly allowed comparison and exploration of perceptions of the purpose and implementation of the screening process, barriers and facilitators to following the protocol (screeners only), potential improvements and reactions to screening (appendix A and B). Travellers were also asked about the impact on their knowledge, willingness to adhere to advice and behaviour. All interviews and focus groups were digitally recorded.

Table 1 Risk categories and recommended actions
Category 1: Individuals who have visited an EVD-affected area but had no direct contact with an EVD-infected individual and had not been exposed to any other high-risk event. This category also included those working in laboratories assured to be operating to a UK standard. Actions: reassure and provide written and verbal information; advise to take temperature if feeling unwell and phone 111 if ≥37.5°C; continue normal activities.
Category 2 (low-risk exposure): Individuals having direct (close) contact with an infected person or their body fluids but who did not have direct physical contact during clinical care and had no known breaches of protective equipment/clothing during this contact. Actions: reassure and provide written and verbal information. For 21 days following last exposure:
► Self-monitor and record temperature and symptoms twice daily and report if feeling unwell and/or temperature ≥37.5°C.
► Travel - no restrictions.
► Normal activities, except: postpone non-essential medical or dental treatment (including vaccination), and inform healthcare provider of contact if any essential treatment is needed.
If reluctant to comply with recommendations, consider actions on a case-by-case basis.
Category 3 (high-risk exposure): High-risk individuals who had direct contact with symptomatic infected individuals and potential exposure to bodily fluids, including breaches of protective equipment/clothing, or who had worked in a laboratory not assured to be working to UK standard. Actions: reassure and provide written and verbal information. For 21 days following last exposure:
► Self-monitor and record temperature and symptoms twice per day and report to a designated person by noon daily. If there is inability to take temperature, a face-to-face arrangement will be made with relevant health services.
► Continue normal activities (while no symptoms), except: local travel only; if a healthcare worker, no patient contact for 21 days; postpone non-essential medical or dental treatment (including vaccination) and inform healthcare provider of contact if any essential treatment is needed; do not share toothbrushes or razors; use barrier contraception or avoid unprotected sex for 21 days.
If reluctant to comply with recommendations, consider actions on a case-by-case basis.

Data collection and analysis occurred concurrently to allow for consideration of the adequacy of the sample size. The decision to end data collection was informed by factors relating to the concept of 'information power': the breadth of the aim, the sample specificity (characteristics of the participants relating to the phenomenon under study), the quality and depth of the interview data and the analysis approach, which, in this evaluation, did not aim to capture the entire range of experiences but to present sufficient information to explore perceptions of screening. 8 Additionally, pragmatic considerations of resources, including time and the availability of screeners, guided the decision.
Analysis
Audio recordings were fully transcribed and anonymised. Data were analysed using the framework method, 9 supported by QSR NVivo V.10 software. After data familiarisation, codes identifying key issues within the data were assigned to three transcripts. Initial codes were discussed by JMK and SA and refined to produce a coding framework, which was applied to the remaining transcripts and revised in response to new information. Coded data were inserted into a matrix in NVivo, which plotted the codes against each participant, condensing the volume of data with a summary capturing the meaning.
Results
Participants
Table 2 details the characteristics of the 24 screeners who participated in four focus groups. A recruitment rate cannot be presented for screeners, as focus groups were organised by local screening managers and, in ports 1 and 2, depended on who was available on the day. No one declined to participate directly to the researcher. The average length of focus groups was 81 min (range: 53-103 min). Twenty-three travellers (31.5% recruitment rate) participated in interviews (mean 36 min, range: 20-57 min) (table 3). Two travellers declined to participate, citing the length of the interview and being too busy as reasons; the remaining 51 either did not respond to the invitation or the email failed to deliver.
Of the 15 invited directly by the military, no reason for non-participation was received by the research team, and no screeners contacted the research team to decline participation. Three themes, with illustrative quotes, are presented below to reflect the range of views from participants.
Context
Screening was established in a short timescale under challenging circumstances, and the process was refined over time.
You have to remember it is an evolving thing. So initially we were all in a bit of a situation I think but over a period things started improving in all spheres actually. Port 1, screener 2
Although the screening protocol was standardised, implementation was dependent on the port context. Ports varied in relation to facilities, identification processes, disruption to the traveller journey caused by the process (ie, distance between facilities and border control/luggage collection), duration of the process and volume of screened travellers.
Screeners' experience of the programme
Ensuring adequate staffing levels was initially challenging, in part due to the unpredictable volume of travellers. Over time, the proportion of PHE staff seconded to the programme teams became more robust. Screeners in ports 1 and 2 reported variable degrees of support from their original line managers for their involvement in screening.
In the beginning it was a lot of different staff day in and day out and I think now in the past couple of months I think a lot of us have come on secondment (…) You're here all the time so you know the process. So when you get people who haven't been for a couple of months you can say 'Everything's the same' or 'We do this now'. Port 1, screener 4
Given the diverse backgrounds of screeners, the relevance of the training content was variable. In two ports, screeners felt that there was a lack of clarity on the risks to screeners, including what screeners should do to protect themselves if they were exposed to a suspected EVD case. Despite recognising procedural modifications as improvements, the dynamic nature of the programme made keeping up to date with the procedures challenging.
One of the most stressful things was all the paperwork actually and you know thinking am I catching up, have I read everything, am I update on the different versions? Port 2, screener 2
Negative aspects of screening related to: the impact on the screeners' routinely employed roles; difficult interactions with passengers; perceived pressure concerning the identification of positive EVD cases; and autonomy to adapt the process. In contrast, collaborating with internal (eg, different PHE departments) and external bodies (airport personnel including Border Force) was a positive aspect of delivering the programme.
Screening purpose and experiences
Comparison of the formally documented objectives of the screening process with the perceptions of those who experienced it can help in understanding the delivery of, and response to, the screening process.
Protection or politically driven?
Screeners and most travellers described screening as intending to protect the public and prevent the spread of EVD in the UK.
I do think there was a genuine desire to keep people safe but also psychologically I think it's important that you know, we are seen to be doing something. Traveller 7, prescreening and postscreening, ports 1 and 2
However, screeners viewed the introduction of screening as driven by political concerns rather than empirical evidence.
A small number of travellers did not believe that screening could protect the UK population. Instead, screening was viewed as 'over cautious' (traveller 14, male, prescreening and postscreening, port 1), 'window dressing', an 'umbrella hoisting exercise' (traveller 6, male, prescreening and postscreening, port 1) and politically driven.
It was just a political move, it wasn't backed by health professionals. Port 1, screener 4
Categorising risk of EVD exposure
Screening was described by all screener groups and some travellers as categorising travellers in relation to their level of risk, based on self-reported behaviours performed while in an affected country.
[Travellers] don't have a good idea of what their risk is. They know that they have done this or done that but they don't know how much that activity puts them at risk. So we are an expert who assess their activity and say 'You are at a high risk or low risk or very low risk' and give them appropriate advice which I'm sure many people wanted. Port 1, screener 2
The screening questionnaire was described by screeners and travellers as in-depth and appropriate but was also criticised as limited by its reliance on self-reporting. Journalists commented that the questions appeared to be designed for HCWs and were unable to capture some of their own behaviours, such as observing burials rather than funerals.
The questions are very basic and it would be very easy for you if you did not want to be held up or put in quarantine to just tick 'no' to everything, 'no I've not been to a funeral, no I've definitely not got Ebola' and as long as your temperature wasn't raised at that point, which you could lower by taking Ibuprofen if you so wished, or you may not be in that stage of incubation, you could quite easily pass through the screening. Traveller 3, prescreening and postscreening, ports 1 and 2
Travellers generally perceived risk categorisation as useful, and discussing the questionnaire responses with a screener was considered valuable when the risk category was unclear or if travellers wished to be recategorised. Screeners were also able to help travellers who found questionnaire completion challenging due to extreme tiredness.
I said 'well you know, the way I've answered these questions, yes I suppose I would be lowest [category of risk] but I think you should probably bump me up the scale simply because you know, I have been into the cemeteries as they're burying Ebola victims, you know, I was sitting there filming, I've been you know fairly close to people who are likely to have had Ebola being collected from the streets of Sierra Leone, you know, I've been into treatment centres - although I haven't actually gone to the very centre of them, we have had cameras that have gone in which I've then had contact with (…) I think I should probably be a bit higher up' and I think in the end they said 'oh alright, why not' and then I think I was a 2. Traveller 17, postscreening, port 1
However, there was a general perception among screeners that risk categories were less useful for HCWs, who had prior knowledge of EVD.
Identifying possible EVD signs and symptoms
Most travellers and screeners in port 3 felt that screening aimed to identify individuals displaying EVD signs and symptoms.
There is the actual 'let's see if we can't catch people who are infected before they go off and infect people'. Traveller 7, prescreening and postscreening, ports 1 and 2
However, several travellers thought identifying individuals with symptoms was unlikely.
In terms of the risk and incubation period and stuff like that, in terms of actually being able to travel while you're sick, like the chances of someone actually displaying symptoms at screening is small. Traveller 22, prescreening and postscreening, ports 1, 2 and 3
Furthermore, port 2 screeners and one traveller felt the process should not be called 'screening', because it was unable to confirm the presence or absence of EVD. Indeed, using temperature measurements to screen was not seen as evidence based by some travellers, and a small number of travellers raised issues relating to the accuracy of the thermometer readings due to the type of thermometer and the 'non-touch' method, whereby screeners placed the thermometer into the ear without making skin-to-skin contact with the traveller. However, the latter is recognised as a valid method of taking temperature.
I think screening is not the right word because it's about getting people into the system rather than actually checking whether people have got Ebola. Port 2, screener 7
Referral to NHS if signs and symptoms identified
Screeners and travellers described screening as aiming to ensure appropriate specialist assessment for those displaying signs and symptoms.
To ensure that if a person is identified who's at…. who's symptomatic, then….systems were to be set that that person could be safely transported to the [name of hospital]. Port 3, screener 1
However, delayed transfers to secondary care were reported by travellers and screeners, and a small number of travellers disagreed with transfer decisions. Screeners were mostly considered by travellers to be calm and sympathetic when dealing with travellers with signs and symptoms, but one traveller described a screener as panicking.
He took my temperature and it shot right up, it was pretty high by then, and he… you could see that he basically shat himself!! Ha-ha… and then I was taken away into a smaller room, and then a doctor came in - she asked me loads of questions, took my temperature again but she had no protection on, then she disappeared for 20 min, and then came back, and said I could go - which to my mind was the wrong thing to do, I would have thought I should have been carted off to hospital there and then. Traveller 16, prescreening and postscreening, ports 1 and 5
Raising awareness of appropriate actions and advice provision
Screeners and travellers identified the aim of screening as raising awareness of the appropriate actions to take should EVD signs and symptoms develop, including using the NHS telephone helpline rather than going directly to hospital. Although several travellers talked about screening providing such practical advice, most did not feel they gained new knowledge, and the evidence base for advice to restrict travel for asymptomatic individuals was questioned.
We had like leaflets that…it was all quite basic stuff like about the Ebola outbreak so I guess that most people probably knew more than the PHE guys about it if they'd been living and working it. Traveller 22, prescreening and postscreening, ports 1-3
We appeared to have science at the start of this which suggested that unless you were leaking body fluids, you were no danger to anybody, and yet here we are speaking to somebody who's just come back from Sierra Leone whose chance even as a red zone worker of contracting the disease was vanishingly low, that you now cannot travel on public transport in case you infect anybody. Traveller 6, prescreening and postscreening, port 1
Reassurance?
All screener groups, and some travellers, described screening as being implemented to reassure travellers and the public. Although screeners recognised the low risk posed to the public, addressing risk perceptions and providing reassurance was viewed as an important function.
It's a public confidence thing, and so I guess that the purpose is to protect us and the purpose is to be seen to be protecting us. Traveller 7, prescreening and postscreening, ports 1 and 2
However, port 2 screeners identified a mismatch between the public's perception and the reality of what screening could achieve. This gave rise to concerns that if a positive case was not detected the public would think screening had 'failed'.
There seems to be a bit of a gap between the public reassurances because the public think that we're checking whether or not anyone's got Ebola. I suppose actually we are just putting people on a system and following them up. So then if somebody gets Ebola then people think we've failed. Port 2, screener 7
A small number of HCWs and non-HCWs described reassurance from screening as personal or for family, friends, colleagues or the wider public.
That [letter] PHE sent, saying 'you are Category 1, you are not seen as a risk, you can continue your life as normal, your normal duties' - that was a massive help, because there was so much fear here at that time and I remember coming back and people being quite worried about being close to me! So having that piece of paper was, I almost kind of had it stamped to my forehead! To say that 'look I've been screened, it's fine.' Traveller 9, prescreening and postscreening, ports 1 and 2
In contrast, one traveller asserted that he and his family relied on his own awareness, rather than the screening, for reassurance. Screeners and a small number of travellers perceived some other travellers to be anxious about screening. Foreign nationals, particularly West Africans, were perceived to find the process more intimidating than British nationals due to uncertainty about the process, the stigma of EVD and wariness about whether screening related to immigration processes. Segregating travellers for screening was thought by some screeners and travellers to give the impression that they had been detained by immigration, and a few travellers found the identification process and handling by Border Force officials stigmatising.
We know what we're going to do but they don't know what we are going to do, so they are a bit anxious. Port 1, screener 2
Inconvenience?
Despite the intention to reassure, screening reportedly led to some annoyance among travellers. All screener groups described travellers as wanting to get through the process quickly and displaying annoyance at delays. Almost half the travellers described annoyance about the delay to their journey and suggested that similar processes in the future should avoid this by ensuring sufficient staffing levels; however, the other half viewed the delay as acceptable and screening as efficient.
The whole thing for me ran smoothly it was only a 5-10 min process, you know, it didn't delay me. Traveller 11, postscreening, port 3
Screeners in all groups provided examples of negative responses to multiple screening occasions, caused by the discontinuation of direct flights, for example, screening on both exit and entry for a single journey and on stopovers, and multiple screening during the 21-day incubation period.
Participant 4: They were [name of country] diplomats who'd been screened here 2 days ago and had travelled to [European city] on business and had come back into Port 1. And because our procedure that we have been passed down from above … is that they have to be rescreened….
Monitoring system and adherence to advice
Screening involved monitoring the temperature of category 3 travellers throughout the potential incubation period, enabling early isolation and treatment if any travellers developed symptoms. The appropriateness of this process was questioned by travellers because they were already motivated to report signs and symptoms. Indeed, some screeners felt that HCWs should be allowed to take responsibility for protecting themselves and the public. Despite this, travellers reported adhering to screeners' advice - monitoring their symptoms and adhering to PHE-recommended actions appropriate to their category of risk (eg, not taking public transport). However, a small number of travellers described being unsure when to report signs and symptoms, and some viewed screening and the related restrictions on return to the UK as a poor way to treat HCWs.
[The monitoring process is] intrusive and, (…) paternalistic and patronising because, (…) as a HCW, I'm not going to be so cavalier about my own health and the health of my family or the nation that if I suddenly started to feel ill, having been in contact with an Ebola patient, I wouldn't have thought, 'Ooh, perhaps I'd better talk to somebody'. So the fact that we then sort of are made to take our temperatures and are made to ring somebody up to say that we're feeling well, to be perfectly honest, I really think that was a complete waste of time. Traveller 6, prescreening and postscreening, port 1
Acceptability of screening
Most travellers felt that screening was acceptable, and several travellers commented that familiarity with temperature checking while in West Africa contributed to the acceptability of UK screening. Despite some negative responses, most screened travellers were also described by screeners as accepting and valuing the process. Screeners suggested that returning HCWs were broadly supportive of screening even though it was not expected to have any direct impact on them, although those with higher-risk exposures were described by screeners as being less receptive to screening than those at lowest risk.
A huge vast majority of people have been really receptive and have really welcomed. (…) So I think from a public health safety perspective it's been very, very valued erm, not only from people in the UK but from people coming in as well. Port 1, screener 3
Discussion
This paper presents an exploration of screeners' and travellers' perceptions of the UK EVD port-of-entry screening programme. The findings highlight ways to enhance the acceptability of similar initiatives in the future. Screeners and travellers felt that the screening process served diverse functions consistent with the stated objectives of the programme, although there were some tensions between its intended purpose and its actual perceived benefits. Screening was seen as providing public health guidance and advice, which was generally appreciated by travellers and was considered by both travellers and screening staff as reassuring for the public. However, its effectiveness as a 'screening' service to identify undiagnosed EVD cases was questioned, consistent with national scientific debate 10 and technical guidance 11 at the time.
There was a degree of frustration arising from the inconvenience caused.
Screening process and improvements
We and others 12 highlight the responsive and evolving nature of screening, due in part to the short timescale prior to implementation and to lessons learnt, for example, from instances of inadequate staffing. If a similar process is to be considered for other public health emergencies, then formal plans should be developed taking into account the lessons from this outbreak. It is also important to build in flexibility that can respond to the availability of appropriate facilities, the nature of the threat 13 and factors that may affect acceptance and perception of risk. 14 15 We highlight accounts of frustration caused by multiple screening episodes in one journey. Previous research suggests that detection of symptomatic travellers through exit screening - requiring international collaboration - may be a beneficial approach. Nonetheless, there are limitations to this approach, notably the risk of passengers becoming symptomatic during travel, challenges in monitoring passengers through diverse routes of travel and individual countries' duty to protect their citizens. Furthermore, experiences of delayed transfers to specialist care highlight a need to develop and exercise plans for the management of high-consequence infectious diseases, including the prompt and safe transfer of symptomatic persons from ports. Screening was dependent on the ability to identify eligible travellers; the systems that support this have limitations, and ways to facilitate and improve the identification of travellers need consideration. Direct flights from the main EVD-affected countries to the UK were withdrawn, resulting in people travelling via alternative routes. On arrival to the UK, additional efforts were therefore required to identify travellers coming from affected countries, but there was the potential for travellers to be missed. 10 16 Greater publicised clarity about the programme's objectives may further encourage travellers to self-identify. Screening relied on self-reported information from travellers, who may not have reported the presence of signs and symptoms for a number of reasons, including a perception of journey delays. 10 Screener experiences of delivering the programme suggest that organisational support for internally seconded staff involvement is important. Training for similar initiatives, if screening is part of the response, should be tailored to the experience of attendees, and training for managing challenging passenger encounters may be necessary.
Acceptance
This evaluation highlights interesting contradictions between programme acceptance and the perceived purpose and experience of screening. The process of screening appeared to be valued overall and was perceived to be primarily for the provision of information and reassurance. Some participants considered that screening was unlikely to identify individuals with EVD signs and symptoms, suggesting it was driven by political concerns rather than empirical evidence. Indeed, the appropriateness of the term 'screening' to describe the process was queried. Research suggests that if exit screening were effective, then screening on entry should only identify those travellers who developed symptoms during the flight. 10 16 Modelling estimated that exit screening would identify 35.6% of infected travellers, that screening after a further 24-hour period would identify 5.9% of EVD cases, and that screening after a 12-hour period would identify 3.4% of EVD cases. 17
In line with estimates for other infections, 18 the positive predictive value of screening for EVD was expected to be very low. 16 Due to the non-specific symptoms of EVD, and the evolving symptom progression, screening was expected to produce false-positive and false-negative results. 12 Acceptance in spite of these limitations, and of the degree of inconvenience and disruption 19 involved, may be explained by the sense of reassurance experienced from screening. An alternative explanation is that screening promoted a false sense of reassurance. Nevertheless, our findings suggest that having a screening process was appreciated. A study exploring experiences of an EVD screening programme in the USA found that screened US citizens were less concerned about becoming unwell and about the onward transmission of EVD than citizens of EVD-affected countries. 20 Additional benefits of screening could include the identification of persons suffering emotional distress, who could be referred to mental health services. 14
Acceptable to whom?
Acceptance of screening by travellers has previously been reported. 9 In the current evaluation, screening was generally seen as less relevant and impactful for returning HCWs. This contrasts with members of the public or non-healthcare professionals, who appeared to value the use of risk categories. It is noteworthy that screening was perceived to have been introduced in part to reassure the public. There is some evidence that HCWs may underestimate their risk of contracting EVD, and some may overestimate their knowledge of the symptoms of the disease. 21 The requirement for category 3 HCWs to report temperature readings to PHE was viewed by some as unacceptable and unnecessary, but it may have allowed for a more objective assessment of the likelihood of disease and provided an opportunity to discuss EVD signs and symptoms. A previous evaluation suggested that a majority of those monitored do not trust their thermometer readings to be accurate. 20
Enhancing acceptability
There have been previous reports of stigma associated with return from an EVD-affected country, 14 15 and monitored persons have reported negative consequences including not being allowed to work and being shunned by family. 20 Screening could potentially provide reassurance and formal permission to continue routine activities. To enhance acceptability and prevent stigmatisation, the objectives and intended outcomes of screening should be clear to staff and travellers, particularly returning HCWs. In this instance, this could have included emphasising that screening was designed to provide tailored advice and information in addition to appropriate access to further investigations when needed. A description of the programme and increased awareness of the credentials of screeners may also have helped improve acceptability. Involving those likely to be affected by screening - screeners and travellers - in all aspects of the design and implementation of future initiatives is also likely to enhance acceptability. Furthermore, acceptability is likely to increase through efforts to limit delays to travellers, and approaches to develop a more streamlined system for frequent travellers should be considered.
Strengths and limitations
This is the first qualitative evaluation of enhanced UK EVD screening, and it elicited a diverse range of experiences from those delivering and receiving screening.
The researcher (JMK) conducting most interviews was not a PHE employee and was initially unfamiliar with the programme, both of which were explained to the participants at the beginning of the interview to help ensure participant honesty. This also facilitated a critical distance in the interpretation of the data. The researcher gained familiarity with the research setting and programme by observing screening and/or the screening set-up in all four ports. Focus groups with screeners began approximately 7 months after the start of screening, which meant the screeners had had time to adjust to the process. This study cannot make conclusive statements about whether screening achieved its objectives; it focused on eliciting accounts from those involved as opposed to more objective measures of programme activity. Although a purposeful sample of screeners was requested, a limitation of this approach was the inability to control this process and thereby assess whether a sample biased in relation to the perceptions of participants was achieved. Furthermore, the screeners may have felt conflicted between offering a professional versus a personal opinion, especially given that the focus groups were conducted at the screening site or within the screeners' working environment. An additional useful aspect would have been to capture the feelings of stakeholders involved in the higher-level management of the programme, who may have given greater insights into programme design and implementation decision-making processes, though they contributed to the design of the study and the drafting of the paper. It is possible that the travellers who were willing to participate may have held stronger views on screening than those who refused, or vice versa. Participants may also have felt reluctant to provide honest accounts to the researcher due to concerns about confidentiality and the researcher's level of independence from PHE, despite this being stressed at the beginning of the interview. The recruitment rate for travellers (31.5%) is relatively low but not unexpected, given that the approach relied on written invitation only and given the time elapsed since screening; only three participants were assigned risk category 2 or 3. Therefore, the participants' views may not reflect those of all travellers screened. Traveller experiences may be subject to recall bias, given the variable amount of time since screening and reports of extreme tiredness during screening. However, as the findings demonstrate a range of positive and negative experiences and patterns of commonalities as well as divergent views, these biases are expected to be minimal.
Conclusions
According to travellers and screening staff, the UK EVD screening programme was acceptable and was perceived to be effective at assessing individual risk and providing information and advice to travellers according to that risk. In future, if similar programmes are being considered, it is important that there is clarity as to the objectives of screening and that efforts are made to streamline processes and minimise disruption. Any future screening programme should be tailored to the nature of the threat posed, be developed with close involvement of key recipients and consider the specific needs of healthcare workers taking part in the humanitarian response as well as general travellers.
Contributors
All authors contributed to the design of the evaluation and the interpretation of the data, and read and approved the final version of the manuscript.
JMK led the design of the evaluation, performed the focus groups and the majority of the interviews, analysed the transcripts and led the drafting of the manuscript with support from SA. CC conducted one traveller interview. MHo led the selection of travellers using PHE databases.
Applications of direct-to-consumer hearing devices for adults with hearing loss: a review
Background This systematic literature review is aimed at investigating applications of direct-to-consumer hearing devices for adults with hearing loss. The review discusses three categories of direct-to-consumer hearing devices: 1) personal sound amplification products (PSAPs), 2) direct-mail hearing aids, and 3) over-the-counter (OTC) hearing aids.
Method A literature review was conducted using EBSCOhost and included the databases CINAHL, MEDLINE, and PsycINFO. After applying previously agreed inclusion and exclusion criteria, 13 reports were included in the review.
Results Included studies fell into three domains: 1) electroacoustic characteristics, 2) consumer surveys, and 3) outcome evaluations. Electroacoustic characteristics of these devices vary significantly, with some meeting the stringent acoustic criteria used for hearing aids while others produce dangerous output levels (ie, over 120-dB sound pressure level). Low-end (or low-cost) devices were typically poor in acoustic quality and did not meet the gain levels necessary for most adult and elderly hearing loss patterns (eg, presbycusis), especially in the high frequencies. Despite direct-mail hearing aids and PSAPs being associated with lower satisfaction when compared to hearing aids purchased through hearing health care professionals, consumer surveys suggest that 5%-19% of people with hearing loss purchase hearing aids through direct mail or online. Studies on outcome evaluation suggest positive outcomes of OTC devices in the elderly population. Of note, OTC outcomes appear better when a hearing health care professional supports these users.
Conclusion While some direct-to-consumer hearing devices have the capability to produce adverse effects due to the production of dangerously high sound levels and internal noise, the existing literature suggests that there are potential benefits of these devices. Research on direct-to-consumer hearing devices is limited, and the current published studies are of weak quality. Much effort is needed to understand the benefits and limitations of such devices for people with hearing loss.
Hearing loss and its management
According to the World Health Organization (WHO), hearing loss is the fifth leading cause of years lived with disability. 1 Research has shown that untreated hearing loss in adults has been linked to cognitive decline, 2 depression, 3 social isolation, 4 increased incidence of dementia, 5 and even falls. 6 The prevalence of hearing loss correlates strongly with increasing age. As of 2012, 164.5 million persons aged 65 years and older (~33% of this age group) reported disabling hearing loss. 7 The number of individuals in this age group (65 and older) is growing at a much faster rate (~37% growth from 2010 to 2019) than younger age groups. 8 Given this growth of the aging population, experts recognize untreated presbycusis as a looming public health concern. 9,10 Historically, hearing aids have been the primary remediation option for individuals affected by medically uncomplicated presbycusis (ie, age-related hearing loss). Uptake of hearing aids, especially among adults, however, has been poor. Data from the US indicate that the unmet need for hearing health care is high, with between 67% and 86% of adults with hearing loss failing to use hearing aids. 10
One study demonstrated that fewer than 25% of adults aged 80 and above with self-reported hearing problems - the cohort with the highest prevalence of hearing loss - use hearing aids. 11 The reasons for poor hearing aid uptake among adults are myriad. In the US, approximately 20 million persons aged 60 years or older have an untreated, clinically significant hearing loss, of whom nearly 6 million are of low income. 12 These figures suggest that there is a substantially large population of individuals, even in high-income countries, who may have difficulty paying for high-priced hearing care services. While hearing aids are often not reimbursed by health insurance and high costs are a primary issue, finances are not the only barrier to uptake. Other explanations for poor uptake include stigma, negative word-of-mouth about hearing aids, and the inconvenience of multiple appointments with hearing health care professionals. 13
Direct-to-consumer approach in health care
While the audiology community and those they serve have attended to issues related to the effects of untreated hearing loss and poor hearing aid uptake, health care has undergone a consumer-driven revolution. Popularity is growing for a direct-to-consumer approach to health care service delivery, which is believed to provide greater accessibility to services and affordability for patients. The need for a direct-to-consumer approach has also been discussed in relation to hearing care service delivery. Contrera et al 14 outlined five major obstacles to obtaining effective hearing and rehabilitative care: awareness, access, treatment options, cost, and device effectiveness. A direct-to-consumer delivery model could partially address some of these obstacles (eg, access and cost).
Direct-to-consumer hearing devices
Led by the baby-boomer generation and access to low-cost, high-tech smartphones, consumers are demanding to be more actively involved in their health care decisions. Over the past few years, the increase in the computing capacity of technology (eg, smartphones) has led experts to believe that health care will become more accessible and affordable through these technologies. Undoubtedly, this democratization of health care is already having an impact on the hearing health care industry and has led to a proliferation of the amplification devices available on the market today, as shown in Table 1. Similar to traditional hearing aids, regulated since the 1970s by the FDA, a variety of hearing technologies can be purchased through direct mail, via the Internet, or OTC, with minimal involvement from a hearing care professional. Increased processing power has led to a rise of self-programming hearing aids, which enable end users to fit and program their own hearing aids without assistance from a hearing care professional. One recent feasibility study of self-programming hearing aids showed that 73% of older adults were able to successfully insert these devices into their ears, and 55% of these same adults could complete a 10-step fitting process without the assistance of a professional. 15 Direct-to-consumer amplification is not confined to traditional hearing aids. Recently, there has been a dramatic increase in the number of PSAPs that can be purchased online or in retail stores. Unregulated by the FDA, PSAPs can be purchased by consumers directly from multiple manufacturers without the involvement of a licensed hearing care professional. 16
In addition to a wide range of prices (US $20 to over $400), PSAPs have a varying range of quality, with a few operating electroacoustically similarly to traditional hearing aids, according to one recent study. 17 There are various advantages and limitations of current PSAPs. 18,19 Additionally, since FDA regulations have not kept pace with technological innovations, the same company can manufacture both traditional hearing aids and PSAPs. Thus, an identical product may have two different labels, hearing aid and PSAP, leading to confusion for both consumers and professionals. Beyond PSAPs, there are a few other types of direct-to-consumer hearing devices. One, broadly classified as hearables, is paired with smartphones and combines several features, such as biometrics, music storage, hearing protection, and amplification, in a device worn in the ear. 16,20,21 Another such technology is smartphone-enabled amplification applications (apps). These two types of devices are not addressed in the current review.
Definitions
The FDA defines a PSAP as a wearable consumer electronic product intended for consumers without hearing loss to amplify sounds in certain environments, such as recreational activities. PSAPs come in a range of style options, from those similar to Bluetooth headsets to those almost identical to in-the-ear or behind-the-ear hearing aids. While PSAPs are direct-to-consumer products, it is important to note that, per the FDA, PSAPs are not intended to compensate for impaired hearing (eg, they cannot treat, cure, or mitigate disease, nor alter the structure or function of the body). For this reason, the FDA refrains from asserting regulatory authority over them, except incidentally under the Radiation Control for Health and Safety Act of 1968. This act applies to all sound amplification equipment and, among other things, seeks to ensure that there are volume limits to prevent hearing damage. The FDA defines a hearing aid as any wearable instrument or device designed for, offered for the purpose of, or represented as aiding persons with, or compensating for, impaired hearing. All hearing aids must comply with specific requirements of the FDA. On the other hand, the FDA regulates OTC hearing aids. The main difference between a traditional hearing aid and an OTC hearing aid is that the OTC device is considered a direct-to-consumer product. Thus, it does not require consultation with or dispensing by a hearing health care professional, although the FDA requires that a person buying a hearing aid be examined to rule out certain red-flag medical conditions related to the ears, or that a medical waiver declining a medical evaluation be signed by the patient. OTC hearing aids are also often referred to as direct-mail hearing aids. Other than the lack of FDA regulation of PSAPs, the main difference between PSAP and OTC devices is the intended use of the device. As of now, PSAPs are intended to be used by people with normal hearing who want an enhancement of certain environmental sounds, whereas OTC hearing aids are directed towards people with mild-to-moderate hearing loss to improve their hearing and communication.
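The distinctions drawn in this section can be summarized as a simple decision rule. The sketch below is a mnemonic paraphrase of the definitions above, not a regulatory classification tool, and its two boolean inputs are our simplification.

```python
# A mnemonic decision rule paraphrasing the definitions above. It is an
# illustration of the distinctions, not a regulatory classification tool;
# the two boolean inputs are our simplification.

def device_category(intended_for_hearing_loss, dispensed_by_professional):
    if not intended_for_hearing_loss:
        # Situational amplification for consumers without hearing loss.
        return "PSAP"
    if dispensed_by_professional:
        # Fitted and dispensed via a hearing health care professional.
        return "Traditional hearing aid"
    # Direct-to-consumer device aimed at mild-to-moderate hearing loss.
    return "OTC (direct-mail) hearing aid"

print(device_category(False, False))  # PSAP
print(device_category(True, True))    # Traditional hearing aid
print(device_category(True, False))   # OTC (direct-mail) hearing aid
```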
Potential regulatory changes
Two separate organizations that advise the American federal government (ie, the President's Council of Advisors on Science and Technology [PCAST] and the National Academies of Sciences, Engineering, and Medicine) recently recognized that PSAPs and OTC devices may play a crucial role in addressing the unmet needs of adults with untreated presbycusis. 22 Given both the potential changes to FDA regulations and the rapid pace of innovation in amplification technology, this systematic literature review investigates the current published findings regarding these devices, with the secondary purpose of uncovering questions within this area of emerging consumer-driven amplification that warrant further study. A recent paper by Blustein and Weinstein provides more details on the regulatory changes recommended by the PCAST. 23 The current literature review is aimed at investigating the applications of direct-to-consumer hearing devices for adults with hearing loss. In this review, we focus on three categories of direct-to-consumer hearing devices: PSAPs, direct-mail hearing aids, and OTC hearing aids.
Search words
The search was conducted with the following words/phrases: cheap hearing aids, personal sound amplification systems, personal sound amplification products (PSAPs), personal sound amplification devices, direct-mail hearing aids, over-the-counter (OTC) hearing aids, direct-to-consumer hearing aids, direct-to-consumer hearing devices, hearing amplifier, sound amplifier, basic hearing aid, self-fitting hearing aid, affordable hearing aid, and hearable(s).
Inclusion and exclusion criteria
Due to the limited number of studies in this area, all studies published in peer-reviewed journals and reports from non-peer-reviewed journals/magazines were included in the review regardless of their study design, as long as they met the inclusion criteria. Papers were excluded if the study did not meet the following criteria:
1. Population - adults with hearing loss
2. Condition - electroacoustic characteristics, consumer market surveys, and outcome studies
3. Context - studies focusing on direct-to-consumer hearing devices
4. Study type - any study design
5. Language - studies that were published in English
6. Timescale - no restrictions were applied
Overall, the database search resulted in a total of 213 records of articles. A manual search was also conducted through conference papers and through the reference lists of key papers, and an additional 21 reports were identified. Abstracts of all 234 records were screened, and subsequently, the full text of 25 reports was assessed for eligibility. After applying the inclusion criteria, 13 studies were found to be relevant for the current review. Figure 1 shows the process followed in study identification, eligibility screening, and inclusion of papers. Table 2 provides a summary of the studies included in this literature review.

Table 2 Summary of the studies included in the review
One peer-reviewed journal study: Examined the amplification characteristics of ten low-cost (≤US $65) OTC devices. This laboratory study used the ANSI S3.22 standard for test box assessments and real-ear measurements on ten normal-hearing adults. Performance of the majority of OTC devices was within the ANSI standard limits for a typical HA, although some were outside the limits for EIN and THD. Overall, OTC devices were low-gain hearing devices with little-to-no high-frequency output and were deemed unable to meet the needs of the majority of older adults with presbycusis, who are likely the more common OTC device users. The researchers suggested that only patients with mild-to-moderate low-frequency reverse-sloping HLs (eg, early Meniere's disease or otosclerosis) may benefit from use.
Callaway and Punch 25 (peer-reviewed journal): Some of the OTC devices were able to match the target gains in simulated conditions, although the authors suggest that factors such as an ineffective volume control function, high internal noise, and irregular frequency responses may limit the potential benefit to people with HL.
Smith et al 17 (peer-reviewed magazine): Evaluated the amplification characteristics of low-end and high-end PSAPs and HAs. Laboratory ANSI S3.22 standards were used for test box assessments and real-ear measurements in a simulated condition using a KEMAR. All high-end HAs were able to fit most HL configurations, whereas two high-end PSAPs and one app were able to meet the moderate HL configuration. Most low-end HAs and PSAPs produced inappropriately high gain at low frequencies, whereas high-end devices produced appropriate amplification for moderate HL configurations. Low-end PSAPs and HAs were found to be inappropriate for any severity and configuration of high-frequency HL.
Kochkin 27 (peer-reviewed magazine; cross-sectional survey of consumers comprising 3,174 HA owners and 4,339 non-adopters of HAs): Aimed at estimating the population of PHL who use direct-mail HAs and PSAPs and at comparing the characteristics of those who use one-size-fits-all products with those who use custom HAs. Estimates suggested that about 3.3% of HA owners received their device through direct-mail orders, and PSAP owners were found to be 4.8% of the non-adopter population. PSAP owners paid less than US $50 for their device, compared to direct-mail HA owners, who paid a median of US $237. Direct-mail and PSAP owners earned US $10,000 less per year, were less likely to buy binaural HAs, and used their devices less (ie, 3 hours a day compared to 10 hours a day) than those who purchased custom HAs. Nearly 75% of direct-mail and PSAP owners were candidates for custom HAs, although estimates suggested that <18% of users substitute PSAPs for custom HAs.
One consumer survey (cross-sectional Internet-based survey of a national sample of 3,459 US adults who had at least a little trouble hearing): Only a fraction of those diagnosed with HL (6%) and of those with at least some trouble hearing (4%) own PSAPs, although two out of five are interested in purchasing direct-to-consumer hearing devices. Although most consumers with trouble hearing would consult a hearing care professional, few were interested in seeking information online (14%), from friends and family (13%), or from others with hearing difficulties (10%). More than two-thirds of the sample preferred purchasing nonprescription hearing devices (ie, by mail or from drug stores). Current PSAP owners mainly used them for listening to TV, although potential buyers were interested in exploring their use in a wider range of situations.
JapanTrak 29 (consumer survey report).
Conference paper 34 (cross-sectional laboratory comparison of 23 adults [23-83 years] with mild-to-moderate HL): Examined preferences for PSAPs and HAs via listening to different sounds processed by these devices. In laboratory settings, PSAPs performed as well as HAs for everyday noises and music, but HAs were significantly more preferred than PSAPs for speech; different devices process some types of sounds more effectively than other types of sounds.
Tedeschi and Kihm 35 (peer-reviewed magazine; pilot study of 29 older people [aged 60 or older] with mild-to-moderate HL who used PSAPs and provided outcome data through surveys after 3 and 6 weeks): Examined the outcomes of direct-to-consumer hearing devices with and without professional guidance. Some of the participants (13%) were not able to self-identify the red-flag conditions that would require medical consultation, nearly half were not able to correctly self-assess their degree of loss, and nearly a third of the participants with moderate loss could have delayed seeking help from professionals. Individuals supported by hearing health care professionals experienced better outcomes on various indicators, including daily usage, expectations, overall satisfaction, willingness to recommend, and perceived success.

Literature searches resulted in a total of 13 reports concerning direct-to-consumer hearing devices: five peer-reviewed journal articles, four peer-reviewed magazine articles, three consumer surveys, and one conference paper.
Electroacoustic characteristics
The literature search identified four published reports on electroacoustic characteristics: three peer-reviewed publications that focused on OTC hearing aids [24][25][26] and one with emphasis on PSAPs, published in a non-peer-reviewed professional magazine. 17
This laboratory study used ANSI S3.22 standard for test box assessments and real-ear measurements on ten normal hearing adults Devices deemed unable to meet needs of the majority of older adults with presbycusis who are likely the more common OTC device users. Researchers suggested that only patients with mild-to-moderate low-frequency reverse sloping HLs (eg, early Meniere's disease or otosclerosis) may benefit from use. Callaway and Punch 25 Peer-reviewed journal Some of the OTC devices were able to match the target gains in simulated conditions, although authors suggest that the factors such as ineffective volume control function, high internal noise, and irregular frequency response may limit the potential benefit to people with HL. Smith et al 17 Peer-reviewed magazine evaluated low-end and high-end PSAPs and HAs amplification characteristics All high-end HAs were able to fit most HL configurations, whereas two high-end PSAPs and one app were able to meet the moderate HL configuration. Laboratory ANSI S3. standards were used for test box assessments and real-ear measurements on a simulated condition using a KeMAR Most low-end HAs and PSAPs produced inappropriately high gain at low frequencies, whereas high-end devices produced appropriate amplification for moderate HL configurations. Low-end PSAPs and HAs were found to be inappropriate for any severity and configuration of high-frequency HL. Kochkin 27 Peer-reviewed magazine Aimed at estimating the population of PHL who use direct-mail HAs and PSAPs and also to compare the characteristics of those who use one-size-fits-all products with those who use custom HAs estimates suggested that about 3.3% of the HA owners received their device through direct-mail orders. PSAP owners were found to be 4.8% of the non-adopters population. PSAP owners paid less than US $50 for their device when compared to direct-mail HA owners who paid a median of US $237. Survey of consumers Used a cross-sectional survey design and consisted sample of 3174 HA owners and 4339 non-adopters of HAs Direct-mail and PSAP owners earned US $10,000 less per year, were less likely to buy binaural HAs, and used devices less (ie, 3 hours a day when compared to 10 hours a day) than those who purchased custom HAs. Nearly 75% of direct-mail and PSAP owners were candidates for custom HAs, although estimates suggested that ,18% users substitute PSAPs for custom HAs. Only a fraction of those diagnosed with HL (6%) and those with at least some trouble hearing (4%) own PSAPs, although two out of five are interested in purchasing direct-to-consumer hearing devices. Study used a cross-sectional Internet-based survey design and included a national sample of 3,459 US adults who had at least little trouble hearing Although most consumers with trouble hearing would consult hearing care professional, few were interested in seeking information online (14%), from friends and family (13%), and others with hearing difficulties (10%). More than two-thirds of the sample preferred purchasing nonprescription hearing devices (ie, mail or drug stores). Current PSAP owners mainly used them for listening to Tv, although potential buyers were interested in exploring its use for wider situations. JapanTrak 29 Consumer survey report 865 Direct-to-consumer hearing devices for adults with hearing loss magazine. 17 Table 3 OSPL90 is the level of output provided by a hearing device when the input is set to 90-dB SPL and with full-on gain. 
ANSI S3.22 tolerances for OSPL90 are expected to be within ±4 dB of the value provided by the manufacturer's 34 Conference paper examined the preferences of PSAPs and HAs via listening to different sounds processed by these devices In laboratory settings, PSAPs performed as well as HAs for everyday noises and music. Cross-sectional comparison study conducted in a laboratory HAs were significantly more preferred than PSAPs for speech. 23 adults (23-83 years) with mild-to-moderate HL Different devices process some types of sounds more effectively than other types of sounds. Tedeschi and Kihm 35 Peer-reviewed magazine Pilot study examined the outcome of direct-to-consumer hearing devices with and without professional guidance Some of the participants (13%) were not able to self-identify the red-flag conditions that would require medical consultation, nearly half were not able to correctly self-assess the degree of loss, and nearly a third of the participants with moderate loss could have delayed seeking help with professionals. 29 older people (aged 60 or older) with mild-to-moderate HL who used PSAPs and provided outcome data through survey after 3 and 6 weeks Individuals supported by hearing health care professionals experienced better outcomes in terms of various indicators, which include daily usage, expectations, overall satisfaction, usage, willingness to recommend, and perceived success. 866 Manchaiah et al specification sheet. However, in many of the studies discussed, the manufacturers of direct-to-consumer devices did not provide specification information. Most of the OTC devices were reported to have an output OSPL90 of 110-to 120-dB SPL, although some were over 130-dB SPL. Although high gains over 130-dB SPL can be useful for greater degrees of hearing loss, this can be problematic for direct-to-consumer purchase; it creates potential issues such as feedback, noise damage, and so on. Peak responses ranged between 200 to 2,000 Hz, although more close observation revealed peak values ranging between 1,400 and 2,000 Hz. The frequency response curve showed a range of up to 8,000 Hz (higher end) in some newer devices, although most were limited to about 4,000 Hz. The differences were also noted in terms of the device cost, as the low-end PSAPs tend to provide more low-frequency gain, 17 suggesting limited benefit for adults with high-frequency hearing loss. THD reveals the percentage of harmonic distortion (nonlinear added overtones) present in hearing device output. The ANSI S3.22 standard for TDH is 3% maximum. Generally, most of the devices in these published studies meet the standard for harmonic distortion, but a small number of the low-end devices revealed excessively high values (ie, outliers). EIN is a measure of the internal circuit noise of a hearing device. The ANSI S3.22-1987 standard for EIN is 28 dB maximum with a tolerance from this standard of ±3 dB (ANSI 2014). In these published studies, the EIN ranged between 23.85 and 54.48 and 19.8 and 52.9 dB for PSAPs and OTC devices, respectively. However, only a limited number of devices (ie, 17 of 47 devices) from the four of the studies passed the tolerance level of 28 dB, making EIN the least met criteria by the direct-to-consumer devices. These studies also evaluated how closely the device gain and output could match a prescribed fitting target for various degrees of hearing loss. 
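The tolerance checks described above lend themselves to a simple screening routine. The following is a minimal Python sketch, assuming toy pass/fail rules distilled from the figures quoted in the text (OSPL90 within ±4 dB of the manufacturer's specification, THD at most 3%, and EIN at most 28 dB plus a 3-dB tolerance); the DeviceMeasurement fields and the exact tolerance handling are illustrative assumptions, not the official ANSI S3.22 test procedure.

from dataclasses import dataclass

@dataclass
class DeviceMeasurement:
    ospl90_db: float        # measured OSPL90, dB SPL
    ospl90_spec_db: float   # manufacturer-stated OSPL90, dB SPL
    thd_percent: float      # total harmonic distortion, %
    ein_db: float           # equivalent input noise, dB

def ansi_s322_screen(m: DeviceMeasurement) -> dict:
    """Screen one device against the tolerance limits quoted in the text."""
    return {
        "ospl90_ok": abs(m.ospl90_db - m.ospl90_spec_db) <= 4.0,
        "thd_ok": m.thd_percent <= 3.0,
        "ein_ok": m.ein_db <= 28.0 + 3.0,  # 28-dB standard plus 3-dB tolerance
    }

# Example: a low-end device with high circuit noise fails the EIN screen.
print(ansi_s322_screen(DeviceMeasurement(118.0, 115.0, 1.2, 41.5)))
# -> {'ospl90_ok': True, 'thd_ok': True, 'ein_ok': False}

As in the reviewed studies, a device can pass the output and distortion screens while still failing on internal noise, which is why EIN was the least-met criterion.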
These studies also evaluated how closely the device gain and output could match a prescribed fitting target for various degrees of hearing loss. Probe microphone measurements on the KEMAR were used to verify how closely these devices could match a prescription target (ie, NAL-NL2) for various degrees of hearing loss. Results of these studies varied greatly, with a few devices matching target gain within 3 dB for mild-to-moderate high-frequency hearing losses, while the majority missed the prescribed fitting target by more than 10 dB and had limited high-frequency gain. Generally, the lower-end (ie, low-cost) direct-to-consumer hearing devices were found to be of poor electroacoustic quality and, thus, of no value to individuals with mild-to-moderate hearing loss. On the other hand, a few of the higher-end products performed electroacoustically very similar to traditional hearing aids.
Survey of consumers
The literature search identified five reports that present the results of consumer surveys. Two of these were published in non-peer-reviewed professional magazines, while the remaining three were included in consumer survey reports. MarkeTrak VIII estimates suggested that about 3.3% of hearing aid owners received their device through direct-mail orders and 4.4% of hearing aid non-adopters own PSAPs. 27 An Internet-based consumer survey conducted by the Consumer Electronic Association (today known as the Consumer Technology Association) in the US suggested that only a small portion of those diagnosed with hearing loss (ie, 6%) and of those with at least some trouble hearing (ie, 4%) own PSAPs. 28 Large-scale consumer surveys in Japan indicated that a substantial proportion (ie, 14%-19%) of hearing aid owners purchase their devices through direct-mail or online sources. 29,30 MarkeTrak data revealed some differences in demographic factors between hearing aid and direct-to-consumer hearing device owners. Direct-mail and PSAP owners were more likely to be male, older, retired, lower income, more experienced hearing aid users, less likely to buy binaural hearing aids, and more limited users (ie, 3 hours per day, compared to 10 hours per day with custom hearing aids). 27,31 Three-quarters (ie, nearly 75%) of direct-mail and PSAP owners were candidates for custom hearing aids, although estimates suggested that less than 18% of users substitute PSAPs for custom hearing aids. 31 Also, it appears that current PSAP owners mainly use them for listening to TV. However, those who were interested in purchasing PSAPs were keen to explore their use in a wider range of situations in daily life. 28 Exploratory analysis of survey data indicates that satisfaction with hearing aids purchased online is lower than among those who purchased in hearing aid centers. 30 One important factor could be professional guidance and support. Direct-mail hearing aids provided significantly less real-world benefit than hearing aids dispensed by professionals who adhere to the highest levels of best practice. 31 However, consumers believe both direct-mail and traditional hearing aids provide equal benefit. Additionally, some consumers were willing to make trade-offs in benefit for a substantial cost reduction. 31
Outcome evaluation
The literature review identified two studies, published in peer-reviewed journals, that evaluated the outcome of OTC hearing aids. 32,33 Another study examined the preference between PSAPs and hearing aids in a laboratory condition. 34 In addition, Tedeschi and Kihm 35 examined consumer behavior regarding direct-to-consumer devices with and without professional consultation.
McPherson and Wong 32 evaluated the effectiveness of a low-cost OTC hearing aid (ie, the ReSound Avance HE4, which costs approximately US $125) in elderly people with mild-to-moderate hearing loss in Hong Kong. Specifically, they focused on objective aided hearing measures and subjective self-reported performance and benefit. Nineteen older adults used the OTC device for a 3-month period. Participants underwent aided hearing threshold and real-ear insertion gain measurements. In addition, they completed self-report measures related to hearing aid outcome and participated in an open-ended interview. The comparison between target and actual insertion gain measures suggested that the OTC device provided satisfactory gain at 2,000 and 4,000 Hz, but under-amplified at 1,000 Hz (4.65-dB difference from target gain). Most of the participants indicated that the device provided benefits, and all of the participants rated the device as "worth the trouble" of wearing. Sixteen of the 19 participants used the device 1-8 hours a day. The outcomes of all three self-reported measures indicated that there were some benefits from using the device. The interview highlighted some benefits (eg, lightweight, invisible device, improvement of hearing ability, feeling of greater security and happiness) and shortcomings (eg, difficulty handling the device, hearing aid-related problems such as feedback, and unclear sound at close distance) of the device. More recently, Sacco et al 33 studied the clinical value of a newly developed OTC device (ie, TEO First®, which costs approximately US $250) for elderly people with mild-to-moderate hearing loss in France. Participants were fitted with the device following a detailed audiological test and instructions. Thirty-one participants used the OTC device for a 1-month period. An outcome assessment was performed before fitting the device and following 1 month of use. The outcome assessment included a self-reported measure of quality of life, a survey on the acceptability of the device, and overall satisfaction. Quality-of-life improvements were noted in terms of decreased perceived hearing difficulties and decreased negative emotions while watching TV, during conversation without background noise, during conversations in noisy backgrounds, and during conversation with several people. Self-reported average daily use of the device was 60 minutes. Although these benefits were noted and no adverse events were reported during the study, the acceptability of the device was low to moderate. Xu et al 34 examined the preferences of adults with hearing loss towards PSAPs and hearing aids for different listening sounds processed by these devices in a laboratory condition. Twenty-three adults with mild-to-moderate hearing loss participated in a listening task and provided preference ratings on three stimuli (ie, speech - dialogue in quiet, everyday noises, and music) under three device conditions (ie, two premium BTE hearing aids, two basic BTE hearing aids, and two high-quality PSAPs). Hearing aids (combined) were significantly more preferred by participants than PSAPs for speech sounds, whereas no differences in preferences were noted for environmental noises and music. The authors suggested that different devices process some types of sounds more effectively than others.
The main limitation of this study is that the devices were fit to an average hearing loss without individualizing the settings, and some advanced features (eg, directional microphones, vented earmolds) on the hearing aids were turned off. While these results provide interesting observations, caution must be taken in generalizing the results to real-life settings. In a recent pilot study, Tedeschi and Kihm 35 examined how consumers react to and behave in relation to direct-to-consumer devices with and without professional consultation. Over a 12-week time window, divided into two 6-week phases, their study compared a group of consumers' experience with OTC products (Phase 1) to the traditional service delivery model (Phase 2), in which a professional directs the care. The study participants included 29 older adults (aged 60 or over) with mild-to-moderate hearing loss. Although it appears that none of the study participants were directly asked to self-identify any possible red-flag conditions, four of the 29 individuals (13%) were referred to a physician for a possible medical condition. Also, one participant was excluded from the study because of an outer ear infection identified in a preliminary screening before purchasing hearing devices. Twenty-nine eligible study participants completed Phase 1 of the pilot by using a self-selected PSAP or ready-to-wear hearing aid for 6 weeks. At the end of their 6-week trial with the OTC product/process, about half reported that the OTC device helped some or all of the time and reported willingness to recommend one to a friend who had a hearing problem. Notably, another one-quarter of the group stopped using OTC devices entirely during Phase 1. Phase 2 of the study, which involved direct care from a hearing care professional, was completed by 18 of the 26 participants. Although the details of the participants' interaction with the professional were not disclosed in the article, each participant had their level of usage, expectations, and satisfaction measured twice, 3 weeks and 6 weeks post-intervention. Results indicated that 83% were satisfied with the provider-driven fit, compared to 48% who were satisfied with the OTC device fitting. The article did not report whether these differences in outcome between the two phases were of statistical or practical significance.
Quality analysis of existing literature
Due to the limited number of publications in this area, all studies published in both peer-reviewed and non-peer-reviewed journals, consumer surveys, and conference papers were included. The studies on electroacoustic characteristics used conventional study designs with test box measures and simulated real-ear measures on the KEMAR. The consumer surveys generally used convenience sampling, which may have resulted in sampling bias. In addition, studies on patient outcomes with these devices used an open-trial design without a control group or blinding. This may have resulted in some bias, as hearing aid research has a documented placebo effect. 36 Although no structured analysis of quality was performed, the study design of the existing literature in this area was found to be generally poor. Further, the studies cited here have higher chances of bias due to the sampling methods used and the lack of blinding of either the participants or the researchers.
Summary of main findings
The current systematic literature review was aimed at investigating the applications of direct-to-consumer hearing devices for adults with hearing loss.
The studies on direct-to-consumer hearing devices fell into three themes: 1) electroacoustic characteristics, 2) consumer surveys, and 3) outcome evaluation. The analysis of physical characteristics based on test box and simulated real-ear measures suggested high variability in terms of electroacoustic characteristics. Of particular note, although most of these devices have an OSPL90 of 110- to 120-dB SPL, some were over 130-dB SPL. High outputs are problematic for the direct-to-consumer approach, as a high output can be potentially harmful, especially for ear canals with smaller physical dimensions. Moreover, most of the devices analyzed in these articles showed peak gain and output response at around 1,400-2,000 Hz, suggesting limited benefit for adults with high-frequency hearing loss (eg, presbycusis). The analysis of THD values suggested that most of the devices, including the low-end devices, were well within the suggested 3% tolerance, with a few low-end devices producing excessively high harmonic distortion (Table 3). In addition, most of the devices seem to have a high degree of internal noise (ie, EIN >28 dB). A device with a high internal noise floor may be problematic, especially for individuals with normal hearing or mild loss, as circuit noise exceeding 30 dB may be audible and even bothersome. High circuit noise is not confined to PSAPs and OTC hearing aids. A recent report by Holder et al 37 indicated that a high number of traditional hearing aids are also prone to equivalent input (circuit) noise that exceeds the ANSI standard. The consumer surveys reviewed here suggest that less than 5% of people with hearing loss in the US purchase direct-mail hearing aids, 27 whereas in Japan, up to 19% of hearing aid owners purchase devices through direct-mail or online sources. 30 Thus, it seems apparent that demographic differences exist between those who own direct-to-consumer hearing devices and those who own traditional custom hearing aids. 27,31 Also, direct-mail hearing aids and PSAPs were associated with lower satisfaction when compared to hearing aids that were purchased through hearing health care professionals. 30,31 While these results are interesting, it is important to note that most of these surveys were not specifically focused on direct-to-consumer hearing devices; therefore, these observations are rather incidental. The studies on outcome evaluation suggested that OTC devices appear to have some benefit for elderly people with mild-to-moderate hearing loss. 32,33 These benefits ranged from improved hearing in quiet and in noisy situations to improved communication and activities of daily living. The acceptability ratings were low to moderate in one study conducted in France, 33 whereas the study in Hong Kong had higher acceptability ratings. 32 One unpublished laboratory study identified that hearing aids were significantly more preferred than PSAPs for listening to speech, although no preferences were noted for listening to everyday noises and music. 34 On a different note, a recent study has reported positive attitudes and likely benefits of PSAPs in adults with normal hearing. 38 However, neither study had a control group, and the outcomes were evaluated on a short-term basis (ie, 1-3 months). Hence, the outcomes of these studies should be considered preliminary findings and interpreted with caution.
Cost of the device
The cost of the device seems to be a factor in terms of the quality and appropriateness of the device for people with hearing loss.
For example, low-end (or low-cost) direct-to-consumer hearing devices were poorer in regard to electroacoustic characteristics. 24,26 In addition, studies suggest that less-expensive direct-to-consumer hearing devices did not meet the gain levels necessary for appropriate amplification of simulated mild-to-moderate hearing loss. 17,25 Hence, consumers and clinicians should bear in mind that, at this stage, the lowest-price device may not be the most appropriate.
Role of hearing health care professionals
Despite the potential benefits of these direct-to-consumer devices, there is some concern in the audiology community that these devices will disrupt the hearing aid market and may result in a more limited demand for clinical care. However, it is important to note that professional services provided by audiologists are found to be one of the biggest differentiating factors in terms of hearing aid success, as indicated by the MarkeTrak VIII report. 39 This is further supported by a recent pilot study, which indicated that participants exposed to both the direct-to-consumer and professional-driven delivery systems experience higher satisfaction scores when working directly with a professional. 35 Another recent qualitative study evaluated the Internet-based delivery of hearing aids and showed that a large number of study participants reported missing the building of trust, the valued guidance, and the expertise of hearing health care professionals. 40 MarkeTrak VIII survey estimates suggest that less than 18% of PSAP users substituted PSAPs for custom hearing aids, suggesting that in the absence of such direct-to-consumer hearing devices, those individuals would have lived with hearing loss without any hearing device. 39 Taken together, these observations suggest that there is a continuing need for audiology services even after a hearing aid market disruption spurred by the availability of direct-to-consumer hearing devices.
Potential advantages and limitations of direct-to-consumer hearing devices
Direct-to-consumer hearing devices may have various benefits and limitations. 19 From the professional literature, however, it is evident that a wide range of opinions has been expressed. While some experts in the field have identified benefits and opportunities, 16,23 others have concerns about the limitations of the direct-to-consumer model. 41 The regulatory changes in relation to direct-to-consumer hearing devices could potentially open a new market and provide accessibility to individuals who would not seek help and intervention through traditional channels. 23 They could greatly reduce the time and money associated with purchasing and using a hearing device. Moreover, as many of these devices are not called hearing aids (eg, PSAPs) and look more like consumer electronic devices than hearing aids, they may reduce the stigma associated with the hearing aid image. 42 On the other hand, there are also potential disadvantages. First, there is a potential risk with the direct-to-consumer model that some individuals with red-flag conditions (eg, sudden deafness, acute or chronic dizziness) who would require medical investigations may not have the opportunity to undergo screening by a hearing health care specialist. Second, users of such devices, if fitted inappropriately, may experience dangerously high sound levels, and they may be at risk of developing further hearing damage and symptoms such as tinnitus.
41 Third, initial bad experiences with inappropriate use of such devices may keep those individuals away from consulting hearing health care professionals, although there is no published data to support this claim. Conversely, some individuals may use these devices as gateway instruments to actual hearing aids. 18
Future directions
There is some movement towards developing self-fitting hearing aids, which may disrupt and alter innovation in hearing health care. 43,44 However, research on direct-to-consumer hearing devices is still in its infancy. Today is probably one of the most interesting times in the hearing industry, as the landscape is changing quickly due to the rapid advancement of amplification technology as well as potential changes in federal regulations of the hearing aid market. It is important to differentiate between traditional hearing aids and direct-to-consumer hearing devices, not only in terms of device characteristics but also in terms of expected patient outcomes. Also, it is important to identify the devices that produce the best patient outcomes across various listening situations. 34 Moreover, the full scope of direct-to-consumer hearing devices may have been overlooked in this manuscript, as we did not include personalized amplification through mobile phones. At this point, it is too early to know whether smartphone-based apps will be an integral part of a dedicated self-fitting hearing aid or will simply allow the end user to control a variety of amplification devices through any number of apps. However, some recent evidence suggests that smartphone-based amplification apps have the potential to improve speech recognition for people with mild-to-moderate hearing loss, as well as people with normal hearing. 45 There is a great need to develop an evidence base with well-controlled and more imaginative studies in relation to direct-to-consumer hearing devices. This could range from determining candidacy to studying the user experience, outcomes, and economic evaluation. Table 4 provides some specific areas that researchers and clinicians may consider.
Conclusion
Direct-to-consumer hearing devices, a category of products comprising PSAPs, direct-mail hearing aids, and OTC hearing aids, have caught the attention of various stakeholders, including audiologists, public health officials, physicians, and consumers. Their rise in popularity appears to be driven by technological advancements in amplification, consumer demand, and suggestions made by federal government advisory boards. Currently, there is limited evidence on the applications of direct-to-consumer hearing devices for people with hearing loss. Our literature review identified studies on direct-to-consumer hearing devices that fall into three general themes: 1) electroacoustic characteristics compared to traditional hearing aids, 2) consumer surveys, and 3) patient outcome evaluation. Although some devices have the capability to cause adverse effects due to the high output sound levels and internal noise they produce, the existing literature suggests that there are some potential benefits of direct-to-consumer hearing devices. The research on direct-to-consumer hearing devices is limited, and the quality of current studies is weak. Much effort is needed to understand the benefits and limitations of such devices for people with hearing loss.
2017-10-15T06:46:54.197Z
2017-05-18T00:00:00.000
{ "year": 2017, "sha1": "8ef779f7af23e03eb5a442bcc00f24b914c9a56d", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=36582", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "97486319f1f12dca193dcfd2754740fb4ac700ed", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
12264894
pes2o/s2orc
v3-fos-license
Assessment of the Brain's Macro- and Micro-Circulatory Blood Flow Responses to CO2 via Transfer Function Analysis
Objectives: At present, there is no standard bedside method for assessing cerebral autoregulation (CA) with high temporal resolution. We combined the two methods most commonly used for this purpose, transcranial Doppler sonography (TCD, macro-circulation level) and near-infrared spectroscopy (NIRS, micro-circulation level), in an attempt to identify the most promising approach. Methods: In eight healthy subjects (5 women; mean age, 38 ± 10 years), CA disturbance was achieved by adding carbon dioxide (CO2) to the breathing air. We simultaneously recorded end-tidal CO2 (ETCO2), blood pressure (BP; non-invasively at the fingertip), and cerebral blood flow velocity (CBFV) in both middle cerebral arteries using TCD, and determined oxygenated and deoxygenated hemoglobin levels using NIRS. For the analysis, we used transfer function calculations in the low-frequency band (0.07–0.15 Hz) to compare BP–CBFV, BP–oxygenated hemoglobin (OxHb), BP–tissue oxygenation index (TOI), CBFV–OxHb, and CBFV–TOI. Results: ETCO2 increased from 37 ± 2 to 44 ± 3 mmHg. The CO2-induced CBFV increase significantly correlated with the OxHb increase (R2 = 0.526, p < 0.001). Compared with baseline, the mean phase shift (in radians) during CO2 administration significantly increased (p < 0.005) from –0.67 ± 0.20 to –0.51 ± 0.25 in the BP–CBFV system, and decreased from 1.21 ± 0.81 to −0.05 ± 0.91 in the CBFV–OxHb system and from 0.94 ± 1.22 to −0.24 ± 1.0 in the CBFV–TOI system; no change was observed for BP–OxHb (0.38 ± 1.17 to 0.41 ± 1.42). Gain changed significantly only in the BP–CBFV system. The correlation between the ETCO2 change and the phase change was higher in the CBFV–OxHb system [r = −0.60; 95% confidence interval (CI): −0.16, −0.84; p < 0.01] than in the BP–CBFV system (r = 0.52; 95% CI: 0.03, 0.80; p < 0.05). Conclusion: The transfer function characterizes the blood flow transition from macro- to micro-circulation by time delay only. The CBFV–OxHb system response, with a broader phase shift distribution, offers the prospect of a more detailed grading of CA responses. Whether this is of clinical relevance needs further studies in different patient populations.
INTRODUCTION
Cerebral autoregulation (CA) describes the ability of the cerebrovascular system to provide a continuous steady-state blood supply to the brain over a wide range of blood pressure (BP) levels. Cerebral perfusion exhibits a linear relationship with BP beyond the CA maintenance range (Kontos et al., 1978; Harper et al., 1984). Low BP leads to low cerebral perfusion and may result in ischemia (Ringelstein et al., 1988; Kleiser and Widder, 1992). The methods used at present to confirm CA integrity have revealed that patients with a disrupted or decreased CA have a greater risk of a poorer outcome compared with those with an intact CA (Kleiser and Widder, 1992; Müller et al., 1995, 2003; Müller and Schimrigk, 1996; Reinhard et al., 2003).
The current methods widely used for CA analysis include high-temporal-resolution methods such as transcranial Doppler sonography (TCD), near-infrared spectroscopy (NIRS), or laser speckle imaging; these methods are non-invasive and can be repeated as often as necessary over time periods of several hours (Zhang et al., 1998; Panerai et al., 1999; Terborg et al., 2000; Murkin and Arango, 2009; Zweifel et al., 2010; Cooper et al., 2011; Taussky et al., 2012; Hecht et al., 2013; Müller and Österreich, 2014; Nielsen, 2014). TCD measures blood flow velocity in large cerebral arteries. The actual measured blood flow velocity depends on several factors, mainly on the BP gradient across the vessel bed and on the vessel diameter. Metabolic factors such as the partial pressure of carbon dioxide (pCO2), mental activity, or the [H+] concentration in the brain tissue may additionally affect vessel diameter and velocity. The measured velocity represents the actual brain demands and corresponds closely to the cerebral blood flow (CBF) when vessel diameter does not change considerably. In this respect, TCD data represent CBF in the brain's macro-circulation. NIRS investigates the cortical microangiopathic capillary vessel bed. The mechanisms of CA transform macroangiopathic blood flow into microangiopathic capillary flow, thus linking the processes. CA is mostly characterized by the relationship between BP and macro-circulatory CBF or its velocity (CBFV). The micro-circulatory CBF can be considered blood flow after CA mechanisms have regulated the macro-circulatory input CBF. The interaction between the macro- and micro-circulatory levels has rarely been investigated (Reinhard et al., 2006; Phillip et al., 2012). Herein, we used high-temporal-resolution methods that particularly allow assessment of the dynamic aspects of such interactions. This approach could provide additional insights into autoregulatory processes and might facilitate the future development of mathematical models for CA status prediction. Such predictability of CA would be helpful in diseases (such as traumatic brain injury, subarachnoid hemorrhage, and stroke) in which secondary ischemic events due to CA failure are known to follow the initial brain damage and worsen the patient's outcome. In order to explore whether one of the two methods, or a combination of both, is more suitable for this purpose, we applied both methods simultaneously.
METHODS
This study was approved by the local ethics committee. It follows the Declaration of Helsinki and good clinical practice standards. All subjects provided written informed consent. Eight healthy volunteers (5 women; mean age, 38 ± 10 years) were investigated while in the supine position with the head elevated ∼30°. Each investigation was performed in the late morning. After mounting all probes and adapting the subject to the experimental setting, baseline values were recorded over a minimum period of 10 min. After that, a CO2-enriched air mixture (7% CO2, 93% oxygen) was administered until a clear CBFV increase was identified. CO2 administration was then maintained for 10 min. To assess CBFV, we used TCD (MultidopX, DWL; Compumedics, Sipplingen, Germany) with a 2-MHz probe to insonate the bilateral middle cerebral arteries (MCAs) through the temporal skull; the probes were fixed using a head holder provided by the manufacturer. The MCAs were identified according to commonly used criteria.
BP was measured non-invasively by finger plethysmography (Finometer Pro; Finapres Medical Systems, Amsterdam, The Netherlands). A cerebrovascular resistance (CVR) index was calculated as BP/CBFV. To assess the micro-circulation using NIRS, we used the NIRO-200NX device (Hamamatsu Photonics, Herrsching, Germany). This NIRS device emits infrared light at three wavelengths (735, 810, 850 nm); the backscattered light exhibits different intensities after absorption by oxygenated and deoxygenated hemoglobin. Differences in light intensity between the emitted and backscattered light correlate with the concentrations of oxygenated and deoxygenated hemoglobin in the brain's upper layers. We used self-adhesive NIRS probes in which the light-emitting diode (LED) and detecting photodiode were fixed 3.5 or 4 cm apart. The detecting probe was placed over the frontotemporal lobe, and the emitting probe was placed on the frontal skull. The probes with the 4-cm distance between the LED and receiving diode were always placed on the right side; the other probe (with a distance of 3.5 cm between the diodes) was placed on the left side. We used both types of probes to address the potential relevance of differences in penetration depth. After making initial adjustments to determine a baseline hemoglobin concentration, the NIRS device then provides information about changes in the hemoglobin concentrations (in µmol/L) from baseline. Because oxyhemoglobin-derived data provide the best signal intensity for transfer function analyses (Reinhard et al., 2006; Phillip et al., 2012), we restricted our analysis to the oxyhemoglobin-derived data, namely oxygenated Hb (OxHb) and the tissue oxygenation index (TOI), defined as OxHb/(oxygenated + deoxygenated Hb) and reported as a percentage. The end-tidal pCO2 (ETCO2) concentration was measured using the capnograph embedded in the TCD device. To measure ETCO2, the small collecting tube of the capnograph was placed in one nostril. In the other nostril, a larger tube was placed through which an air mixture was added to the breathing air to induce pCO2-related blood flow changes. The ETCO2 for each subject was reported as the mean ETCO2 over the total recording period.
Data Preparation
The minimum recording time was 10 min. BP, CBFV, and pCO2 data were collected at 100 Hz, and NIRS data were collected at 20 Hz. Data were analyzed using Matlab (2015b; MathWorks Inc., Natick, MA, USA). Data were visually inspected for artifacts, and only artifact-free data periods were used. For each subject, the recordings contained bilateral artifact-free periods of 7 min in both modalities (baseline and CO2 administration); therefore, a total of 16 hemispheres were analyzed. After aligning the time series with their common starting time point, each raw data time series was resampled by averaging to 1 s. The coherence and TF estimates of the phase and gain of the different time series were extracted from their respective power auto-spectra or cross-spectra using Welch's averaged periodogram method, with a Hanning window length of 100 s, a window overlap of 50%, and a total Fast Fourier Transformation data length of 400 s. For each subject, the coherence, phase (in radians), and gain (in cm/s/mmHg for the BP–CBFV system, in µmol/L/mmHg or %/mmHg for the BP–NIRS data, and in µmol/L per cm/s or % per cm/s for the CBFV–NIRS data) were calculated over a frequency range of 0.02–0.40 Hz. At each frequency, phase, gain, and coherence were calculated.
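The described spectral pipeline maps directly onto standard signal-processing tools. The following is a minimal Python sketch, assuming two artifact-free series already resampled to 1 Hz (the array names bp and cbfv, and the scipy-based helpers, are illustrative and are not the authors' Matlab code); it mirrors the stated settings (Hanning window of 100 s, 50% window overlap, 400-s FFT length) and the band averaging over 0.07–0.15 Hz with the coherence ≥0.3 reliability threshold used later in the analysis.

import numpy as np
from scipy import signal

def transfer_function(bp, cbfv, fs=1.0, win_s=100, nfft=400):
    """Welch-based transfer function estimate from input (BP) to output (CBFV).

    Returns frequency bins plus gain, phase (radians), and coherence."""
    nperseg = int(win_s * fs)
    noverlap = nperseg // 2  # 50% window overlap
    f, pxx = signal.welch(bp, fs=fs, window='hann', nperseg=nperseg,
                          noverlap=noverlap, nfft=nfft)
    _, pxy = signal.csd(bp, cbfv, fs=fs, window='hann', nperseg=nperseg,
                        noverlap=noverlap, nfft=nfft)
    _, coh = signal.coherence(bp, cbfv, fs=fs, window='hann',
                              nperseg=nperseg, noverlap=noverlap, nfft=nfft)
    h = pxy / pxx  # complex transfer function H(f)
    # Note: the sign of the phase depends on the cross-spectrum convention;
    # the paper signs CBFV leading BP as negative when BP is the reference.
    return f, np.abs(h), np.angle(h), coh

def band_mean(f, values, coh, lo=0.07, hi=0.15, coh_min=0.3):
    """Average a TF parameter over the low-frequency band, keeping only
    bins whose coherence reaches the reliability threshold."""
    mask = (f >= lo) & (f <= hi) & (coh >= coh_min)
    return float(np.mean(values[mask])) if mask.any() else float('nan')

# Usage: f, gain, phase, coh = transfer_function(bp, cbfv)
#        lf_phase = band_mean(f, phase, coh)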
Phase indicates whether the corresponding sine waves (of, eg, 10-s period length, = 0.1 Hz) from BP and CBFV are congruent in time (phase = 0) or dissociated from each other (one earlier or later than the other). Gain indicates how much power is transmitted from the BP wave to the CBFV wave. Coherence indicates how stable the phase relationship between the two waves is over time; 0 indicates no stability, and 1 perfect stability, with a correspondingly high consistency of the calculated phase and gain values.
Statistical Analysis
All values are reported as mean ± standard deviation (SD). Pearson correlations (with 95% confidence intervals (CIs)) and paired t-tests were used for analysis. A p < 0.05 is considered to indicate a significant difference.
RESULTS
The mean values of the relevant physiological and TF parameters at baseline and after CO2 administration are listed in Table 1. Of note, the ETCO2 increase was accompanied by a small BP increase. The CO2-induced CBFV increase (in % from baseline) correlated significantly with the CO2-induced OxHb increase [r = 0.72 (95% CI: 0.36, 0.89); p < 0.005], as well as with the CO2-induced TOI increase [r = 0.69; 95% CI: 0.30, 0.80; p < 0.005]. TF analysis of the BP–CBFV system (Figures 1A–C) showed a high coherence, so that gain and phase estimations are valid. All coherence analyses that included OxHb or TOI showed relevant coherence only in the frequencies between 0.05 and 0.2 Hz (two examples are provided in Figures 2A,B). Because CA mostly operates in the frequency range of 0.07–0.15 Hz, we used the TF parameters of this frequency range for analysis. Only phase and gain values at a coherence ≥0.3 were considered for analysis, as this level was considered to provide reliable data (Meel-van den Abeelen et al., 2014). Regarding the BP–CBFV relationship, CBFV led BP by −0.67 ± 0.20 radians at baseline (Table 1), which increased to −0.51 ± 0.25 radians after CO2. BP initially led OxHb by a phase of 0.38 ± 1.17, which did not significantly change after CO2 administration. In the CBFV–OxHb system, OxHb followed CBFV by a phase shift of 1.21 ± 0.81 radians at baseline; this phase shift changed to −0.05 ± 0.91 after CO2 administration. CBFV–TOI exhibited similar behavior. Compared to baseline, three TF variables changed significantly after CO2 administration, and one variable exhibited a trend. The BP–CBFV phase changes in each hemisphere correlated significantly with the CO2 changes (r = 0.52; 95% CI: 0.03, 0.80; p < 0.05); the best correlations were obtained when phase changes were expressed as percent changes from baseline and the results of both hemispheres were subsequently averaged to yield one result per subject (r = 0.63; 95% CI: −0.13, 0.92; p = 0.09). The CBFV–OxHb phase change correlated with the change in CO2 (r = −0.60; 95% CI: −0.16, −0.84; p < 0.01), such that a greater ETCO2 change yielded a more negative CBFV–OxHb phase. Of note, the probe with the two diodes separated by a 4-cm distance performed slightly better (r = −0.67; 95% CI: −0.93, 0.05; p = 0.06) than the probe with diodes at a 3.5-cm distance (r = −0.53; 95% CI: −0.90, 0.22). The CBFV–TOI phase change did not correlate with the CO2 administration but exhibited a trend with BP changes (r = 0.44, p = 0.08). Remarkably, the BP–CBFV gain was the only gain variable to exhibit a change.
For further interpretation of the gain change, the corresponding change of CVR is necessary. CO2 administration induced a significant CVR decrease (Table 1).
FIGURE 1 | Coherence (A) and transfer function spectra (gain, B; phase shift, C) of the blood pressure (BP)–cerebral blood flow velocity (CBFV) system under normo- and hypercapnia. The high coherence indicates a highly stable phase relationship over the whole frequency range. For convenience, all three panels show the means only, without the SD range of the two curves.
DISCUSSION
In recent years, the use of TF to estimate CA via analysis of the BP–CBFV relationship has clarified the dependence of results on several technical aspects (Meel-van den Abeelen et al., 2014), including how signals are averaged (beat-by-beat or raw wave, sample frequency), the window selected for FFT, and decisions regarding signal smoothing or the analysis of relative values (for details, see Claassen et al., 2016). Our baseline results regarding the phase shift between BP and CBFV are in agreement with the observed spread (Table 2; Zhang et al., 1998; Panerai et al., 1999; Müller et al., 2003; Reinhard et al., 2003). Under pathological conditions, when the phase shift approaches 0°, this phase shift spread narrows. Under the two evaluated pathological conditions (Table 2), the most striking difference between our results and those of Reinhard et al. (2006) is the finding that the phase shift between CBFV and OxHb was 0° after CO2 administration but is 65° in a group of patients with occlusive artery disease. One explanation for this discrepancy might be that CO2 diminishes CA rapidly, whereas the mostly chronic process associated with carotid artery occlusive disease allows the development of a graduated cerebrovascular response that depends on blood flow through a stenotic vessel and the development of sufficient collateral blood flow (Müller and Schimrigk, 1996; Reinhard et al., 2003). Compared to the baseline phase shift of 84° between CBFV and OxHb, the phase shift of 64° indicates a reduced but not abolished CA. In terms of time in seconds, CO2 led to a direct transformation of macro-circulatory to micro-circulatory blood flow, whereas this process remained under CA control in the occlusive vessel group. As shown by Reinhard et al. (2006), CBFV is ahead of BP and has to be signed mathematically by a negative phase shift when BP is the reference. From there on, OxHb follows CBFV with positive phase shifts. Therefore, the cerebrovascular response to CO2 demonstrated that the (negative) phase between BP and CBFV shifts toward 0° with increasing CO2 (Zhang et al., 1998; Panerai et al., 1999; Müller et al., 2003), while the positive phase shifts of the CBFV–OxHb system are reduced toward 0. Regardless of which parameter is used, with the present interpretation of the TF model of CA, a phase shift of 0 or near zero indicates that the dependent flow (CBFV or OxHb) follows the driving force (BP or CBFV) without delay, which is equivalent to an abolished CA. Such a situation is pathophysiologically considered a highly risky condition for the brain, because a (therapeutically) unanswered BP drop can be followed by cerebral ischemia. Our observation is that the response distribution was more widely spread in the CBFV–OxHb system than in the BP–CBFV system, as indicated by the range of their respective mean phase changes (BP–CBFV: −0.67 to −0.51 radians; CBFV–OxHb: 1.21 to −0.05 radians).
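Because the discussion moves between radians, degrees, and seconds, a worked conversion may help; the snippet below assumes the standard relation delay = phase/(2πf), evaluated at the center of the CA band (0.1 Hz, ie, a 10-s wave period).

import math

def phase_to_delay(phase_rad, freq_hz=0.1):
    """Convert a TF phase shift (radians) at a given frequency into the
    equivalent time delay in seconds: delay = phase / (2*pi*f)."""
    return phase_rad / (2 * math.pi * freq_hz)

# At 0.1 Hz: 0.67 rad ~ 1.1 s, 1.21 rad ~ 1.9 s, 84 deg (1.47 rad) ~ 2.3 s
for ph in (0.67, 1.21, math.radians(84)):
    print(f"{ph:.2f} rad -> {phase_to_delay(ph):.2f} s")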
The correlation analysis results are slightly in favor of the CBFV–OxHb system, so one could suggest that this system can grade CA disturbances more precisely. One example of its suggested advantage could be that the flow-to-flow model displayed CA failure (0 radians) at times when the pressure-to-flow system indicated that some autoregulation was still present (−0.51 radians). Because the flow-to-flow system reflects CBF changes more closely than the pressure-to-flow system, the flow-to-flow systems should be further evaluated. One question is whether the broader phase response distribution is indeed more precise than the BP-flow system in grading CA disturbances. A second question is whether our suggestions can be applied to diseases with arterial diameter changes (eg, subarachnoid hemorrhage); initial experience using TOI seems promising (Zweifel et al., 2010). Although the CBFV–TOI system demonstrated a clear CO2-induced change, it was a surprise to recognize that this change did not correlate with ETCO2 but with BP changes. TOI is calculated from both OxHb and deoxygenated hemoglobin via the total hemoglobin concentration. Deoxygenated hemoglobin is mostly present in the venous part of the micro-circulation. Speculatively, it can be assumed that the venous system is regulated by additional mechanisms apart from BP. Regarding gain changes, we did not observe a gain transfer from macro- to micro-circulation in the flow-to-flow systems; a significant gain change was only present in the BP–CBFV system. Similar results were described by Phillip et al. (2012). Gain is considered a function of the vasculature and shows an inverse relationship to cerebrovascular resistance (Aaslid et al., 1989; Tiecks et al., 1995; Serrador et al., 2005; Zhang et al., 2009). Ideally, the product of gain and CVR remains approximately constant (Zhang et al., 2009). CVR is considered a myogenic function of the smooth vessel cells (Aaslid et al., 1989; Schubert and Mulvany, 1999; Zhang et al., 2009). In an animal experiment, Kolb et al. (2007) inhibited the myogenic action with Ca2+ channel antagonists; in the animals (rats) with Ca2+ channel inhibition, the experimentally induced gain decrease was significantly less than that in the control condition without Ca2+ channel inhibition. Recent similar observations in human beings (Tzeng et al., 2011; Tan et al., 2013) strengthen the assumption that gain is also a myogenic function. On the other hand, this means that the flow-to-flow systems do not describe the pressure-dependent autoregulatory processes completely. Our study had some limitations. Apart from the technical limits mentioned above, we address two others. First, we used probes in which the diodes were separated by different distances. As indicated by our correlation analysis of CBFV–OxHb phase changes, probes in which the diodes were separated by a 4-cm distance yielded better results than those in which the diodes were separated by a 3.5-cm distance. A similar phenomenon was described for a probe containing diodes separated by a 3-cm distance (Murkin and Arango, 2009). A shorter distance between the two diodes causes less brain parenchyma and more skin to be involved in the measured changes in oxygenation. Therefore, one might speculate that our overall results, when using only probes with 4-cm distances, could be closer to the findings reported by Reinhard et al. (2006) and Phillip et al. (2012). Second, we reported results within a particular frequency range (0.07–0.15 Hz), whereas Reinhard et al. (2006) and Phillip et al.
(2012) reported results obtained at a distinct frequency of 0.1 Hz. This difference may have introduced a systematic error into our results by including time periods <10 s in our analysis, which results in a shorter overall time delay/phase shift. Moreover, the methodological differences in performing TF, described in the first paragraph of this section, could lead to differences in the absolute phase shift values. Nevertheless, our overall results exhibit the same apparent direction as those of Reinhard et al. (2006) and Phillip et al. (2012).
CONCLUSION
We assessed CA by two approaches: the BP-to-flow and the flow-to-flow transition. The flow-to-flow system is characterized by phase (time delay) changes only; the BP–CBFV system reacts additionally with gain changes, which correspond most likely to a myogenic reaction. Using correlation analysis between ETCO2 changes and the respective phase changes, a descriptive interpretation indicates that the flow-to-flow system CBFV–OxHb may allow a more detailed grading of CA than the BP–CBFV system by providing a broader phase shift response distribution. As an example, we found that CA was already abolished in the flow-to-flow system while the BP-to-flow system still showed autoregulation; in this situation, the risk of further brain damage was more clearly displayed by the flow-to-flow system than by the BP-to-flow system. However, which of the two approaches is physiologically and, hence, clinically more relevant needs further studies in different patient populations.
AUTHOR CONTRIBUTIONS
MM: data and statistical analysis, study design, writing, intellectual content. MÖ: data collection, study design. AM: data analysis, intellectual content. JL: data interpretation, writing, intellectual content.
2016-06-17T23:05:48.924Z
2016-05-09T00:00:00.000
{ "year": 2016, "sha1": "86d52bf78c8bd2eecab47aa8b6cd62d926f2f5d3", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fphys.2016.00162/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "86d52bf78c8bd2eecab47aa8b6cd62d926f2f5d3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
55083842
pes2o/s2orc
v3-fos-license
THE PERCEIVED IMPACT OF RESTRUCTURING ON SERVICE QUALITY IN A HEALTH CARE ENVIRONMENT
This study evaluates health care employees' perceptions of service quality in a hospital environment after the process of restructuring and assesses whether their perceptions are influenced by biographical profiles. A sample of 143 clinical and non-clinical employees from three of the largest regional hospitals within the Ministry of Health in Lesotho was drawn using cluster sampling. Data was collected using an adapted version of SERVQUAL, whose psychometric properties were statistically determined. Data was analyzed using descriptive and inferential statistics. The results indicate that employees were fairly convinced that the process of transformation undertaken in the health care organization led to enhanced service quality in terms of improved empathy, assurance, responsiveness, tangibles and reliability, although in varying degrees, and reflect areas for improvement.
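The abstract notes that the adapted SERVQUAL instrument's psychometric properties were statistically determined; a common check of this kind is internal consistency per dimension via Cronbach's alpha. The following is a minimal Python sketch under that assumption (the scoring matrix is hypothetical, and this is not the study's actual analysis code).

import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents-by-items matrix of Likert scores:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical five-item "reliability" dimension scored by six respondents
scores = [
    [4, 4, 5, 3, 4],
    [3, 3, 4, 3, 3],
    [5, 4, 5, 4, 5],
    [2, 3, 2, 2, 3],
    [4, 5, 4, 4, 4],
    [3, 3, 3, 2, 3],
]
print(round(cronbach_alpha(scores), 2))  # values near/above 0.7 suggest acceptable consistency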
Introduction
Health care is a government priority, as it is in the interest of every country to have a healthy nation. Health services are fundamental to every health system, and identifying strategies to improve health services is vital (Loevinsohn & Harding, 2005). There are some key resources that health service delivery relies on, such as motivated staff, equipment, information, finance and adequate drugs. Some of the aspects that assist in health service delivery are improving access, coverage and quality of health services, and these can be determined by the ways in which services are organized and managed and by the incentives influencing providers and users (Clancy, 2006). Loevinsohn and Harding (2005) note that many countries allocate more resources towards health services which do not necessarily address the problem of service delivery. Mehrotra and Jarrett (2002) assert that health service delivery is fundamental for the community, specifically for those people who are in rural areas where health services are not accessible. They further emphasize that the quality of health services provided at grassroots level is poor due to the lack of adequate resources and insufficient political will to ensure proper functioning. Improving health service delivery requires efforts from stakeholders within the health systems, such as policy makers in the ministries of health, finance and public administration, health service managers and workers, public and private providers, clients, as well as the communities (WHO, 2012).
This paper aims to evaluate health care employees' perceptions of service quality in a hospital environment after the process of restructuring and to assess whether these perceptions are influenced by biographical profiles. Walshe and Smith (2006) describe that the provision of health care services within a regional or national health care system can be classified into three sectors, namely, primary, secondary and tertiary care. The sectors can be modeled as subsystems of the whole health care system, even though in some countries the boundaries between these sectors are often unclear and many times shift as health services provision moves from one sector to the other. The three sectors overlap, and a patient can be expected to move from one sector to another depending on his or her clinical needs. For example, a patient may begin in the first sector, where the patient is diagnosed and where primary health care takes place. The patient can later be transferred to secondary care to be hospitalized or to access some services not provided at primary care level. Tertiary care plays an important role in cases where a patient requires specialized care which cannot be provided at secondary level. The movement through the sectors increases costs for patients. This situation puts more pressure on health organizations to manage increasing health care costs and requires attempts to develop the capacity of primary health care providers and to facilitate the ability of patients to provide self-care (Walshe & Smith, 2006).
Constraints in delivering health services
Oliveira-Cruz, Hanson, and Mills (2003) maintain that some of the constraints within health service delivery operate on five levels, namely, the community and household level, the health services delivery level, the health sector policy and strategic management level, public policies cutting across sectors, and environmental and contextual characteristics. Other constraints that were analyzed include the lack of demand for effective interventions and barriers to the use of effective interventions. In addition, constraints on health service delivery such as the shortage and distribution of qualified staff, poor technical guidance and supervision, inadequate drug and medical supplies, lack of equipment and infrastructure, as well as poor accessibility are issues of concern which have to be managed by health sectors. Travis, Bennett, Haines, Pang, Bhutta, Hyder, Pielemeier, Mills & Evans (2004) add that financing and the use of information are further problems facing health systems. Oliveira-Cruz, Kurowski, and Mills (2003) emphasize that improved health care services can be realized through national and international commitment to enlarging access to priority health interventions.
Strategies to strengthen and improve the delivery of health services
According to Shi and Singh (2008), health professionals should not only focus on their roles at the workplace but should also have a better understanding of forces outside their profession that can affect their current and future practices. Furthermore, they emphasize that policy makers should not only focus on one health care sector when they deal with certain problems, as the impact may be felt in the entire system. Peters, EL-Saharty, Siadat, Janovsky, and Vujicic (2009) suggest that some of the strategies that can be implemented to strengthen health services are the expansion and involvement of community health workers, the establishment of user fees and community management, decentralization, performance incentives, social marketing and the reorganization of outreach workers. Nauert (2002) mentions that the health care industry has been denying patients quality health services and that there are certain business strategies that health industries had to put in place to curb such poor service delivery. Such business initiatives include key components like environmental assessments of market wants, needs and demands, strengths and weaknesses, as well as external threats and opportunities of the health industries. According to Healey and Kuehn (2011), technology plays an important role as an innovation element in health service delivery, for the reason that records are easily kept and electronic communication assists in collecting, analyzing and disseminating health-related information. The other two innovation elements are a business model for the health care system in which more emphasis will be on wellness and prevention and on performance outcomes, and the development of a value network that is sustainable and will probably need an external catalyst. Battacharyya, Khor, Mcgahan, Dunne, Daar, & Singer (2010) suggest marketing strategies that can be implemented to improve health services, especially for the poor, such as social marketing, tailoring services to the poor, and franchising with high volume and low unit cost. Social marketing involves implementing marketing techniques to attain behavioural change by creating training and peer education programs that concentrate on behavior change in schools, prisons, the sex industry and the public. Tailoring services to the poor focuses on tailoring services and products towards the needs of the poor, while franchising with high volume and low unit costs concentrates on the enlargement and sustainable distribution of products and services of specific quality in reproductive health at low costs. Battacharyya et al.
(2010) add that other marketing activities involve operating activities and financial strategies which aim to provide products and services at lower costs while maintaining quality of service. These financial strategies include lowering operating costs through simplified medical services, high volume and low unit costs, cross-subsidization and income-generating mechanisms. Leggat, Bartram, Casimir, and Stanton (2010) similarly emphasize that improvement in health service delivery substantially relies on job satisfaction and the empowerment of health workers, and found that improved autonomy, decision making and empowerment were linked to lower patient mortality rates. Similarly, Mukherjee and Malhotra (2006) emphasize that freedom to plan one's work, participation in decision making, role clarity and psychological support from supervisors motivate employees to improve service delivery. The need for efficient health services is a concern for countries worldwide, but in the end people have to be provided with quality services within an integrated delivery network (a coordinated continuum of services) that is not fragmented (Ramagem, Urrutia, Griffith, Cruz, Fabrega, Holder & Montenegro, 2011). Some of the benefits of an integrated delivery network include improved access to health services, decreased inappropriate costs, prevention of duplication of infrastructure and services, reduction of costs and responding better to people's health needs (Ramagem et al., 2011). Travis et al. (2004) elaborate that strengthening health systems is fundamental to attaining enhanced service delivery, but stress that strengthened health systems cannot achieve expected results unless they are effective. Shortell (2004) describes that health systems require advanced knowledge that should be put into action to attain improved health services. Travis et al. (2004) argue that the problem is not putting the existing knowledge into practice but identifying effective health systems that will lead to expected outcomes. Vertical approaches (planning, staffing, management and financial systems) and horizontal approaches (operating within the existing health system's structures) are used with the aim of improving services. There is a need for policy relevance and innovation techniques, and more focus on strengthening commitment and investment in the research capacity of developing countries (Travis et al., 2004). Stable (2000) argues that health care services can be improved through transparent processes during the engagement of a new model for effective delivery of service, and highlights four phases that should be considered for effective service delivery, namely, identification of a problem, a community profile (the population group to whom the services are delivered), implementation (the services required, quality of services, costs and challenges must be known) and evaluation (expectations of what is required should be clear).
Effective communication and consultation are regarded as key elements that need careful consideration in any change process taking place in an organization (Stable, 2000). Wright and Baker (2005) maintain that using appreciative inquiry allows an easy transition from conversation to action and provides energy as well as motivation for health workers to improve health services, as it allows them to feel a sense of ownership and responsibility for both their decisions and actions. Similarly, Conner and Finnemore (2003) suggest same time/same place (face-to-face collaboration) and same time/any place methods of communication (phone and video conferencing, team rooms and digital collaborative technology). Although the former saves time and costs, it limits social and physiological benefits, but the latter is beneficial for health care providers where shift work is concerned and results in improved work efficiency and improved service delivery. Furthermore, effective communication across the structural departments within an organization enhances successful implementation of its plans (Greenhalgh, Robert, Macfarlane, Bate, & Kyriakidou, 2004). McCallin (2001) explains that teamwork cannot function well without proper communication among health workers; hence, team effectiveness relies on effective communication. According to Robinson, Gorman, Slimmer & Yudkowsky (2010), factors that contribute to effective communication among nurses and physicians include, but are not limited to, clarity and precision of the message that relies on verification, collaborative problem solving, maintenance of mutual respect and authentic understanding of the unique professional role. Working together as a team to solve problems and respect for one another contribute greatly to good relationships and effective communication (Robinson et al., 2010). Unlike other countries that have opted for other health reforms for improved health services, Cambodia used contracting as an approach to improve health service delivery (Soeters & Griffiths, 2003). Campinha-Bacote (2002) acknowledges the various models that have emerged to overcome the challenges of health service delivery and believes that the cultural competence model may add more value to health service delivery. She explains her model as a continuous process whereby health care providers should make an effort to attain the ability to work within the cultural context of the customer. The model requires health workers to be culturally competent. The model is categorised into five parts, namely, cultural awareness, cultural knowledge, cultural skill, cultural encounters and cultural desire.
Cultural awareness is about self-examination and in-depth exploration of one's own cultural and professional background. It includes an individual's recognition of biases, prejudices and assumptions about other people who are different, and helps to avoid the risk of cultural imposition. Cultural knowledge involves learning or acquiring an educational foundation about different cultural and ethnic groups. Health providers have to take into consideration knowledge of clients' health-related beliefs and cultural values, disease incidence and prevalence, and treatment efficacy, which will eventually allow them to understand the clients' world view (Campinha-Bacote, 2002). Brown and Busman (2003) share the experience of Saudi Arabia, where more expatriates are employed due to high population growth and the low number of health workers, which affects health service delivery due to the cultural and communication barrier between patients and expatriates. Some mechanisms, such as education and training for Saudi Arabian health workers, have been implemented, but they cannot reach the numbers needed. On the other hand, cultural skill assists health providers in gathering relevant data concerning the client's prevailing problem and in performing a culturally based physical assessment. Cultural encounters provide an opportunity for health providers to interact with clients from different cultural backgrounds and, therefore, help them not to stereotype other individuals' culture and values. Through cultural encounters, health workers can identify clients' linguistic needs and find an interpreter where necessary. Cultural desire has to do with health providers' care towards clients, since people are more concerned about how much one cares than about how much one knows (Campinha-Bacote, 2002).
Ensor and Cooper (2004) mention that some of the interventions implemented by different countries brought significant changes in health service delivery. Those interventions are education and information provided to community educators through training, funds provided to reduce transport costs for patients who travel to health centres, and maternity waiting homes near district hospitals. Community educators are basically women in the target communities who can encourage or influence families regarding the importance of maternal care and help to facilitate admission to hospital during emergencies. Basic education plays a key role, to some extent, in increasing the desire for and actual use of health services. Education also assists individuals in making informed decisions concerning their lifestyle, that is, they can take care of their health themselves without relying on health services. This implies the need to implement methods of improving literacy. Countries such as Zimbabwe and Ethiopia report high use of hospitals and decreased rates of complications for subsequent deliveries as a result of these interventions. By contrast, Ghana and Zaire also established maternity homes, but these were not positively received because they were located in isolated areas without facilities to prepare food. Consultation with the community is therefore essential before implementation of any intervention. Burundi invested in roads with the aim of improving access to health care. Bangladesh implemented door-to-door provision of family planning services, which was successful in overcoming consumer costs and social objections to women obtaining services outside the home. However, it has since been reviewed, as it was expensive. Recent policy re-oriented the focus to delivery of services at community clinics as opposed to door-to-door provision (Ensor & Cooper, 2004). Powers and Jack (2008) advocate the use of volume flexibility, which is categorized into internal and external strategies. Volume flexibility has to do with the organization's ability to efficiently manage output levels in response to fluctuations in demand for its current products or services without incurring high transition penalties or large changes in performance outcomes. The internal strategies in this regard are based on the resources, processes and capabilities that the organization owns. On the other hand, external strategies that include outsourcing (provided that the outsourcing strategy is understood) and strategic alliances, risk pooling, managed care controls, as well as pricing and rationing strategies, can be implemented. Demand management strategies (promotion programs, induced demand techniques) can be used to reduce demand uncertainty (Powers & Jack, 2008).
During the transformation that took place in South Africa in 1994, the new government established three service delivery initiatives with the aim of improving service delivery. The first initiative was Bathopele, which means "people first". This initiative was published in 1997 and has eight principles: (1) to regularly consult with customers about the level and quality of public service they receive; (2) to set service standards so that people are aware of the level and quality of service they can expect to receive; (3) to increase access to services, allowing citizens equal access to the services to which they are entitled; (4) to ensure a high level of courtesy and consideration; (5) to provide more and better services and information so that people are well informed of the services they ought to receive; (6) to increase openness and transparency about services; (7) to remedy failures and mistakes; and (8) to give the best possible value for money (economical and efficient provision of services) (Russell & Bvuma, 2001). The second initiative was Public Private Partnerships, which aimed at improving services and cost effectiveness. The third initiative was Alternative Service Delivery. It includes information technology, significant management improvement, accelerated training and development of staff at all levels, redeployment of budget resources to higher priority areas, and effective review and accountability measures, and seeks to focus attention on innovative delivery solutions at the customer end (Russell & Bvuma, 2001).

Challenges experienced in health care service delivery
Some of the challenges faced in health care delivery systems are improving quality, increasing access and reducing costs (Andaleeb, 2001). McIntyre and Klugman (2003) share the challenges that South Africa experienced during the restructuring of health services. They explain that there is a lack of effective communication between senior provincial officers and local government representatives, especially in policy decisions, which eventually affects their daily planning and delivery of services. Jacobs, Lauderdale, Meltzer, Shorey, Levinson & Thisted (2001) take this concept further by pointing out that communication is not only a barrier among health workers but also a problem between health workers and patients. Jacobs et al.
(2001) explain that a language barrier still exists in many hospitals. Various studies reveal that patients who are unable to speak the English language well receive unsatisfactory health services, and this denies them preventive and other services. Although interpreter services have been introduced, they are not adequate in number, and in some cases untrained non-clinical employees have to interpret, which results in negative clinical consequences such as breaches of patient confidentiality, misdiagnosis and inadequate or inaccurate treatment (Jacobs et al., 2001). Kulwicki, Miller, and Schim (2000) agree that there is a lack of bilingual health care providers and that there is a need for cultural awareness; for example, in some countries, culturally, female patients cannot be diagnosed by male health providers. Duncan and Breslin (2009) assert that it is difficult for health workers to provide innovative services due to inadequate incentives. Some of the challenges facing the United States of America within its health care system concern the value of services delivered, including quality and cost features (Shortell, 2004). Shortell (2004) points out that health systems also face challenges at the managerial level. Also, several health reforms do not have explicit objectives, and this makes it difficult to evaluate success and failure in attaining the objectives (Preker & Harding, 2003).

Decentralization of health services: experiences from international and local countries
There are benefits to be obtained from decentralization of services, such as being able to make decisions based on people's needs, since decision makers are much closer to the people and their needs. This helps local decision makers to align services and public expenditure with local needs and preferences. Eventually this may lead to improved service delivery (Saavetra-Costas, 2009). The decentralization involved in South Africa's process of restructuring health services took two forms, namely, the devolution of authority to provincial and local governments, and the decentralization of services from provincial health departments to health districts. National government in this regard was responsible for policy development, while provinces had responsibility for service provision and hospital services, with more focus on curative primary health care. Some of the challenges faced during the restructuring process were uncertainties about responsibilities and lines of accountability, health workers having to account to both provincial and facility managers, poor staff morale and working conditions, problems concerning infrastructure and the availability of drugs, and staff attitudes towards patients. According to the analysis of the study, people believe that the focus was on restructuring and transition and less on service delivery (McIntyre & Klugman, 2003).
Regmi, Naidoo, Greer, and Pilkington (2010) mention that many countries opt for decentralization of services with the aim of improving service delivery in aspects such as accessibility, reduction of costs and community participation. A successful decentralization of health services experienced by Nepal reports the benefits of improved availability of drugs due to pressure from local representatives, the ability of the community to communicate its needs and expectations to public health representatives locally, reduced absenteeism because local representatives are physically present to assess the situation, and an increase in community satisfaction with the services (Regmi et al., 2010). However, a failure in Nepal was that little fiscal decentralization took place, which meant that most resources were still controlled and managed at central level. Sakyi (2010) identified some of the shortfalls during the decentralization of services that took place in Ghana as a lack of effective and timeous communication, inadequate information concerning reforms, a lack of participation of and consultation with health workers, the top-down style of communication, and the communication gap between district managers and relevant stakeholders, which affected service delivery. Eggleston, Ling, Qingyue, Lindelow, and Wagstaff (2008) report that changes in China's health sector were only seen in urban areas and only improved quality to a certain degree, and the results indicate that China requires improvement in health service delivery with regard to quality, responsiveness to patients, efficiency and equity. According to Ramani and Mavalankar (2006), India implemented health reforms in 2004 across nine states, and reported improvements include the establishment of a logistics management system to coordinate the purchase, storage and distribution of drugs and medicines. Following its success, the system was replicated in over 450 government hospitals in one of the states where health reform took place. The Telemedicine Centre plays a key role in improving health service delivery. The country, however, still experiences challenges, especially in rural areas, in terms of access, affordability and equity of services. According to Aksan, Ergin, and Ocek (2010), Turkey embarked on health reforms with the aim of addressing issues such as inequalities in access to health services and fragmentation in the financing and delivery of health services. Restructuring of the Ministry of Health in Turkey was done with the aim of enhancing its stewardship function. Some health services, such as laboratory and radiodiagnostic services, were outsourced to improve health services. The government also incorporated private hospitals by increasing incentives for investment.
Measuring the quality of services
Service quality is defined as "the valuation that the consumer makes of the excellence or superiority of the services" or the "discrepancy between consumers' perceptions of services offered by a particular firm and their expectations about firms offering such services" (Zeithaml, Berry, & Parasuraman, 1988). According to Ramsaran-Fowdar (2008), theoretical perspectives on service quality were developed in the 1980s. There are two types of service quality, namely, technical quality, which refers to core service delivery or the service outcome, and functional quality, which involves service delivery processes or the manner in which customers receive the service. Lu and Liu (2000) add that in the health care environment, technical quality involves factors such as average length of stay, re-admission rates, infection rates and outcome measures, whilst functional quality includes factors like doctors' and nurses' attitudes towards patients, cleanliness of facilities and the quality of food given to patients. Jensen and Markland (1996) emphasize that it is critical for individuals who deliver services to be assessed, because good service is determined by their performance and any changes made affect their work. Jensen and Markland (1996) advocate that organizations should evaluate quality measurement systems like SERVQUAL and identify the one that will best suit the needs of the organization. Babakus and Mangold (1992) believe that SERVQUAL has been known for its potential usefulness in the hospital environment and mention that service industries identify quality as the main determinant of cost reduction, market share and return on investment. This instrument can also be used internally to understand employees' perceptions of service quality with the aim of improving services (Fedoroff, 2012). The instrument comprises five dimensions: (1) tangibles - physical facilities, equipment, and the appearance of personnel; (2) reliability - the ability to perform required services dependably and accurately; (3) responsiveness - willingness to assist customers and provide prompt services; (4) assurance - the knowledge and courtesy of employees and their ability to inspire trust and confidence; and (5) empathy - the caring and individual attention given to customers (Carrillat, Jaramillo & Mulki, 2007). Assessing these dimensions in a health care environment is critical, as they have repeatedly surfaced as challenges and areas for improvement in the preceding discussions. Hence, this paper aims to evaluate health care employees' perceptions of service quality (tangibles, reliability, responsiveness, assurance, empathy) in a hospital environment after the process of restructuring and to assess whether these perceptions are influenced by biographical profiles (gender, job category, age, tenure, qualification) respectively.
Respondents
In this study the population comprised employees from three of the largest regional hospitals within the Ministry of Health in Lesotho who had been in the employ of the organization from before the restructuring, making up a population of approximately 800 clinical and support staff. It must be noted that management for clinical and support staff is already included in the population of 800. The researcher used a sample of 143 employees. The adequacy of the sample was determined using the Kaiser-Meyer-Olkin Measure of Sampling Adequacy (0.883) and Bartlett's Test of Sphericity (1696.124, p = 0.000) for the five sub-dimensions assessing service quality after the process of transformation, which respectively indicated suitability and significance. The results indicate that the normality and homoscedasticity preconditions are satisfied. A computer programme was used to select employees from the Ministry of Health staff list who were in the organization's employ before and after the restructuring took place. Managers of the respective departments distributed the questionnaires to the selected subjects during one of their weekly meetings.

The composition of the sample may be described in terms of age, gender, job category, tenure and education. With regard to age, 36.4% of the participants were between 26 and 35 years old, followed by those between 36 and 45 years (33.6%), thereby indicating that the majority of the sample (70%) was between the ages of 26 and 45 years old. There were more females (81.1%) than males (18.9%) and more clinical services staff (72%) than non-clinical services employees. The majority of the respondents had served the organization for 11-20 years (33.6%), followed by 1-5 years (25.9%) and 6-10 years (23.8%), thereby indicating that 83.3% of the sample have a tenure of 1-20 years. The majority of the participants hold a diploma (51%) and a further 27.3% hold a degree.

Measuring Instrument
Data was collected using a questionnaire that was adapted from both SERVQUAL, developed by Parasuraman, Zeithaml and Berry (1988), and SPUTNIC (undated), and comprised two sections. Section A comprised biographical data relating to age, gender, job category, tenure and education and was measured using a nominal scale. Section B consisted of 22 items pertaining to employees' perceptions of the sub-dimensions of service quality (tangibles, reliability, responsiveness, assurance, empathy) after the process of restructuring. Subjects were reminded that the items relate to their perceptions of the sub-dimensions of service quality after the process of restructuring. Section B was measured using a five-point Likert scale ranging from (1) strongly disagree, (2) disagree, (3) neither agree nor disagree, (4) agree, to (5) strongly agree. In-house pretesting was adopted to assess the suitability of the instrument. Pilot testing was also carried out using 12 subjects, selected using the same procedures and protocols adopted for the larger sample. The feedback from the pilot testing confirmed that the questionnaire was appropriate in terms of relevance and construction.
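Since the 22 Section B items roll up into five sub-dimension scores, a short illustration of the scoring and reliability logic may be useful. The sketch below, in Python, is a minimal illustration rather than the study's actual analysis: the item-to-dimension grouping follows the conventional SERVQUAL allocation, and the column names and simulated responses are hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical grouping of the 22 Section B items into the five SERVQUAL
# sub-dimensions (conventional allocation, not the study's own item map).
DIMENSIONS = {
    "tangibles":      ["q1", "q2", "q3", "q4"],
    "reliability":    ["q5", "q6", "q7", "q8", "q9"],
    "responsiveness": ["q10", "q11", "q12", "q13"],
    "assurance":      ["q14", "q15", "q16", "q17"],
    "empathy":        ["q18", "q19", "q20", "q21", "q22"],
}

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal-consistency reliability of a set of Likert items:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def score_servqual(responses: pd.DataFrame) -> pd.DataFrame:
    """Mean score per respondent for each sub-dimension (1-5 scale)."""
    return pd.DataFrame(
        {dim: responses[cols].mean(axis=1) for dim, cols in DIMENSIONS.items()}
    )

# Simulated responses for demonstration: 143 respondents, 22 items, 1-5 scale.
rng = np.random.default_rng(0)
data = pd.DataFrame(
    rng.integers(1, 6, size=(143, 22)),
    columns=[f"q{i}" for i in range(1, 23)],
)

scores = score_servqual(data)
print(scores.mean().sort_values(ascending=False))  # descending mean scores
print(f"Cronbach's alpha (all 22 items): {cronbach_alpha(data):.3f}")
```

With real responses in place of the simulated data, the per-dimension means would correspond to the descending ranking reported in Table 1, and the alpha to the internal-consistency coefficient reported below.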
Measures/statistical analysis of the questionnaire
The validity of the questionnaire was assessed using factor analysis. A principal component analysis was used to extract initial factors, and an iterated principal factor analysis was performed using SPSS with an orthogonal Varimax rotation. In terms of the validity of the section relating to perceptions of service delivery after the process of transformation, the five service quality dimensions (assurance, reliability, tangibles, empathy, responsiveness) were generated, with the respective eigenvalues being greater than unity (4.664, 3.056, 2.756, 2.601, 1.832). The items assessing perceptions of the transformation process were also reflected as having a very high level of internal consistency and reliability, with the Cronbach's Coefficient Alpha being 0.922.

Statistical analysis of the data
Descriptive statistics (means, standard deviations) and inferential statistics (correlation, Mann-Whitney test, Kruskal-Wallis ANOVA) were used to evaluate the objectives and hypotheses of the study.

Descriptive Statistics
The perceptions of health care employees regarding the sub-dimensions of service quality (tangibles, reliability, responsiveness, assurance, empathy) were assessed by asking respondents to rate the various aspects of service quality using a 1 to 5 point Likert scale. The results were processed using descriptive statistics (Table 1). The greater the mean score value, the more positive the perceptions of service delivery after the process of transformation. From Table 1 it is evident that the respondents have varying views of the sub-dimensions of service quality after the process of transformation, which in descending order of mean score value is: empathy, assurance, responsiveness, tangibles and reliability. From the results it is evident that employees believe that after the restructuring health care workers have improved levels of empathy, followed by assurance, responsiveness, tangibles and, lastly, reliability. Whilst respondents have a mostly positive view of the impact of restructuring on service delivery, when compared against a maximum attainable score of 5 it is evident that there is room for improvement in each of the sub-dimensions of service quality. In order to assess where these improvements lie, frequency analyses were conducted.

In terms of empathy, respondents believed that as a result of the restructuring the hospital/clinic (64.6%) and hospital personnel (52.8%) are able to give patients individual attention, and 57% felt that the hospital/clinic has the patient's best interests at heart. Furthermore, whilst 49.2% of the respondents agreed that the restructuring has enabled personnel of the hospital/clinics to better understand the specific needs of patients, 38.7% were not convinced that the restructuring has led to such improvement. In addition, 42.9% of the respondents are uncertain whether the process of restructuring has made the operating hours of the hospital/clinics convenient to all patients. In terms of assurance, respondents believed that as a result of the restructuring process personnel have learnt to behave in ways that instill confidence in patients (51.1%), are consistently courteous to patients (56.1%) and have the knowledge to answer patients' questions (71.8%). However, 51.7% of the respondents were not convinced that the process of restructuring has made patients feel safer in their interactions with the hospital/clinic.
In terms of responsiveness, respondents agreed that after the restructuring the personnel of the hospital/clinic are able to tell patients exactly when services will be performed (67.6%) and are always willing to help patients (62.7%). Furthermore, whilst 35% of the respondents felt that as a result of the restructuring personnel in the hospital/clinic are never too busy to respond to patients' requests, 37.1% were not convinced and a further 37% disagreed. In addition, a significant percentage of the staff were not convinced that after the restructuring personnel of the hospital give prompt service to patients (43.7%), and a further 22.5% disagreed that they do. In terms of tangibles, respondents believed that as a result of the restructuring the physical facilities at the hospital/clinic are visually more appealing (68.3%), personnel in the hospital/clinic are neat in appearance (61.5%) and materials associated with the service, such as pamphlets or statements, are visually appealing (64.8%). However, whilst 37.1% of the respondents indicated that after the restructuring the hospital/clinic has modern equipment, a significant 46.9% disagreed and a further 16.1% were uncertain. In terms of reliability, respondents reflected that as a result of the restructuring, when a patient has a problem the hospital/clinic shows a sincere interest in solving it (60.1%) and the hospital/clinic insists on error-free records (64.6%). Furthermore, whilst 33.8% of the respondents agreed that the restructuring has ensured that when the hospital/clinic promises to do something by a certain date it does it, a significant percentage disagreed that this happens (38.7%) and a further 27.5% were uncertain. In addition, whilst 64.6% of the respondents indicated that the restructuring has enabled the hospital/clinic to provide its service at the time that it promises to do so, 31.5% disagreed and a further 28.7% were uncertain. Also, 38% of the respondents were uncertain that as a result of the restructuring the hospital/clinic gets things right the first time, and 41.6% indicated that they do not get things right the first time.

Inferential Statistics
Inferential statistics were conducted to test the hypotheses of the study relating to perceptions of the sub-dimensions of service quality after the process of restructuring.

Relationship between sub-dimensions of service quality
Hypothesis 1. There exist significant intercorrelations amongst the sub-dimensions of service quality (tangibles, reliability, responsiveness, assurance, empathy) respectively (Table 2). Table 2 indicates that the sub-dimensions of service quality (tangibles, reliability, responsiveness, assurance, empathy) significantly intercorrelate with each other at the 1% level of significance, except for tangibles and empathy, which show no significant relationship. Therefore, Hypothesis 1 may only be partially accepted at the 1% level of significance. In particular, strong, direct and significant relationships were noted between assurance and responsiveness and empathy respectively at the 1% level of significance.

Impact of biographical variables
The influence of the biographical variables (gender, job category, age, tenure, qualification) on health care employees' perceptions of the sub-dimensions of service quality as a result of the process of restructuring was evaluated using tests of differences (Mann-Whitney test, Kruskal-Wallis Analysis of Variance) respectively.

Hypothesis 2.
There is a significant difference in the perceptions of health care employees varying in biographical profiles (gender, job category, age, tenure, qualification) regarding the sub-dimensions of service quality as a result of the process of restructuring (tangibles, reliability, responsiveness, assurance, empathy) respectively (Table 3 to Table 6). Table 3 indicates that there is a significant difference in the perceptions of male and female health care employees regarding empathy, whereby females reflect higher levels of empathy, as reflected in the mean scores (Mean = 3.432), than males (Mean = 3.185) at the 5% level of significance. No other significant differences were noted between males and females regarding the remaining sub-dimensions of service quality (tangibles, reliability, responsiveness, assurance) respectively. Hence, Hypothesis 2 may only be accepted in terms of gender and empathy at the 5% level of significance. Table 4 indicates that there is a significant difference in the perceptions of health care employees varying in job category (clinical and non-clinical staff) regarding assurance and empathy, whereby clinical staff reflected higher levels of assurance, as reflected in the mean scores (Mean = 3.491), than non-clinical staff (Mean = 3.044), and the former also reflected higher levels of empathy (Mean = 3.480) than the latter (Mean = 3.143) at the 1% level of significance. No other significant differences were noted between clinical and non-clinical staff regarding the remaining sub-dimensions of service quality (tangibles, reliability, responsiveness) respectively. Hence, Hypothesis 2 may only be accepted in terms of job category and assurance and empathy respectively at the 1% level of significance. Table 6 indicates that there is a significant difference in the perceptions of health care employees varying in qualification regarding reliability and empathy respectively at the 1% level of significance, and regarding responsiveness and assurance respectively at the 5% level of significance. Mean analyses indicate that the perceptions of health care employees regarding these sub-dimensions of service quality (reliability, empathy, responsiveness, assurance) after the process of restructuring became more positive as their qualifications increased, up until a degree qualification, and dropped for health care employees with a Masters degree. No significant differences were noted in the perceptions of health care employees varying in qualification regarding tangibles after the process of restructuring. Hence, Hypothesis 2 may only be accepted in terms of qualification and reliability, empathy, responsiveness and assurance respectively, at the levels of significance noted above, and not in terms of tangibles.
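To make the tests of differences concrete, the following is a minimal sketch, under the same hypothetical-data caveat as the earlier one, of how the Mann-Whitney test (two groups, e.g. gender), the Kruskal-Wallis ANOVA (three or more groups, e.g. qualification) and a rank correlation could be run in Python with SciPy. Spearman's rho is used as a nonparametric stand-in for Hypothesis 1, since the paper does not state which correlation coefficient was computed.

```python
import numpy as np
import pandas as pd
from scipy.stats import mannwhitneyu, kruskal, spearmanr

# Simulated per-respondent sub-dimension scores and biographical data
# (hypothetical values for illustration only).
rng = np.random.default_rng(1)
n = 143
df = pd.DataFrame({
    "empathy":        rng.uniform(1, 5, n),
    "reliability":    rng.uniform(1, 5, n),
    "assurance":      rng.uniform(1, 5, n),
    "responsiveness": rng.uniform(1, 5, n),
    "gender":         rng.choice(["male", "female"], n),
    "qualification":  rng.choice(["certificate", "diploma", "degree", "masters"], n),
})

# Mann-Whitney U: two independent groups (e.g. gender vs. empathy).
male = df.loc[df["gender"] == "male", "empathy"]
female = df.loc[df["gender"] == "female", "empathy"]
u, p = mannwhitneyu(male, female, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p:.3f}")

# Kruskal-Wallis ANOVA: three or more groups (qualification vs. reliability).
groups = [g["reliability"].to_numpy() for _, g in df.groupby("qualification")]
h, p = kruskal(*groups)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.3f}")

# Rank correlation between two sub-dimensions (cf. Hypothesis 1).
rho, p = spearmanr(df["assurance"], df["responsiveness"])
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```

Nonparametric tests are the natural choice here because Likert-derived scores are ordinal and group sizes are unequal, which is consistent with the study's use of the Mann-Whitney and Kruskal-Wallis procedures.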
The sub-dimensions of service quality after the restructuring
The results reflect that employees were fairly convinced that the process of transformation undertaken in the health care organization led to enhanced service quality in terms of improved empathy, assurance, responsiveness, tangibles and reliability (mean scores ranged from 3.105 to 3.385 against a maximum attainable score of 5), thereby indicating that there is still room for further improvement. In terms of their perceptions of the restructuring in enhancing empathy, employees were not convinced that the restructuring has enabled personnel of the hospital/clinics to better understand the specific needs of patients, nor made the operating hours of the hospital/clinics convenient to all patients. In this regard, Nauert (2002) suggests adopting business strategies such as environmental assessments of market wants, needs and demands, and Ramagem et al. (2011) emphasize that an integrated delivery network will improve access to health services and respond better to people's health needs. Apart from the current strategies used to improve health services, such as flexible hours, efficiency measures and information technology, Powers and Jack (2008) suggest that organizations can increase their flexibility by relying on multi-skilled employees like nurse practitioners and physician assistants. In terms of the service quality dimension of assurance, employees did not believe that the restructuring process made patients feel safer in their interactions with the hospital/clinic. This underscores the importance of South Africa's first service delivery initiative, Bathopele, which means "people first" and is based on eight principles of service delivery (Russell & Bvuma, 2001). With regard to responsiveness, a significant percentage of the employees were not convinced that responding to patients' requests improved after the restructuring. Oliveira-Cruz, Hanson, and Mills (2003) mention that one of the constraints on health service delivery is the shortage and distribution of qualified staff. Nauert (2002) suggests the business strategy of incorporating system linkages with key physicians and other providers and the strengthening of executive direction to enhance business performance. In terms of tangibles, employees felt that the restructuring did not result in the provision of modern equipment. This is ironic, as Clancy (2006) believes that equipment is a fundamental resource for health service delivery, and Oliveira-Cruz, Hanson, and Mills (2003) identify the lack of equipment as a constraint on health care service delivery. In terms of the service quality dimension of reliability, a large percentage of employees felt that the restructuring did not improve reliability of service, in that services were still not delivered according to set dates and times and things are not done right the first time. This result confirms Loevinsohn and Harding's (2005) notion that allocating more resources to health services does not necessarily address the problem of service delivery. This is particularly true considering that the quality of health care service delivery depends on the job satisfaction and empowerment of health workers (Leggat et al., 2010), participation in decision making and psychological support from supervisors (Mukherjee & Malhotra, 2006), as well as effective communication (McCallin, 2001; Robinson et al., 2010).
Furthermore, the five sub-dimensions of service quality as perceived by employees after the process of transformation correlate significantly with each other at the 1% level of significance, except for tangibles and empathy, which do not relate. The implication is that business strategies designed and adopted, during and after the restructuring process, to improve each sub-dimension of service quality individually have the potential to snowball and improve employee perceptions of health care service delivery as a whole. Conversely, failure to manage each of the sub-dimensions of service delivery after the transformation process can perpetuate negative perceptions of the restructuring and bring about a failed process in enhancing health care service delivery. The significant relationships amongst these five sub-dimensions of health care service delivery emphasize the need for an unfragmented and integrated delivery network, as proposed by Ramagem et al. (2011).

The impact of biographical variables
The results also indicate that perceptions of the service quality dimension of empathy after the restructuring process are significantly influenced by gender, job category and qualification. The qualification of employees also influenced their perceptions of reliability, responsiveness and assurance after the restructuring. No other biographical influences were noted. Whilst these biographical influences were noted, employees' perceptions of these sub-dimensions of service delivery may also be influenced by staff attitudes towards patients (McIntyre & Klugman, 2003).

Recommendations and conclusion
The results of the study reflect obvious recommendations which, when implemented, have the potential to result in enhanced service delivery and a more successful restructuring process (Table 7).

Table 7. Recommendations per sub-dimension of service quality.

Empathy: Understand the specific needs of patients. Adopt business strategies such as environmental assessments of market wants, needs and demands. Make operating hours of the hospital/clinics convenient to all patients and take cognisance of those travelling distances. Increase flexibility in service delivery by relying on multi-skilled employees like nurse practitioners and physician assistants.

Assurance: Make patients feel safer in their interactions with the hospital/clinics, as medical setbacks can be a daunting experience. Adopt and effectively implement the eight principles of Bathopele for enhanced service quality and delivery.

Responsiveness: Respond to patients' requests promptly. Ensure that the hospital/clinics have sufficient and qualified staff. Incorporate system linkages with key physicians and other service providers. Ensure strengthened executive direction in order to enhance business performance.

Tangibles: Ensure the provision and effective utilisation of modern equipment. Ensure that staff are trained to use the modern equipment and that it is not under-utilized.

Reliability: Deliver promised services according to set dates and times.
Do things right the first time, as this is imperative in health care. Ensure that staff are satisfied, empowered, engaged in decision making and receive psychological support from supervisors and effective communication, as these motivate employees to perform optimally and enhance service quality.

Overall: Monitor employee attitudes towards patients. Ensure an unfragmented and integrated health care service delivery network.

Health care service delivery is a challenge in many countries, and health care organizations are trying to overcome the obstacles to improved health services. Whilst identifying strategies to improve health services is vital, it is also imperative to ensure that the strategies are selected by taking cognisance of the cultural and environmental context in which they are to be implemented. The provision of quality health care service delivery means ensuring improved and equal access to health services, enlarged access to priority health interventions, reduction in costs to ensure affordability, and responding effectively to people's health needs. Accomplishing this requires not only business and financial strategies but also effective human resource management and communication strategies, because keeping staff motivated is imperative in nurturing proper attitudes towards patients, thereby enhancing health care service quality. Regular service delivery surveys will provide feedback on patients' perceptions of service quality in relation to tangibles, reliability, responsiveness, assurance and empathy, and provide insight into staff attitudes and behaviours. People must be provided with a well coordinated continuum of services in an integrated way in order to ensure effective service quality.

Table 1. Descriptive statistics: sub-dimensions of service quality
Table 2. Intercorrelations: sub-dimensions of service quality
Table 3. Mann-Whitney test: sub-dimensions of service quality and gender
Table 4. Mann-Whitney test: sub-dimensions of service quality and job category
Table 5. Kruskal-Wallis ANOVA: sub-dimensions of service quality and age and tenure
Table 6. Kruskal-Wallis ANOVA: sub-dimensions of service quality and qualification
KLF10 as a Tumor Suppressor Gene and Its TGF-β Signaling

Krüppel-like factor 10 (KLF10), originally named TGF-β (transforming growth factor beta) inducible early gene 1 (TIEG1), is a DNA-binding transcriptional regulator containing a triple C2H2 zinc finger domain. By binding to Sp1 (specificity protein 1) sites on the DNA and interacting with other regulatory transcription factors, KLF10 encourages and suppresses the expression of multiple genes in many cell types. Many studies have investigated its signaling cascades, but apart from the TGF-β/Smad signaling pathway, these are still not clear. KLF10 plays a role in proliferation and differentiation as well as apoptosis, just like other members of the SP (specificity proteins)/KLF (Krüppel-like factors) family. Recently, several studies reported that KLF10 KO (knock out) is associated with defects in cells and organs, such as osteopenia, abnormal tendons or cardiac hypertrophy. Since KLF10 was first discovered, several studies have defined its role in cancer as a tumor suppressor. KLF10 demonstrates anti-proliferative effects and induces apoptosis in various carcinoma cells, including pancreatic cancer and leukemia, and it also plays a role in osteoporosis. Collectively, these data indicate that KLF10 plays a significant role in various biological processes and diseases, but its role in cancer is still unclear. Therefore, this review was conducted to describe and discuss the role and function of KLF10 in diseases, including cancer, with a special emphasis on its signaling with TGF-β.

SP/KLF Family
Transcription factors of the SP (specificity protein) and KLF (Krüppel-like factor) families contain three Krüppel-like zinc fingers and are collectively known as the SP/KLF family. While members of this family also have a unique amino-terminal (N-terminal) end that acts as the functional domain and allows binding to specific partners, the N-terminal domains of the various KLFs vary, and these different domains mediate different molecular functions [1]. In the early 1980s, SP1 was first identified as a protein capable of binding GC- and GT-rich regions or the 5'-CACCC-3' DNA sequence in the SV40 promoter, indicating that it could serve as a transcriptional regulator [2]. To date, a total of 27 members have been identified and characterized in mammals (human), comprising nine members of the SP subfamily (SP1-SP9) and 18 members of the KLF family (KLF1-KLF18) (Table 1) [3][4][5]. They are categorized as members of the SP/KLF family because of the resemblance of the zinc finger motif located toward the C-terminus [6]. The Buttonhead (Btd) box domain CXCPXC, which differentiates SP from KLFs, is present only in the SP family, just 5' of the zinc finger DNA-binding domain [7], whereas the N-terminal regions of KLF/SP transcription factors are extremely flexible and consist of different combinations of transactivation/repression domains. SP/KLF factors have been found to be engaged in fundamental cell processes such as growth, differentiation, apoptosis and angiogenesis; therefore, the family is involved in numerous aspects of tumorigenesis and other diseases, with members behaving as either activators or repressors (Table 1). Mammalian KLFs have been divided into three groups based on shared domains. Most of the members of this family are widely expressed, but some have restricted tissue expression.
Formerly these KLFs were named as they were identified; KLF10 and KLF11 were originally identified as early genes induced by transforming growth factor β and were named TGF-β inducible early gene 1 (TIEG1) and TGF-β inducible early gene 2 (TIEG2), respectively [8,9]. Conversely, KLF1 is expressed specifically in erythroid tissue and is therefore referred to as EKLF [6], KLF2 is found in the lung and therefore referred to as LKLF [10], KLF3 is widely expressed and therefore called basic BKLF [11], KLF4 was initially found to be gut-enriched and called GKLF [12,13], KLF5 is found in the intestines and therefore known as IKLF [14], and KLF15 was initially found in the kidney and so is referred to as KKLF. KLF9 was identified as a basal transcription element binding (BTEB) protein [13], with KLF5 and KLF13 as homologs (BTEB2 and BTEB3, respectively) [15], while KLF17 was first discovered as a germ cell-specific gene encoding a zinc finger protein 393 (Zfp393) [16]. KLF18 genes/pseudogenes are found in most placental mammals (for example, human and mouse) [3]. The key feature of SP/KLF proteins is the presence of a DNA-binding domain consisting of three Krüppel-like zinc fingers, classified into different subfamilies based on the presence of conserved motifs within their N-terminal domains outside the zinc finger domain (Figure 1). KLF members are divided into distinct groups according to functional similarities revealed upon structural analysis (reviewed in ref [17]). Most KLF proteins were randomly named by their discoverers, which led to the development of a new nomenclature by the HGNC (Human Gene Nomenclature Committee) that serially designates each KLF protein by number [18]. Several groups have described various approaches to identify all of the members of the SP/KLF family in human and mouse genomes, and the effects of these proteins in more detail [4,[19][20][21][22][23][24][25]. As shown in Figure 1, the protein-binding domains within the protein structures of KLF Group 1 (KLF 3, 8, and 12) serve as transcriptional repressors through their interaction with the C-terminal binding protein (CtBP). Group 2 (KLF 1, 2, 4, 5, 6, and 7) function mostly as transcriptional activators and bind to histone acetyltransferases (HATs), and Group 3 (KLF 9, 10, 11, 13, 14, and 16) have repressor activity through the presence of a Sin3A-binding site. KLF 15, 17 and 18 contain no defined protein-interaction motifs. Eighteen mammalian KLFs have been recognized to date, and these have received increased attention because of their basic, diverse biological processes and their contributions to human diseases (Table 1). Here, we review KLF10 to understand its role and function in diseases, including cancer, as a tumor suppressor, as well as its interaction with TGF-β signaling.

Figure 1. Each protein contains three zinc finger motifs at the C-terminus; the two conserved cysteine residues and two conserved histidine residues for zinc binding are highlighted.

KLF10 Induced Mechanism of Gene Activation
The transforming growth factor beta (TGFβ)-like proteins form a large family of related growth factors comprising at least 30 members in mammals. TGFβ has three isoforms, TGFβ1, 2 and 3, which function as primary mediators of TGFβ signaling. These three ligands are structurally related to activins, nodals, some growth and differentiation factors (GDFs), and the bone morphogenetic proteins (BMP) [26].
TGFβ superfamily members regulate fundamental cell processes such as proliferation, differentiation, death, cytoskeletal organization, adhesion, and migration. Smad proteins are the major intracellular mediators of the TGFβ signaling pathway, controlling transcription of its target genes [27]. TGF-β1 activates SMAD-dependent and -independent pathways to exhibit its biological activities [28]. Members of the SMAD family are classified into three groups: the R-SMADs, or receptor-regulated SMADs, which include SMAD1, SMAD2, SMAD3, SMAD5, and SMAD8; the Co-SMAD (common SMAD), which has only one member, SMAD4; and the I-SMADs (inhibitory SMADs), including SMAD6 and SMAD7. It is well known that TGF-β signaling is stimulated by activating the downstream mediators SMAD2 and SMAD3, and is negatively regulated by the inhibitory SMAD7 [29]. In addition, several KLFs are linked to TGFβ signaling; TGFβ induces the expression of early response transcription factors such as the Sp1/KLF-like zinc-finger proteins KLF10 and KLF11 [9,30]. KLF10, which works as an effector protein in TGFβ-mediated cell growth control and differentiation, was originally defined as TGF-β inducible early gene 1 (TIEG1) in human osteoblast (OB) cells [10]. The KLF10 N-terminal protein segments also contain three unique repression domains, R1, R2, and R3 [10][11][12].
The N-terminal region of the Sp1-like transcription factor family is variable; therefore, these three transcriptional repressor domains are a critical feature of the KLF10 and KLF11 subfamily. The GC-rich sequences bound by KLF10 play important roles in the regulation of a large number of genes essential for various cellular functions, including cell proliferation, differentiation, and apoptosis [13]. KLF10 is involved in gene expression in various cell types and serves as a target gene for many signaling pathways. It is known to be induced by estrogen, TGF-βs, bone morphogenetic protein (BMP), nerve growth factor and epidermal growth factor (EGF), depending on cellular and environmental circumstances. One of its potential roles as a transcription factor is that it mimics the anti-proliferative effect of TGF-β and induces apoptosis [31]. KLF10 and TGF-β induce apoptosis through the formation of ROS (reactive oxygen species) and a loss of the mitochondrial membrane potential [32]. KLF10 facilitates TGF-β signaling: after phosphorylation of the carboxy-terminal serine residues of the internal mediators Smad2/Smad3 via the TGF-β type I receptor and their interaction with Smad4, the nuclear-localized Smad complex induces expression of KLF10, which in turn binds the promoters of Smad2, Smad7, and TGF-β1. KLF10 overexpression increases the endogenous TGF-β-regulated genes p21 and PAI-1 [33]. Additionally, KLF10 is regulated by other members of the TGF-β superfamily, such as BMPs, activins and GDNF (glial cell derived neurotrophic factor), which suggests that KLF10 might act in diverse signaling pathways at the transcriptional level.

KLF10 Role in Various Diseases
KLF10 has potential for use as a marker for various diseases including diabetes, cardiac hypertrophy, and osteoporosis (Table 2).

Diabetes
High blood and tissue concentrations of glucose play a significant role in the development of vascular complications in diabetes mellitus (DM) patients. Loss of KLF10 in liver tissue suppresses glycolytic proteins and encourages gluconeogenic and lipogenic proteins [43]. KLF10 is a circadian-clock-controlled transcription factor that can suppress lipogenic genes involved in glucose and lipid metabolism in the liver [44] and that also affects hepatic gluconeogenesis, which contributes to T2DM [36]. However, no apparent pathological defects of pancreatic function were observed under basal conditions in knockout mouse models of KLF10 [45,46]. Using real-time PCR analysis, Zitman-Gal et al. found that the expression of KLF10 was also upregulated in a diabetic-like environment, whereas the addition of calcitriol significantly down-regulated KLF10 mRNA expression [47].

Bone Disease
TGF-β, which plays a major role in osteoblast (OBL) and osteoclast (OCL) functions, is present in large amounts in the skeleton [48]. KLF10 is one of the key transcriptional regulators in osteogenesis acting through the TGF-β signaling pathway [49]. KLF10 knock-out (KO) mice have slow osteoblast production of RANKL (receptor activator of nuclear factor kappa-B ligand) and high levels of osteoprotegerin (OPG), which delays OCL differentiation, leading to reduced bone turnover and a loss of bone (osteopenia) [34,35,46].
KLF10 KO osteoclast precursors also differentiate slowly and show increased activation of the AKT (a serine/threonine-specific protein kinase) and MAPK/ERK (mitogen-activated protein kinase/extracellular signal-regulated kinase) signaling pathways, which is consistent with the roles of these kinases in promoting osteoclast survival. Higher RANKL concentrations can restore this defect, suggesting that KLF10 plays a role in osteogenesis through RANKL signaling [50].

Heart Hypertrophy
Microarray analysis of heart tissue from the left ventricles of KLF10 KO male, but not female, mice aged 16 months showed a 14-fold increase in expression of the KLF10 target gene Pttg1, together with increased fibrosis and increased wall thickness relative to wild-type animals [37]. Another study reported that KLF10 is downregulated in angiotensin II (Ang II)-induced hypertrophy, through repression of expression of the cardiac transcription factor GATA4 and of mRNA levels of the hypertrophy-related genes atrial natriuretic factor (ANF) and brain natriuretic peptide (BNP) [51]. TGF-β1 and KLF10 control T regulatory cell differentiation and suppressor function and act as regulators of CD4+CD25− T cell activation. KLF10, in response to TGF-β1, can transactivate both the TGF-β1 promoter in CD4+CD25− T cells and the Foxp3 promoter, which is consistent with the high expression of KLF10 in T regulatory cells [38]. KLF10 plays a critical role in the regulation of atherosclerotic lesion formation in mice by targeting TGF-β1 to regulate CD4+CD25− T cells and Tregs. KLF10 also complexes with pituitary tumor-transforming gene-1 (Pttg1), which is one of its target genes and plays an important role in cardiac hypertrophy [37,52].

Other Diseases
KLF10 is able to physically associate with the FOXP3 gene to induce transcription and can either positively or negatively regulate FOXP3 through its differential association with p300/CBP-associated factor (PCAF) or the histone deacetylase binding protein Sin3, respectively. Inactivation of immune genes, such as FOXP3, may cause human diseases such as colitis [53]. In the absence of KLF10, colonic macrophages express lower levels of TGFβRII and show reduced Smad2 phosphorylation in response to TGF-β1 stimulation, which contributes to colitis [41,54]. Moreover, transcriptomic analysis of peritoneal cells in a mouse model of sepsis caused by infection with a non-pathogenic strain of Escherichia coli revealed that KLF10 was down-regulated at 2 h [45], while KLF10 induces expression of TGF-β1, an anti-inflammatory cytokine that regulates T cell activation [29]. KLF10 expression was significantly increased in diet-induced nonalcoholic steatohepatitis (NASH) and in collagen-producing activated hepatic stellate cells. This up-regulation of KLF10 increased TGF-β signaling genes and suppressed ChREBP expression [40]. Furthermore, KLF10 plays a role in hyperglycemia [55], human chronic obstructive pulmonary disease and liver cirrhosis [56], tendon repair [57], and hypoxia [58]. However, the actual functional role and mechanism of KLF10 in various pathophysiological conditions are still uncertain.

Phenotype in KLF10 Deficient Models
KLF10-deficient animals are widely used to determine its role in different cellular processes and diseases. The KO phenotype of KLF10 shows a normal lifespan in mice, but with some defects, such as in the microarchitecture and mechanical properties of tendons.
The tail tendons of KLF10 KO mice were significantly less stiff than those of wild-type controls at 3 months of age, while no difference existed at 1 or 15 months, indicating age-dependent changes in the mechanical properties of the tendon in KLF10 KO mice [59]. KLF10 plays an important role in skeletal development and homeostasis; significantly weaker bones and reduced amounts of cortical and trabecular bone were noticed in the absence of KLF10 [60]. Additionally, transmission electron microscopy revealed that osteocytes display defects in their morphology, density and surrounding bone matrix [61]. Moreover, type-I collagen, which is the most abundant collagen found in tendons, was significantly decreased in KLF10 KO tendons [62]. KO of KLF10 produces gender-specific osteopenic phenotypes in females, characterized by a decrease in the total number of functional/mature OBLs, indicating a potential role of KLF10 in mediating estrogen signaling (as well as TGFβ) in the skeleton [63][64][65]. Moreover, KLF10 null mice develop age-related cardiac hypertrophy [66]. Loss of KLF10 can delay wound healing and increase Smad7 in wounds, which weakens wound contraction, granulation tissue formation, collagen synthesis, and re-epithelialization [32,33]. Its loss also reduces endothelial progenitor cell (EPC) function and TGF-β1 responsiveness, resulting in impaired blood flow recovery after hindlimb ischemia in systemic KLF10−/− mice [67]. Bone marrow lacking KLF10 displays both basal defects in the ability of cells to migrate and incorporate into the vessel wall and complex paracrine defects in the ability to facilitate endothelial cell (EC) growth and migration [68]. Role of KLF10 as a Tumor Suppressor in Various Cancers KLF10 plays a vital role in many biological processes and diseases, including tumorigenesis (Table 3). Many studies have shown that KLF10 acts as a tumor suppressor through TGF-β signaling by playing an important role in the inhibition of cell proliferation and the induction of apoptosis (Figure 2) [30,69]. KLF10 is an effective repressor of cancer cell proliferation: overexpression of KLF10 reduced cell proliferation in many cancer types, whereas in the absence of KLF10 cell proliferation increased, with decreased Smad-dependent transcription and, importantly, decreased Smad2/3 in association with a prolonged increase in Smad7 expression. Moreover, KLF10 is a valuable tool to enhance the death of p53-deficient cancer cells in association with low-dose chemotherapy [70]. Liver Cancer Loss of TGF-β sensitivity has been assumed to be an event in HCC (hepatocellular carcinoma) development [84,85]. Activation of the KLF10 gene can promote growth inhibition and apoptosis of TGF-β-susceptible human HCC cells, as well as inhibit stathmin promoter activity, suggesting that stathmin is a direct target of KLF10 [79]. Furthermore, KLF10 was found to repress glutathione transferase P (GST-P) promoter activity, an excellent tumor marker in hepatocarcinogenesis, by binding to GST-P silencer 2 [86]. Lack of KLF10 blocked cellular proliferation of hepatocytes by enhancing the TGF-β/Smad pathway during liver tumorigenesis [80]. Koczulla et al. reported that KLF10 is highly expressed in liver cirrhosis [56]; however, there is inadequate information regarding KLF10, and the actual functions of KLF10 in liver tumorigenesis are controversial.
Pancreatic Cancer KLF10 expression is inversely associated with pancreatic cancer and, therefore, KLF10 can be used as a predictive indicator for pancreatic cancer stages [87]. KLF10 is expressed in both acinar and ductular epithelial cell populations and plays a significant role in pancreatic β-cells. Loss of this gene increases p21 Cip1 in islet cells, which is associated with impaired glucose tolerance and impaired insulin secretion [30,88]. KLF10 overexpression in a TGF-β sensitive human pancreatic cell line is sufficient to induce apoptosis [89]. Furthermore, its overexpression induced cell cycle arrest at the G1- to S-phase transition, inhibiting proliferation of human pancreatic cancer SW1990 cells [77]. KLF10 is located on chromosome 8q22, where mutations are quite frequent in pancreatic cancers [90]; however, mutational screening of a panel of twenty-two human pancreatic cell lines showed no alteration in KLF10 expression [78].
Table 3 (excerpt): Skin, suppressor: loss of KLF10 leads to enhanced tumor formation and progression; p21 ↑ transcriptional activation in a p53-independent manner [83]. Multiple myeloma, suppressor: microRNA-410 accumulation regulates cell proliferation and apoptosis by targeting KLF10 via activation of the PTEN/PI3K/AKT pathway in multiple myeloma.
Figure 2. TGF-β signaling and its target diseases mediated by the KLF10 gene. In response to TGFβ ligand binding, KLF10 gene expression is induced in a Smad dependent manner and KLF10 plays a role as a tumor suppressor in many cancers. TGFβ signaling occurs through the TGFβ receptors (TβRI and TβRII). Binding of TGFβ ligand to TβRII facilitates phosphorylation of TβRI, which in turn phosphorylates the SMAD2 and SMAD3 proteins. After phosphorylation, Smad2/Smad3 form a complex with Smad4 and translocate to the nucleus to induce expression of KLF10. Subsequently, KLF10 binds to the promoters of Smad2, Smad7, and TGF-beta1, whereas the expression of the inhibitory Smad7 is blocked to disrupt the negative feedback loop. Importantly, KLF10 serves as a positive feedback loop for regulating TGFβ signaling by inducing the expression of SMAD2 and inhibiting the expression of the inhibitory SMAD7 gene in many cancers.
Lung Cancer KLF10 occupies GC-rich sequences in the promoter region of the EMT (epithelial-mesenchymal transition)-promoting transcription factor SLUG/SNAI2 (snail family zinc finger 2) and acts as a main factor of the TGF-β1 related EMT program [91]. Mishra et al. reported that KLF10 suppresses TGF-β induced EMT by binding and repressing the SNAI2 promoter as a direct KLF10 target gene through HDAC1 in lung and pancreatic cancer models [82]. Furthermore, in vitro and in vivo studies showed decreased tumor growth and increased chemo-sensitivity to gemcitabine through G0/G1 cell cycle arrest after knockdown of Cul4A (cullin 4A), an important regulator of proliferation and cell cycle progression in lung cancer. In addition, Cul4A knockdown increases the expression of KLF10, the cyclin-dependent kinase inhibitor p21/WAF1, and TGF-β1, which act as tumor suppressors [92]. Breast Cancer KLF10 and its target genes play important roles in breast cancer. KLF10 has been indicated as a marker for breast cancer: breast tumor tissues of different stages from different populations show decreased mRNA levels of KLF10, Smad2, and BARD1 (BRCA1-associated RING domain 1), while Smad7 is inversely correlated with KLF10 [73]. KLF10 is an anti-metastasis gene that significantly prevents breast cancer cell invasion, suppresses mammary tumorigenesis and decreases lung metastasis by inhibiting EGFR (epidermal growth factor receptor) gene transcription through the EGFR signaling pathway [93]. Furthermore, Hsu et al. reported a significant E2/Klf10/BI-1/cytoplasmic calcium pathway in which E2 (estrogen) induces apoptosis through increased expression of KLF10, which decreases BI-1 (Bax inhibitor-1) transcription and finally increases the concentration of cytoplasmic Ca2+ [94]. The above studies strongly support KLF10 as a tumor suppressor protein that plays an inhibitory role in the proliferation of breast cancer. Colon Cancer KLF10 is believed to be a tumor suppressor gene in many cancers (Table 2), including human colorectal cancer. Additionally, it is one of the members of signal transduction in the peroxisome proliferator-activated receptor gamma (PPARγ) pathway in human colorectal cancer cells. Human colorectal cancer cells express abundant PPARγ, but its inhibitory function is very low, signifying a defect in the PPARγ pathway [95]. Treatment of colon cancer cells with 15-hydroxy-eicosatetraenoic acid (15S-HETE), an endogenous ligand for PPARγ, arrests the growth of colon cancer cells via a PPARγ dependent pathway involving increased expression of KLF10 and decreased expression of Bcl-2 (B cell lymphoma 2) [72]. Human Prostate Cancer Activation of the TGF-β1 pathway triggers the doxazosin-induced apoptotic effect in prostate cancer cells.
Doxazosin is an α1-selective alpha blocker used to treat high blood pressure and urinary retention related to benign prostatic hyperplasia (BPH). In vitro analysis has shown that treatment of PC-3 prostate cancer cells with doxazosin resulted in a marked induction of KLF10 and Smad4 mRNA levels, as well as a transient decrease in Smad7 mRNA expression. Smad4 is an important regulator of TGF-β1 signaling and apoptosis in a variety of cell lines [71,96,97]. Metastatic Brain Tumors Metastasis is a process in which cancerous cells leave the initial or primary tumor, enter the bloodstream or lymphatic system, and spread to different or secondary sites within the body; it occurs in numerous malignant neoplasms, including breast, lung, and ovarian cancer. Gene expression profiling using a 17k-expression array of metastatic brain tumors from primary lung adenocarcinoma revealed that KLF10, a gene involved in apoptosis, was expressed at insufficient or abnormally low levels in metastatic brain tumors, suggesting that KLF10 acts as a tumor-repressor gene [75]. Renal Cancer KLF10 facilitates up-regulation of TGF-β1 in VHL (von Hippel-Lindau) deficient tumors. In a Luc-reporter assay, a potential KLF10 binding site was identified in the TGF-β1 promoter upstream of the transcription initiation site, which is also recognized as an Sp1-binding site. KLF10 was suppressed in a renal cancer cell line by wild-type VHL but not by mutant VHL, suggesting that it may also serve as a VHL target. Stimulation of the TGF-β1 promoter by KLF10 in HAEo(−) and 293 cells suggests that KLF10 is a VHL target that regulates the TGF-β1 promoter in renal cell carcinoma [76]. Other Cancers Smad signaling is present in certain human lymphoma cells. In lymphoma, the contribution of Smads to TGF-β induced apoptosis is supported by the increased expression of KLF10, a Smad-responsive gene [74,98]. KLF10 has been recognized as a direct downstream target of miR-410 in multiple myeloma (MM) cells and mediates the effects of miR-410 in MM, resulting in PTEN (phosphatase and tensin homolog)/AKT activation [42,99]. KLF10 is a circadian transcriptional regulator that links the molecular clock to energy metabolism [44]. In ovarian cancer, the circadian-regulated transcription factor KLF10 serves as a risk factor for epithelial ovarian cancer, its histopathologic subtype, and invasiveness [81]. The zinc finger transcription factor KLF10 regulates myeloid-specific activation of the leukocyte integrin CD11d promoter [100]. Co-transfection and electrophoretic mobility shift assays have revealed that KLF10 competes with SP proteins for binding to overlapping sites in the CD11d promoter. Stimulation of CD11d expression during differentiation of myeloid cells is mediated through increased binding of KLF10 to the CD11d promoter [101]. Concluding Remarks KLF10 plays an active role in the etiology and development of many mammalian diseases and appears to have unique tissue-specific roles that were identified primarily using in vivo experiments, including gene knockout models. KLF10 is believed to play a crucial role in the inhibition of cancer cell proliferation and the promotion of apoptosis, which strongly indicates its role as a tumor suppressor. However, there is limited information regarding the mechanism of its role in cancer. The majority of KLF10's roles and gene targeting are mediated through TGF-β signaling.
Taken together, the results of the studies discussed indicate that KLF10 can have profound effects as a tumor suppressor in many cancers via TGF-β/Smad dependent and independent pathways. Further understanding of its function and target genes is needed to provide additional insights into the mechanisms of action of KLF10 and its role in diseases, including cancer.
Diagnosing ANCA-associated vasculitis in ANCA positive patients Supplemental Digital Content is available in the text Introduction Antineutrophil cytoplasmic antibody (ANCA)-associated vasculitis (AAV) is a rare, necrotizing vasculitis that predominantly affects small vessels. AAV includes microscopic polyangiitis (MPA), granulomatosis with polyangiitis (GPA), and eosinophilic granulomatosis with polyangiitis (EGPA). [1,2] These clinical conditions are often associated with circulating ANCA directed against either proteinase 3 (PR3) or myeloperoxidase (MPO). Untreated and severe AAV can be fatal within months. [3] Fortunately, advances in therapies have led to an improved prognosis over the last decades. [4,5] Adequate therapy requires an early diagnosis, but diagnosing AAV can be challenging. [6,7] Ideally, the diagnosis of AAV is confirmed with a biopsy. [8] However, in clinical practice and even in large trials, a biopsy is not always performed and a diagnosis is often made based on clinical features in combination with positive ANCA serology. [5,9,10] Importantly, positive ANCA serology can also be found in other conditions with systemic symptoms, or even in asymptomatic patients. [11][12][13] Classification systems, such as the American College of Rheumatology (ACR) criteria [14] and the Chapel Hill Consensus Conference (CHCC) guidelines, [1] have several limitations. [15] The ACR criteria were developed in the 1980s, when ANCA was not routinely assessed and the classification of vasculitis did not yet include MPA. The CHCC classification was developed as a nomenclature system and does not provide clear diagnostic criteria. Hence, the lack of an established diagnostic system, the complex symptomatology of AAV and the limited specificity of ANCA may lead to a delayed and unstandardised diagnosis of AAV in clinical practice. The aim of the present study was to identify clinical and laboratory variables that lead to the diagnosis of AAV in ANCA positive patients in clinical practice. Furthermore, the sensitivity and specificity of several ANCA cut-off values for a clinical diagnosis of AAV were explored. Methods We performed a retrospective cohort study in the Northwest Clinics, a teaching hospital in Alkmaar, The Netherlands. The institutional review board approved the study and the medical ethical committee waived requirements for informed consent, due to the retrospective nature of the study. A computerised search for the assessment of ANCA in the local laboratory between February 1, 2005 and February 1, 2015 was performed. ANCA serology was examined by indirect immunofluorescence (IIF) on neutrophil substrate (NOVA Lite ANCA, INOVA Diagnostics Inc, San Diego) and, if positive, followed by immunoassays for the detection of antibodies to PR3 and MPO (Autostat II Anti-PR-3 and Anti-MPO ELISAs, Hycor Biomedical Ltd, UK, from February 2005 until August 2012, and EliA PR3 S and EliA MPO S run on a Phadia 250 analyzer, Thermo Fisher Scientific, Immunodiagnostics, Sweden, from August 2012 until the end of the study period). In patients with a positive IIF, all subsequent ANCA assessments were performed immediately with anti-PR3 and anti-MPO specific immunoassays, leaving out IIF. Upper limits of the normal range were provided by the manufacturer of the assays: MPO >5 IU/mL and PR3 >8 IU/mL before 2012 and MPO >5.0 IU/mL and PR3 >3.0 IU/mL after 2012.
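Because the Methods below express titres from both assay eras as multiples of these assay-specific cut-off values, the conversion is shown here as a minimal Python sketch; the function and variable names are illustrative and not taken from the study's own code.

```python
# Normalize a raw ANCA titre to a multiple of its assay-era cut-off.
# Cut-offs are the manufacturer-provided upper limits quoted above.
CUTOFFS_IU_ML = {
    "pre2012":  {"MPO": 5.0, "PR3": 8.0},   # Hycor Autostat II ELISAs
    "post2012": {"MPO": 5.0, "PR3": 3.0},   # EliA assays on Phadia 250
}

def fold_over_cutoff(titre_iu_ml: float, analyte: str, era: str) -> float:
    """Return the titre expressed as a multiple of the cut-off value."""
    return titre_iu_ml / CUTOFFS_IU_ML[era][analyte]

# Example: a PR3 titre of 12 IU/mL measured in 2014 is 4x the cut-off,
# i.e. exactly at the >=4x threshold examined later in the Results.
print(fold_over_cutoff(12.0, "PR3", "post2012"))  # 4.0
```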
Medical records of all patients with one or more positive MPO and/or PR3 ANCA tests were reviewed for a clinical diagnosis of AAV (i.e., GPA, MPA, or EGPA). Demographic and clinical parameters were collected: age at presentation, sex, symptoms at presentation, number of affected organ systems, date and level of the first positive ANCA titre, laboratory parameters, and comorbidities. Furthermore, the clinical diagnosis (i.e., AAV or alternative diagnosis), date of diagnosis, and histological data were recorded. If a diagnosis was revised over time, this was recorded as well. Symptoms per organ system were recorded similarly to the symptoms described in the Birmingham Vasculitis Activity Score (BVAS/WG). [16] Statistical analysis Patients with a clinical diagnosis of AAV were compared with patients without a clinical diagnosis of AAV. Chi-square tests were used for categorical data. Continuous data were analysed by the unpaired Student t test. The number of affected organ systems was analysed with the use of the Mann-Whitney U test. The results of the different ANCA assays were transformed into multiples of their respective cut-off values. A receiver-operating characteristic (ROC) curve was calculated for the sensitivity and specificity of several ANCA cut-off values for a clinical diagnosis. In order to identify indicators for AAV in ANCA positive patients, a multivariable logistic regression model was developed. Fifty bootstrap samples were applied with backward elimination (P < 0.05) in order to establish the final predictors in the model. Thereafter, the calculated shrinkage factor was used to adjust the original coefficients, in order to correct for optimism. A P value <0.05 was considered to be statistically significant. A sensitivity analysis was performed by repeating the analysis after the exclusion of patients with a clinical diagnosis of AAV that was not biopsy proven. For data management and statistical analysis, Statistical Package for Social Sciences (SPSS) version 20.0 (IBM, Armonk, NY, USA) and RStudio 0.98.932 (Boston, MA, USA) were used. Enrolment Between February 1, 2005 and February 1, 2015 a total of 8403 IIF tests for ANCA were performed, of which 1238 tested positive (27% p-ANCA, 71% c-ANCA pattern, 1% aspecific pattern) in 279 patients. A total of 5370 immunoassays for PR3 and/or MPO ANCA were performed, of which 1218 samples tested positive in 239 patients (Fig. 1). Two of the 239 anti-MPO or anti-PR3 positive patients were excluded due to a lack of data in the medical records. Patients Of the 237 included MPO and/or PR3 ANCA positive patients, 57% were men, with a mean age of 57 ± 19 years. ANCA was PR3 positive in 51% versus MPO positive in 49%. The median follow-up was 5.8 (25th-75th percentiles 2.7-9.4) years and the median time between the request of the first positive ANCA titre and the diagnosis of AAV was 15 days (9.0-36.0). Of the 237 patients, 119 patients (50%) were diagnosed with AAV between 1991 and 2015. In 9 patients the time until the diagnosis of AAV was more than 4 months. None of the diagnoses were revised during follow-up. A total of 54 (45%) had a biopsy-proven vasculitis (34 renal biopsies, 12 [deep] skin biopsies, 4 nose biopsies, 3 lung biopsies) and in 28 patients (24%) a biopsy revealed aspecific, inflammatory findings. ANCA titres First ANCA titres were available in 226 patients, since 5 patients were referred from another hospital and 6 patients were diagnosed before the use of immunoassays for the detection of antibodies (1993).
These patients were excluded from the analysis. An ROC curve of ANCA cut-off values as a determinant of the clinical diagnosis showed an area under the curve of 0.87 (95% CI 0.82-0.92), shown in Fig. 3. Coordinates of the sensitivity and specificity at several cut-off levels are shown in Table 3 (Sensitivity and specificity by the number of times the cut-off value; pooled analysis for the different ANCA immunoassay techniques in the local laboratory). An ANCA titre of ≥4 times the usual cut-off resulted in a sensitivity of 83.5% and a specificity of 78.6% for a clinical diagnosis of AAV. In patients with an alternative diagnosis, the ANCA level was ≥4 times the cut-off in only 21%. The association between the ANCA titre and the clinical diagnosis of AAV was comparable for the different tests that were used in the laboratory before and after 2012, shown in Fig. 4. Fig. 5 shows the proportion of patients diagnosed with AAV subdivided by the level of the ANCA titre. Multivariable analysis After the exclusion of patients with missing first ANCA titres (n = 11), 226 patients were included in the multivariable model (Table 4). In a sensitivity analysis after the exclusion of patients with a clinical diagnosis of AAV that was not biopsy proven, the model remained largely unchanged (see Tables 5-12 in the supplemental content for detailed information about these models and the sensitivity analysis, http://links.lww.com/MD/B359). Discussion Our findings confirm that both MPO and PR3 ANCA can be positive in a variety of clinical conditions. Higher ANCA levels, PR3 as well as MPO, and more affected organ systems were associated with a clinical diagnosis of AAV in our cohort. To the best of our knowledge, this is the first study that addresses the role of the level of both the MPO and PR3 ANCA titre in diagnosing AAV. In the 4 different immunoassays that were used for the ANCA test, ≥4 times the upper limit appeared a reasonable cut-off point to discriminate between AAV and alternative diagnoses. Recent studies have provided some evidence for an association between the ANCA titre and disease activity. A recent study found a moderate association between the ANCA level and relapses in patients with renal involvement. [17] Another study found an association between high PR3 ANCA levels and decreased patient survival. [18] The value of these data in clinical practice is still under debate. [19] Data on the role of the ANCA titre as a diagnostic tool are scarce. In a study by Noel et al, [20] 19 patients diagnosed with GPA had higher PR3 ANCA titres as compared with patients without GPA. This observation had so far not been described in patients with anti-MPO positivity. Our data suggest that the ANCA titre is potentially useful in clinical practice as a diagnostic tool. Additional studies in other cohorts with different tests would be required to establish the optimal cut-off of ANCA titres that could be incorporated in a diagnostic score system. As shown by others and confirmed by our own data, ANCA can be detected in a variety of medical conditions apart from AAV. Some of the conditions that have been reported previously are: chronic inflammatory processes, malignancies, infections, and the use of drugs such as propylthiouracil. [21,22] The significance of ANCA positivity in patients without AAV remains largely unexplained.
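To make the shape of the reported cut-off analysis concrete, the sketch below reproduces an ROC calculation and the sensitivity/specificity at the ≥4x threshold with scikit-learn on simulated data; the labels and titres are synthetic, and only the ≥4x threshold and the published AUC of 0.87 are taken from the text.

```python
# ROC analysis sketch on simulated fold-over-cutoff ANCA titres.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# y = 1 for a clinical diagnosis of AAV, 0 for an alternative diagnosis
y = rng.integers(0, 2, size=226)
# Hypothetical titres (as multiples of the cut-off), higher on average in AAV
titre_fold = np.where(y == 1,
                      rng.lognormal(2.0, 1.0, 226),
                      rng.lognormal(0.5, 1.0, 226))

print("AUC:", roc_auc_score(y, titre_fold))  # the study reports 0.87 (0.82-0.92)

# Sensitivity and specificity at the >=4x-the-cut-off decision threshold
pred = titre_fold >= 4.0
sens = (pred & (y == 1)).sum() / (y == 1).sum()
spec = (~pred & (y == 0)).sum() / (y == 0).sum()
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")  # study: 83.5%/78.6%
```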
In animal models it has been demonstrated that both MPO and PR3 ANCA are pathogenic. For instance, mice infused with ANCA auto-antibodies presented with clinical and histological features of glomerulonephritis. [23,24] Clinical evidence for ANCA pathogenicity in humans is scarce. One frequently cited case report describes a newborn child who developed MPA secondary to transplacental transfer of maternal MPO antibodies. [25] Indirect clinical evidence of pathogenicity is the efficacy of plasma exchange in severe AAV. [26] In patients with immunoglobulin A (IgA) nephropathy and seemingly accidental positive ANCA serology, ANCA positive patients showed more severe clinical and histological features when compared with ANCA-negative IgA nephropathy patients. [27] This implies ANCA pathogenicity, even in patients without AAV. On the other hand, the fact that in our cohort patients with apparently mild symptoms were ANCA positive is an argument against ANCA pathogenicity. One possible explanation for these conflicting data is the recent finding that some ANCA are more pathogenic than others, depending on the epitope specificity of the antibody. In MPO antibodies, certain epitopes were found to be specific for active disease, while other epitopes remained present during remission or were also present in healthy individuals. [28,29] Perhaps the association between the ANCA titre and a clinical diagnosis reflects an additional pathophysiologic mechanism, in which a certain antibody load is required for the development of AAV. This hypothesis is supported by an animal model of AAV in which the percentage of immunised rats developing crescentic glomerulonephritis depended on the administered anti-MPO load and was 46%, 64%, and 100% in the groups receiving 400, 800, and 1600 mg/kg anti-MPO, respectively. [30] As previously mentioned, in clinical practice the diagnosis of AAV is often neither biopsy proven nor standardised by a diagnostic scoring system. [5,9,10] In our AAV population, the diagnosis was biopsy proven in 45%. A kidney biopsy was performed in 29%, which is comparable with the 25% of patients in the Rituximab in ANCA-Associated Vasculitis (RAVE) trial. [5] In patients without biopsy proven AAV, the diagnosis was based on ANCA serology combined with clinical features and expert opinion. A current initiative by the ACR and the European League Against Rheumatism is developing and validating diagnostic and classification criteria for primary systemic vasculitis, which could be helpful in clinical practice and in future trials. [7] Our results suggest that the ANCA titre and the number of affected organ systems could be considered in the development of future diagnostic classification systems. Future, prospective studies should also consider novel tools in the diagnosis of vasculitis. For example, imaging: 18F-fluorodeoxyglucose positron emission tomography with computed tomography has been shown to identify organ localizations of GPA at presentation. [31,32] Its value in the diagnosis of AAV needs to be further addressed. Biomarkers such as C3a, C5a and IL-18BP in blood and MCP-1 and C5a in urine samples have been shown to be of value in discriminating between active and inactive disease. [33] Further investigations should confirm their reliability in predicting a clinical diagnosis of AAV. An important strength of this study was that our cohort was complete. The Northwest Clinics is connected to a large laboratory that is the only laboratory performing ANCA tests in the region. Besides that, the quality of the medical records was high and data from only two patients were missing.
Therefore, it is likely that our cohort is representative of the entire ANCA positive population in the region, which has approximately 470,000 inhabitants. A limitation of our study was that our search strategy neglected ANCA negative AAV patients, who may account for approximately 10% of the MPA and GPA population and up to approximately 70% of EGPA patients. [34,35] However, data suggest that ANCA negative patients represent clinically different subtypes and should, therefore, be studied separately. [36,37] We used the clinical diagnosis of AAV as a gold standard, since there is no accurate diagnostic system available, which could potentially leave room for subjectivity. Nevertheless, with a median follow-up of 5.8 years we were able to record a reliable final diagnosis over time. Furthermore, the statistical model was repeated with the biopsy proven AAV patients, after which it remained largely unchanged. Finally, because of the retrospective nature of this study we were unable to record a classifying diagnosis of AAV: GPA, MPA or EGPA. Our population consisted of slightly more anti-PR3 positive patients, often with ENT symptoms, possibly indicating a trend towards GPA. In conclusion, we demonstrated that ANCA can be positive in a variety of diseases that mimic AAV. In ANCA positive patients in a teaching hospital in The Netherlands, there was a strong association between the ANCA titre and a clinical diagnosis of AAV. The ANCA titre and the number of affected organ systems could be considered as diagnostic markers for AAV in clinical practice.
Table 4. Multivariable logistic regression analysis of factors related to a clinical diagnosis of ANCA-associated vasculitis in ANCA positive patients (c-statistic 0.88).
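The multivariable model behind Table 4 used backward elimination within 50 bootstrap resamples, as described in the Methods; the sketch below shows one plausible implementation of that selection strategy with statsmodels, on hypothetical data (variable names, data, and the >50% retention rule are illustrative assumptions, not the study's actual code).

```python
# Bootstrap-backed backward elimination for a logistic model (sketch).
import numpy as np
import pandas as pd
import statsmodels.api as sm

def backward_eliminate(X: pd.DataFrame, y: pd.Series, alpha: float = 0.05):
    """Drop the least significant predictor until all p-values are < alpha."""
    cols = list(X.columns)
    while cols:
        fit = sm.Logit(y, sm.add_constant(X[cols])).fit(disp=0)
        pvals = fit.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] <= alpha:           # every remaining term significant
            return cols
        cols.remove(worst)
    return cols

def bootstrap_selection(X: pd.DataFrame, y: pd.Series, n_boot: int = 50):
    rng = np.random.default_rng(0)
    counts = {c: 0 for c in X.columns}
    for _ in range(n_boot):
        idx = rng.integers(0, len(X), len(X))    # resample with replacement
        Xb = X.iloc[idx].reset_index(drop=True)
        yb = y.iloc[idx].reset_index(drop=True)
        for c in backward_eliminate(Xb, yb):
            counts[c] += 1
    # e.g. keep predictors selected in a majority of resamples; the paper
    # then applies a shrinkage factor to the final coefficients to correct
    # for optimism.
    return counts
```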
MRI evaluation of tibial tunnel wall cortical bone formation after platelet-rich plasma applied during anterior cruciate ligament reconstruction Background After anterior cruciate ligament (ACL) reconstruction, formation of cortical sclerotic bone encircling the femoral and tibial tunnel is a part of intratunnel graft healing. During the physiological cascades of soft tissue healing and bone growth, cellular and hormonal factors play an important role. The purpose of this study was to non-invasively but quantitatively assess the effect of intraoperatively applied platelet-rich plasma (PRP) on the formation of cortical bone encircling the tibial tunnel. Patients and methods In fifty patients, standard arthroscopic ACL reconstructions were performed. The PRP group (n = 25) received a local application of PRP while the control group (n = 25) did not receive PRP. The proximal tibial tunnel was examined by MRI in the paraxial plane, where the portion of the tibial tunnel wall circumference consisting of sclerotic cortical bone was assessed at one, two and a half, and six months after surgery. Results At one month after surgery, differences between the groups in the amount of cortical sclerotic bone encircling the tunnel were not significant (p = 0.928). At two and a half months, the sclerotic portion of the tunnel wall in the PRP group (36.2%) was significantly larger than in the control (22.5%) group (p = 0.004). At six months, the portion of sclerotic bone in the PRP group (67.1%) was also significantly larger than in the control (53.5%) group (p = 0.003). Conclusions Enhanced cortical bone formation encircling the tibial tunnel at 2.5 and 6 months after ACL graft reconstruction results from locally applied platelet-rich plasma. Introduction After anterior cruciate ligament (ACL) reconstruction, two biological mechanisms take place: ligamentization of the intra-articular part of the graft and healing in the bone tunnel. [1][2][3][4][5] Among other processes, in the chronic reparative phase of intratunnel graft healing, chondrification, neo-ossification and proliferative osteoblastic activity are present at about 6 weeks after reconstruction 3 with the formation of new cortical bone, creating the tunnel wall. During the physiological cascades of soft tissue healing and bone growth, cellular and hormonal factors play an important role, the most important among them being various growth factors (GF). 5,6 These proteins have a positive effect on fibroblast proliferation and the synthesis of extracellular matrix proteins and therefore on the enhancement of tissue healing. [7][8][9] Platelet-derived growth factors (PDGFs) particularly enhance the graft incorporation process. 9 Platelet-rich plasma (PRP), defined as a portion of the plasma fraction of autologous blood having a platelet concentration above the baseline, contains an autologous concentration of platelets and growth factors. 9,10 PRP can be activated with thrombin to create platelet-rich plasma gel. The role of local application of various GF in ACL reconstruction has been analysed by previous studies in animals, where their effect has been evaluated with histological findings or biomechanical tests. [11][12][13][14] Because of its excellent contrast resolution, 15,16 MRI is a feasible method for demonstrating osteosclerosis, as cortical bone is hypointense on all pulse sequences.
To the best of our knowledge, no radiological research has yet evaluated the effect of PRP on the formation of cortical bone encircling the tibial tunnel after ACL reconstruction in humans. We found only one study assessing cortication of the tunnel wall; however, only its presence was evaluated, with CT, in a comparison of two different graft fixation screws. 17 The enhancing effect of PRP and bone GF on bone formation was demonstrated by histological studies in animal models as well as in periodontology. [18][19][20][21][22][23][24][25] Therefore, the aim of the presented study was to assess the effect of intraoperatively locally applied PRP, using MRI for quantitative but noninvasive measurement of the cortical sclerotic bone formation encircling the tibial tunnel. We hypothesized that PRP promotes tunnel wall cortical bone (TCB) formation that can be quantitatively assessed by MRI. Patient selection, surgical technique and platelet gel preparation The study was designed as a 6-month, single-centre trial, approved by the national ethics committee, and was carried out in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki. All patients gave their written informed consent for participation in the study. The 50 patients included in the study, aged between 18 and 50, were treated for ACL rupture. The main indication for reconstruction was a symptomatic, unstable knee joint due to ACL rupture, assessed by the orthopaedic surgeon. All patients with inflammatory diseases, diabetes mellitus, advanced knee osteoarthrosis (3rd and 4th degree), previous knee surgery (osteotomies, reconstructive ligament and meniscal procedures and treatment of chondral lesions), malignant diseases, allergy to the contrast media, renal diseases and thrombocytopenia were excluded from the study. All patients undergoing arthroscopic ACL reconstruction were randomized into 2 groups: the PRP group comprised 25 patients (15 men, 10 women) and the control group 25 patients (16 men, 9 women). All procedures were performed by the same orthopaedic surgeon. In all cases, the standard arthroscopic reconstructive procedure, using the single-incision technique with a double-looped semitendinosus and gracilis tendon graft, was performed. The tunnel sizes in the tibia and femur were matched to the cross-sectional size of the graft and measured 7-9 mm in diameter. The graft was inserted antegrade via the tibial and femoral tunnel and fixed with 2 bioabsorbable cross pins (3.2 mm, DePuy Mitek, Massachusetts, USA) in the femoral tunnel and with one bioabsorbable interference screw (8-10 mm, DePuy Mitek, Massachusetts, USA) in the tibial tunnel. The patients in the platelet group received a local application of PRP, which the patients in the control group did not receive. PRP preparation was similar to that previously described. 26,27 During the surgical procedure, autologous blood was obtained and centrifuged. The fraction of PRP was then mixed with activated autologous human thrombin and applied after autograft positioning into the femoral and tibial tunnels (1 ml in each of them), as well as onto the graft itself (3 ml), where the autologous PRP gel was formed. An interference screw was inserted after PRP application. All patients were blinded to the treatment with PRP. Both groups followed the same standard rehabilitation protocol.
Radiological assessment The MRI was performed in proton density (PD) sequence with fat suppression (TR 2900 ms, TE 22 ms, 3 NEX, matrix 320x224, FOV 200, slice thickness 3 mm, 1 mm spacing). The tibial tunnel was examined in the paraxial plane, perpendicular to the tunnel axis. The tibial tunnel is more easily identified than the femoral tunnel on scout images, which facilitates planning and analysis. Examinations were performed one, two and a half and six months after the ACL reconstruction. At the first examination one month after reconstruction, the slice with the most pronounced TCB between the tibial plateau and the tip of the interference screw, where images were free of volume averaging from the plateau and artefacts from the screw, was chosen for the analysis. In each patient, the same slice was chosen in follow-up assessments. TCB was defined as a clearly hypointense rim of the tibial tunnel wall that was at least 1 mm thick, assessed on paraxial slices. The portion of the tunnel wall circumference consisting of TCB was assessed by the consensus of the radiologist (M.R.) and the orthopaedic surgeon (M.V.), rounded off to ten percent (Figure 1). The examiners were blinded to group assignment but not to the time of the procedure. Statistical analysis Numerical data are presented as mean values, while categorical data are expressed as proportions. Differences in TCB between the platelet and control groups were analysed using the Mann-Whitney test. Changes in TCB from the first to the final examination after ACL reconstruction were analysed by the Friedman two-way analysis of variance. As the post-hoc tests, Wilcoxon's signed ranks tests with Keppel's modification of the Bonferroni correction of alpha were used. A P value less than 0.05 was taken to represent statistical significance. Data were analysed using PASW 18 software (SPSS Inc., Chicago, IL, USA). Results 20 patients from the control group and 21 patients from the PRP group were available for the follow-up and were analysed, while 9 patients from the initial group were lost to follow-up. A comparison of the preoperative parameters of the remaining patients in the PRP and the control groups showed that both groups were comparable in gender, age, injury site and body mass index (Table 1). We observed a gradual increase in the percentage of the tunnel wall consisting of TCB during the follow-up (Table 2). In each group, post-hoc comparisons showed a significant increase in average TCB between the examination time points.
FIGURE 1. Proton-density weighted fat-suppressed paraxial images just below the tibial plateau from the same patient one (A), two and a half (B) and six (C) months after reconstruction. Because of the perpendicular orientation of the slices, the cross section of the tibial tunnel was as a rule circular. At the first month (A), only a small part of the tunnel wall is sclerotic (estimated at 10%, arrow). At two and a half months (B), about 20% of the tunnel wall is sclerotic. At six months (C), a thick sclerotic rim encircles an estimated 90% of the tunnel. Note also some high signal intensity surrounding the tunnel, representing oedema, which also decreased during the follow-up.
At the first postoperative month we found only a small amount of TCB in each group, with a nonsignificant difference between the groups (p = 0.928). At two and a half months as well as at six months after surgery, the mean percentage of TCB (Table 2) was significantly higher in the PRP group than in the control group (36.2 vs 22.5 and 67.1 vs 53.5, p = 0.004 and p = 0.003, respectively).
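The test battery described in the Statistical analysis section maps directly onto SciPy; the sketch below runs the same comparisons on simulated per-patient TCB percentages (the data, group means, and seed are hypothetical, and a plain Bonferroni-corrected alpha stands in for the Keppel modification used in the paper).

```python
# Statistical comparisons for TCB percentages at 1, 2.5 and 6 months (sketch).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated TCB (% of tunnel circumference) per patient at the 3 time points
prp  = np.clip(rng.normal([8, 36, 67], 10, size=(21, 3)), 0, 100)
ctrl = np.clip(rng.normal([8, 22, 53], 10, size=(20, 3)), 0, 100)

# Between-group comparison at each time point (Mann-Whitney U test)
for t, month in enumerate(["1", "2.5", "6"]):
    _, p = stats.mannwhitneyu(prp[:, t], ctrl[:, t])
    print(f"month {month}: p = {p:.4f}")

# Within-group change over time (Friedman), then pairwise Wilcoxon post-hoc
# tests against a corrected alpha for the three pairwise comparisons
_, p_friedman = stats.friedmanchisquare(prp[:, 0], prp[:, 1], prp[:, 2])
alpha_corrected = 0.05 / 3
_, p_pair = stats.wilcoxon(prp[:, 0], prp[:, 1])
print(p_friedman, p_pair < alpha_corrected)
```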
Discussion For ethical reasons, histological evaluation of ACL graft incorporation in humans is impossible, particularly in the bone tunnels. However, MRI is a method of choice for the evaluation of the knee. 28 Graft healing in the tibial tunnel starts immediately after the operation as an acute inflammatory response, with oedema, neutrophils and recruited macrophages present in the tendon-bone interface as early as 4 days after surgery. 2,3 At 3 weeks, small vessels appear along with an increased number of osteoblasts on the bone surface. After 6 weeks, the vessels decrease in number and there is shield-like new bone formation surrounding the graft as well as an increased number of collagen fibers integrating along the tendon. 6 Sharpey-like fibers, which anchor the fibroproliferative process to the bone, were found at 6 to 12 weeks after reconstruction in a dog model, followed by progressive bone ingrowth. 2,3 The tissue maturation process is finished at about 26 weeks, although this differs among various animals and also between different types of grafts. 1,2,4 In the intratunnel graft healing process, incorporation progresses through the formation of a new matrix at the tendon-bone interface. Proliferation of new bone trabeculae along the edge of the tunnel is seen as early as three weeks after surgery. 6 This pattern is not uniform, as some areas exhibit a cartilaginous interface between tendon and bone. The zone of fibrocartilage may persist and represents a form of direct healing by tissue which may undergo enchondral bone formation. 6 This histological evidence is consistent with our observation of focal areas of sclerosis, representing the TCB, which expanded and finally fused during the follow-up period in each group. Although, on average, two thirds of the tibial wall circumference were sclerotic in the PRP group at six months, we did not detect complete TCB formation surrounding the entire tunnel in any patient. This could be the consequence of a relatively short follow-up period and is in accordance with the observation of increased osteoblastic activity up to two years after reconstruction. 4 Several reports of the enhancing effect of PDGF and various other GFs on bone proliferation exist, particularly in periodontology. [11][12][13][14][15][16][17][18][19][20][21][22][23][24][25] These factors have been shown to enhance murine osteoblast activity and proliferation in vitro. 23 In vivo, 2-fold increases in uptake of the bone-seeking radiopharmaceutical Technetium 99-MDP, as well as histologically up to a 10-fold increase in new bone and cementum, were found in dogs. 21 Locally added PRP considerably accelerated bone healing after mandibular reconstruction in goats. 25 In humans, a significant increase in alveolar bone formation in the process of periodontal regeneration was demonstrated as an effect of locally applied PDGF and insulin-like GF. 22,24 Despite the lack of radiological evidence in humans, assessment with MRI in sagittal and coronal slices demonstrated significantly more periosteal and intratunnel bone formation as an effect of a locally applied combination of bone GFs during ACL reconstruction in rabbits. This was histologically confirmed by more extensive new bone trabeculae and cartilage formation at the tendon-bone interface at two and eight weeks after reconstruction and generally more mature tissue at the graft-tunnel interface.
18 In addition, other histological studies in animal models also showed more new bone formation at this interface as an effect of local application of bone morphogenetic protein (BMP)-7 and BMP-2 up to eight weeks after reconstruction. 19,20 Moreover, all of the above-mentioned studies also found higher tensile strength in the bone GF groups. We did not detect differences in the TCB between the groups at one month, which is not in agreement with the above-mentioned animal studies, possibly because of the higher sensitivity of histological analysis, which may detect small changes in the TCB in the first weeks after surgery, when changes are too subtle for observation with MRI. The assessment of joint or even graft stability was not a part of the study protocol, which could represent a limitation. However, increased anterior knee stability resulting from PDGF or bone GF has been demonstrated in various studies of humans and animals, [18][19][20]27 although the clinical effect of PRP is still debated. 26,29 There are several other limitations of the study. We did not evaluate the femoral tunnel because such an examination would have to be performed in a different plane and would therefore significantly prolong the examination. In the tibial tunnel, the analysis of additional neighbouring slices could be more accurate, but also time-consuming; therefore, in each patient the same slice was meticulously selected for analysis in all follow-up examinations. Owing to its superior spatial resolution with even thinner slices, CT examination could possibly be more accurate, but at the cost of significant radiation exposure, important particularly in this young study population. We did not perform preoperative MRI, nor did we evaluate lower degrees of osteoarthrosis, concurrent procedures during arthroscopy, injury chronicity, or the possible intake of drugs such as NSAIDs and corticosteroids. All these factors could influence the inflammatory response in the healing process, although this influence is rather questionable in the intratunnel region compared with intra-articular inflammation. In conclusion, the results of our study demonstrate that formation of focal areas of sclerotic cortical bone with subsequent fusion into a thick tibial tunnel wall is a part of the ACL graft incorporation process that can be quantitatively assessed by MRI. Furthermore, we observed that local application of PRP results in enhanced cortical bone formation encircling the tibial tunnel at 2.5 and 6 months but not 1 month after ACL reconstruction.
Utilizing nanopore sequencing technology for the rapid and comprehensive characterization of eleven HLA loci; addressing the need for deceased donor expedited HLA typing The comprehensive characterization of human leukocyte antigen (HLA) genomic sequences remains a challenging problem. Despite the significant advantages of next-generation sequencing (NGS) in the field of Immunogenetics, there has yet to be a single solution for unambiguous, accurate, simple, cost-effective, and timely genotyping necessary for all clinical applications. This report demonstrates the benefits of nanopore sequencing introduced by Oxford Nanopore Technologies (ONT) for HLA genotyping. Samples (n = 120) previously characterized at high-resolution three-field (HR-3F) for 11 loci were assessed using ONT sequencing paired to a single-plex PCR protocol (Holotype) and to two multiplex protocols, OmniType (Omixon) and NGSgo®-MX6-1 (GenDx). The results demonstrate the potential of nanopore sequencing for delivering accurate HR-3F typing with a simple, rapid, and cost-effective protocol. The protocol is applicable to time-sensitive applications, such as deceased donor typings, enabling better assessments of compatibility and epitope analysis. The technology also allows significantly shorter turnaround time for multiple samples at a lower cost. Overall, the nanopore technology appears to offer a significant advancement over current next-generation sequencing platforms as a single solution for all HLA genotyping needs. Sequencing each HLA gene as a single amplicon proved to be a more effective and meaningful way of using NGS to characterize HLA polymorphisms, further revealing its potential for HLA typing [25]. The community has since adopted NGS of targeted long PCR amplicons, including most of the genomic sequences of each HLA locus, as the method of choice for HLA typing; almost all commercial protocols are based on this approach. However, while available NGS platforms (Illumina and Ion Torrent) can provide phasing of fragments ranging from 400 to 900 bp, they are unable to sequence very long fragments (3-15 kb) as a continuum or as a single read. This limitation hinders the potential of this technology as the number of HLA alleles continues to increase, and the inability to phase distal polymorphic positions results in ambiguities. Additionally, a longer turnaround time (2-3 days) and a need to queue multiple samples per run, for cost effectiveness and capacity utilization, preclude the use of this method for time-sensitive applications such as deceased donor HLA genotyping [9] or for low throughput needs. The PacBio platform offered a better solution than short-read platforms, enabling the sequencing of longer fragments like intact HLA amplicons. Even though the platform was characterized by a high error rate in the past, it has most recently been reported that generating circular consensus sequences yields high accuracy, upwards of Phred 30 [26]. However, primarily because of its cost, size, maintenance requirements, and the lack of a commercial HLA typing kit with a comprehensive software solution, the platform has not been adopted widely by clinical laboratories. Its utilization, nevertheless, in a hematopoietic cell transplantation study has demonstrated the benefits of the complete genomic characterization of HLAs [27]. The search continues for alternative platforms offering accurate and simple sequencing of a single 3-15 kb molecule as a single read, while remaining cost-effective.
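The Phred scale cited above relates a quality score Q to a base-call error probability p via Q = -10 log10(p), so Phred 30 corresponds to a 0.1% error rate (99.9% per-base accuracy). A quick worked illustration:

```python
# Converting between Phred quality scores and error probability / accuracy.
import math

def phred_to_error(q: float) -> float:
    """Error probability implied by a Phred score: p = 10^(-Q/10)."""
    return 10 ** (-q / 10)

def accuracy_to_phred(acc: float) -> float:
    """Phred score implied by a per-base accuracy."""
    return -10 * math.log10(1 - acc)

print(phred_to_error(30))          # 0.001, i.e. 99.9% per-base accuracy
print(accuracy_to_phred(0.9307))   # ~11.6, i.e. the ~93% raw-read accuracy
                                   # reported for the MinION runs below
```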
One such advancement within the last several years is DNA sequencing through nanopores, as introduced by Oxford Nanopore Technologies (ONT) on the MinION platform. This portable and relatively inexpensive device generates long sequence reads in real-time as molecules of single-stranded DNA or RNA pass through protein nanopores, resulting in characteristic disruptions in ion current. While earlier versions of this technology had high error rates that made it clinically unsuitable, recent improvements have elevated the platform to the point that a reevaluation is warranted. Earlier reported efforts to utilize the nanopore technology for HLA genotyping fell short of providing a comprehensive solution for characterizing all loci within a time frame suitable for all of our clinical needs [28]. Most recently, however, De Santis et al., using a combination of a multiplex PCR and ONT, reported successful characterization of all 11 HLA loci [29]. Our report introduces nanopore technology as a viable alternative to existing methods for reliable clinical HLA typing of all eleven loci within a significantly shorter time. Single samples can be genotyped in less than six hours using the Flongle adapter for the MinION device, and multiple indexed samples, in sets of 24, can be analyzed using a single MinION flow cell in less than 24 h. This significant advancement presents new opportunities and a step towards a complete solution to the challenge of HLA typing. Sample selection Assessing ONT (Oxford, UK) for HLA typing required the selection of diverse samples. Samples were selected to include alleles for HLA-A, −B, −C, −DQB1, −DRB1 and −DRB3/4/5 such that their frequencies would cumulatively comprise greater than 95% of the Caucasian and African American populations in the United States, as shown in Supplemental Table 1a-g, which includes alleles and their relevant frequencies from five populations [30]. Several samples were included that present genotyping challenges for different loci. A total of 120 samples were identified. All samples had been previously HLA genotyped at 11 loci by NGS using Omixon Holotype V2 kits (Omixon, Budapest, Hungary) on the Illumina MiSeq (Illumina, San Diego, CA, USA) and reported at 3-field resolution. All samples were reanalyzed with NGSengine (GenDx, Utrecht, Netherlands) version 2.16.2 using the IMGT/HLA database v. 3.38 to minimize any discrepancies that could occur due to differences in the IMGT/HLA database version. This MiSeq-typed dataset served as the reference HLA genotyping. DNA preparation DNA was extracted from blood using the Qiagen EZ1® DNA extraction platform with Qiagen EZ1® Blood 350 Kits (Qiagen, Hilden, Germany) for the majority of the samples included in these experiments, n = 93. These samples are a collection of clinical samples, included after institutional approval, and an internal reference panel. DNA samples from the Coriell Institute (n = 12; Coriell Cell Repositories, Camden, NJ, USA) and from the International Histocompatibility Working Group (n = 6; Fred Hutchinson, Seattle, WA, USA) were also used. The remainder of the samples (n = 9) were from four African populations (Ethiopia, Tanzania, Cameroon and Botswana) with challenging HLA typings. These particular samples had loci with a high degree of variation that is quite distinct from what is present in the current version of the IMGT/HLA database and includes unpublished novel alleles.
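One plausible way to compute the cumulative population coverage used in the sample selection above is to sum the frequencies of the selected alleles at a locus and, assuming Hardy-Weinberg proportions, square that sum to estimate how many genotypes carry only selected alleles; the paper does not state its exact calculation, and the allele frequencies below are purely illustrative.

```python
# Illustrative coverage estimate for selected alleles at one locus.
# Frequencies are made-up placeholders, not values from Supplemental Table 1.
selected_freqs = {"A*02:01": 0.27, "A*01:01": 0.15, "A*03:01": 0.13}

allele_coverage = sum(selected_freqs.values())   # chance a random chromosome
                                                 # carries a selected allele
genotype_coverage = allele_coverage ** 2         # both alleles covered,
                                                 # assuming Hardy-Weinberg
print(f"{allele_coverage:.0%} of alleles, "
      f"{genotype_coverage:.0%} of genotypes covered")
```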
Written informed consent was obtained from all participants, and research/ethics approval and permits were obtained from the following institutions prior to sample collection: COSTECH, NIMR and Muhimbili University of Health and Allied Sciences in Dar es Salaam, Tanzania; the University of Botswana and the Ministry of Health in Gaborone, Botswana; the University of Addis Ababa and the Federal Democratic Republic of Ethiopia Ministry of Science and Technology National Health Research Ethics Review Committee; the Cameroonian National Ethics Committee and the Cameroonian Ministry of Public Health. DNA concentration was quantified with the Qubit BR assay (ThermoFisher, Waltham, MA, USA). Sample concentration was adjusted to that suggested by the manufacturers' PCR protocols, when possible. PCR conditions utilized for the project Three different PCR protocols were used. The set of 120 samples, sequenced on the ONT MinION platform with R9.4 flow cells in five sets of 24, was amplified using the Holotype V2 protocol by Omixon, whereby 11 loci were amplified individually and pooled before library preparation and sequencing. Omixon primers amplify the entire gene for the HLA-A, B, C, DQA1, DQB1, and DPA1 loci, with priming sites in the UTR regions. DRB1/3/4/5 amplicons include exons 2, 3, and 4, and DPB1 is amplified from exon 2 through the entire 3′UTR. The manufacturer's procedure for amplification, quantitation, and pooling was performed on a Hamilton STARlet (Hamilton Robotics, Reno, NV, USA). Thirty-five μl of the diluted amplicon pools were treated with 4 μl of ExoSAP Express (ThermoFisher), with incubation for 4 min at 37 °C followed by 1 min at 80 °C. The other two protocols were rapid multiplexed PCR protocols from Omixon and GenDx. Nine samples were amplified with Omixon OmniType kits according to the manufacturer's PCR procedure and a different set of 9 samples was amplified with the NGSgo®-MX6-1 PCR (GenDx). All 18 samples were a subset of the 120 samples used for this study. The OmniType protocol amplified all eleven major HLA loci in 2 h and 10 min in a single tube using the same primers as the Holotype kit. The PCR product was then treated with 3 μl of ExoSAP Express as described above. The GenDx NGSgo®-MX6-1 PCR amplified A, B, C, DRB1, DQB1, and DPB1 in 3 h and 15 min in a single tube. The NGSgo®-MX6-1 primers amplify the entire gene for HLA-A, B and C, whereas the Class II amplifications all start upstream of exon 2 and extend through exon 3 (DRB1 and DQB1) or exon 5 (DPB1). The amplifications, performed according to the manufacturer's procedure, had a total reaction volume of 10 μl. These were performed in duplicate to ensure 800 ng of starting material for sequencing. Each 10 μl reaction was treated with 1 μl of ExoSAP Express and incubated as described above. The eighteen samples amplified with these multiplexed protocols were sequenced individually on ONT Flongles (R9.4), adaptors for the MinION that enable DNA sequencing on smaller, single-use flow cells. Library preparation Library preparation for MinION and Flongle libraries used gentle mixing of reagents at each step to avoid shearing the amplicons. MinION: Each set of 24 indexed libraries was prepared for MinION sequencing with the ONT 1D Native barcoding DNA procedure v109_revH with EXP-NBD104, EXP-NBD114, and SQK-LSK109 kits, with minor revisions to the procedure. Briefly, 2 μg of sample was treated with NEBNext End Repair/dA-tailing Module reagents (NEB, Ipswich, MA, USA), substituting water for NEBNext FFPE Repair reagents.
Native barcodes (ONT NBD1-24) were ligated to 700 ng of dA-tailed amplicons with NEB Blunt/TA Ligase Master Mix. Samples were then pooled approximately equimolar. ONT sequencing adapters from the Barcoding Expansion kit were ligated to 900 ng of the pooled library using NEBNext Quick Ligation Reaction Buffer and T4 Ligase (NEB). Libraries were cleaned after each enzymatic reaction with AMPure XP beads (Beckman Coulter, Indianapolis, IN), using the ONT Long Fragment Buffer for wash steps after sequencing adapters were ligated. Libraries were quantitated after each cleanup step with the Qubit BR assay.

Flongle
Libraries for eighteen individual multiplexed samples were prepared for sequencing on Flongles using the ONT 1D Genomic DNA by Ligation protocol v109_revL, starting at the End-Prep step, with SQK-LSK109 kits, with minor revisions to the procedure. Briefly, 800 ng of sample was dA-tailed with NEBNext End Repair/dA-tailing Module reagents (NEB), substituting water for NEBNext FFPE Repair reagents, and cleaned with AMPure XP beads. ONT sequencing adapters were then ligated to the amplicons using ONT Ligation Buffer with NEB Quick T4 ligase and cleaned again with AMPure XP beads, using the ONT Long Fragment Buffer for the wash steps. Libraries were quantitated after each cleanup step with the Qubit BR assay. The Flongle library preparation process takes approximately 1 h and 45 min.

Sequencing
The number of active pores on each sensory array was assessed before loading libraries on flow cells for each sequencing run to confirm the flow cells met the manufacturer's quality control metrics.

MinION
Indexed libraries were loaded onto the MinION SpotON flow cells following the ONT procedure with reagents from SQK-LSK109 and EXP-FLP002 kits. To summarize, flow cells were primed with ONT Flush Buffer and Flush Tether reagents before loading library mixed with ONT Loading Beads and Sequencing Buffer in a dropwise fashion onto the SpotON sample port. After each run was terminated, MinION flow cells were washed according to the manufacturer's protocol v1_revB with ONT Flow Cell Wash kits (EXP-WSH003). In these experiments, flow cells were re-used a maximum of four times.

Flongle
Individual sample libraries were loaded onto the Flongle flow cells following the ONT procedure with reagents from SQK-LSK109 and EXP-FLP002 kits. To summarize, flow cells were primed with ONT Flush Buffer and Flush Tether reagents applied directly to the SpotON port before loading approximately 20 fmol of library mixed with ONT Loading Beads and Sequencing Buffer pipetted directly onto the SpotON sample port.

Data analysis
ONT raw signal data were basecalled and demultiplexed using Guppy (ONT software v3.4.3). Fastq files were then analyzed with a custom pipeline available at http://nanoporehla.chop.edu. The fasta output of the web application was then submitted to NGSengine® to determine the HLA genotyping (GenDx, V2.16.2). In NGSengine, the following parameters were selected: the sequencing platform type was set to PacBio-Consensus; for Holotype and OmniType, the amplicon region was set to 'auto' for all loci except for DRB1, which was set to 'DRB1 All Exon'. Regions outside of the amplicon, including primers, were added to the 'Ignored Regions' list. For the NGSgo®-MX6-1 amplicons, the default parameters corresponding to this PCR protocol were chosen for analysis. The error rate of sequencing was calculated based upon the reads that were selected for genotyping and includes mismatches, insertions and deletions.
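The error-rate calculation just described reduces to simple bookkeeping over the alignments of the reads retained for genotyping. The following Python sketch shows one way such a percent-error figure could be computed; the ReadAlignment container, its field names, and the example counts are illustrative assumptions, not part of the authors' pipeline, which derives these quantities from its own read alignments.

```python
from dataclasses import dataclass

@dataclass
class ReadAlignment:
    """Alignment summary for one read against its reference allele.
    In practice these counts would be parsed from BAM/CIGAR records."""
    matches: int
    mismatches: int
    insertions: int
    deletions: int

def percent_error(alignments: list[ReadAlignment]) -> float:
    """Percent error over all reads selected for genotyping:
    (mismatches + insertions + deletions) / total aligned bases * 100."""
    errors = sum(a.mismatches + a.insertions + a.deletions for a in alignments)
    total = sum(a.matches + a.mismatches + a.insertions + a.deletions
                for a in alignments)
    return 100.0 * errors / total if total else 0.0

# Three hypothetical nanopore reads spanning an HLA amplicon
reads = [ReadAlignment(3200, 150, 40, 60),
         ReadAlignment(3150, 170, 55, 45),
         ReadAlignment(3300, 140, 35, 50)]
print(f"error rate: {percent_error(reads):.2f}%")  # 7.17% for these counts
```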
Sequencing strategy
The objective of the study was two-fold: 1) evaluate ONT sequencing technology for HLA typing, taking advantage of the long reads to eliminate ambiguities, and assess other aspects such as sequencing accuracy and cost; 2) evaluate pairing this nanopore sequencing technology with a multiplexed PCR protocol amplifying all 11 HLA genes that would provide high-resolution two-field (HR-2F) HLA typing in less than six hours. This shorter turn-around time would enable HLA HR-2F typing of deceased donors and potentially optimize compatibility assessments.

Objective 1
Our current clinical HLA typing protocol uses the Holotype V2 kit for PCR, where each locus is amplified separately; this approach was used for the first objective. The intent was to assess the post-PCR process, including library preparation, sequencing, and analysis, without interference from the targeting and amplification of the HLA genes. Our goal was to minimize unknown variables and complexities introduced by alternative PCR protocols that would obscure an independent and objective assessment of the analysis pipeline and genotyping. All MinION experiments utilized the maximum number of barcodes available and were performed as sets of 24 samples.

Library preparation and sequencing; metrics of relevance
A summary of the timeline for post-PCR library preparation and sequencing steps on the MinION platform is found in Table 1. Library preparation steps take approximately 6.5 h after PCR amplification. Depending on the amount of data desired, sequencing and analysis take between 14 and 28 h, which is largely dictated by the amount of time spent sequencing. Generally, no more than 4-6 h are necessary to collect enough data to genotype 24 samples at 11 HLA loci. Given the 6.5 h for library prep, sequencing was often run overnight and terminated the following morning for convenience, generating a surplus of data. The total time for the entire post-PCR process ranged from 20 to 34 h. Using R9.4 flow cells, the current stable version, five experiments were performed, sequencing 120 diverse samples (Supplemental Table 1). The amplicons ranged from 2.7 kb (HLA-B) to 6.6 kb (HLA-DPB1), with amplicons from each sample pooled and barcoded. The number of active pores varied by flow cell, ranging from 1,225 to 1,714, impacting the total data generated for each experiment (Table 2). When washed as recommended with the most recent ONT wash kit (EXP-WSH003), a flow cell can be used for multiple sequencing experiments with minimal carryover of intact full-length amplicons from the previous experiment(s). Approximately 10% of active pores are lost with each successive experiment, allowing for 3-4 experiments per flow cell. Accuracy after four experiments decreased minimally, by 0.48%, when compared to the first experiment (experiment 1 = 93.07%; experiment 4 = 92.59%). Sequencing ranged from 4.6 to 18.4 h (Table 2). Although run time was variable, it was only necessary to analyze 800,000 reads per run, which were collected in the first 3.5-4.5 h, to obtain reliable genotypes. After demultiplexing, an average of approximately 30,000 reads per sample was identified. While there was variation in the overall representation of each barcoded sample, in general this variation was not extreme (Fig. 1). The third sequencing experiment (Set 3) had two technical failures; these samples were repeated independently and successfully genotyped. Approximately 15,000 high-quality reads were used for genotyping.
The majority of reads excluded from analysis were excluded 1) for low alignment score, often short reads not spanning the entire amplicon as a result of incomplete amplification or DNA breakage during the experiment, or 2) because the read did not map to an HLA gene of interest. Library preparation is initiated with quantitation of the amplicons, followed by coarse adjustment of amplicon concentration and amplicon pooling so that the samples could be barcoded for identification. Variation in the locus balance is expected, since amplicons are not pooled to exact concentrations (Fig. 2A and B). After accounting for only high-quality reads, on average approximately 1,570 reads per locus were used for genotyping (Table 2).

Genotypic analysis
A consensus sequence was determined for each allele sequenced on the ONT MinION platform. In brief, on a per-locus basis, a fully characterized allele (or alleles), as defined in the IMGT/HLA database, with minimal differences to the ONT reads is chosen as the reference(s) for alignment; the ONT reads are aligned to the reference allele(s), and variants are then called to produce a consensus sequence for each allele. Genotyping is performed using GenDx NGSengine® as described in the methods. The ONT-based HLA genotypes were compared to the reference genotypes generated by the Holotype V2 protocol using the Illumina MiSeq platform (HR-3F) (Supplemental Table 2). The performance metric summary for the ONT method is found in Table 3, and the specific calculations can be found in Supplemental Tables 3a-3i. Overall, the method is highly accurate at 99.98%, with a sensitivity of 99.63% and specificity of 99.99%. Genotyping at HLA-A, B, C, DPA1, DPB1, DQA1, DRB3 and DRB5 had 100% accuracy. For the alleles that did not match the known reference genotype, the majority of the problems were related to the amplification step. In DQB1, there were four differences from known typings. In one sample, the DQB1 locus failed to amplify, causing 2 allele discrepancies. In two samples, there was allele dropout of DQB1*03:01:01 when in combination with DQB1*06:01:01 and DQB1*06:03:01, where the DQB1*03:01:01 was detected at 3.29% and 0.64% depth of coverage, respectively. Additionally, three samples failed to amplify the DRB4*01:01:01 allele. In two cases, only 1 allele was expected at DRB4, and no genotype was called for these samples. A third sample genotyped homozygous DRB4*01:03:01 instead of heterozygous DRB4*01:01:01 with DRB4*01:03:01, where the DRB4*01:01:01 allele was found at 3.07% depth of coverage. The aforementioned discrepancies were all due to the low representation of the particular alleles, below our internal threshold for detecting minor allele species. For the majority of heterozygous loci, alleles were generally well balanced (Fig. 3A). In the case of DRB4, which has known preferential amplification in the Holotype kit, the minor allele in heterozygous samples typically varied between 10% and 30% of the total reads. In all the cases presented above, whenever there is a missing allele, the genotyping anomaly would have been detected in our system through the use of haplotype analysis, and rectified upon further evaluation and repeat testing. Among the 2,126 expected allele calls, there was only one (1) discrepancy attributed directly to the ONT sequencing and subsequent analysis, occurring within the DRB1 locus. The reference genotype of this sample was DRB1*13:08 and DRB1*15:02:02, where the DRB1*13:08 allele was mistakenly called a novel DRB1*11 allele upon ONT genotyping.
The particular problem persisted after reanalysis through the pipeline with different parameters. Considering that the DRB1*13:08 allele is incomplete in the IMGT/HLA database, and that there were sufficient reads for the locus, we are convinced it is a bioinformatics issue. Additional development efforts regarding the analysis of this data are being made to optimally address this problem, including the addition of a step using exons only to better address incomplete alleles; however, these improvements are not yet validated. The 29 other allele determinations made for incomplete alleles in the IMGT/HLA database, encompassing 17 unique alleles, were all genotyped successfully.

3.2.3. Assessing ambiguity
Two forms of ambiguity are common with HLA NGS methods: 1) the inability to phase the length of the amplicon using short reads, and 2) exclusion of distal exons from the amplicon (e.g., the DPB1 and DRB1 loci). Of the 2,126 allele calls, the reference genotype was ambiguous for 119 alleles; these were limited to the DRB1 and DPB1 loci. Sequencing with the long reads of the ONT platforms resolved all of the ambiguities caused by a lack of phase, which is 41.2% (49/119) of the total ambiguities. The remaining 70 ambiguous allele calls were all due to incomplete characterization of all exonic regions during amplification.

Novel alleles
Samples with known exonic novelty were included to test the ONT-based sequencing, as these alleles present certain challenges and yet are a somewhat common occurrence. Of the 2,126 alleles included, 15 alleles had a known exonic novelty, distributed across all loci except DRB4 and DRB5. After genotyping, 16 alleles were called with exonic novelty, and all 15 cases of known novelty were properly detected. However, the pipeline falsely identified an extra novel allele, as described in section 3.2.2 above. While this method is sensitive to novelty and allows for proper detection in known cases, further refinement of the algorithm will be necessary to minimize the opportunity for false novel alleles.

Objective 2
The first objective of this study was to assess the sequencing and analysis of nanopore data for credible HLA typing using ONT MinION data. Having shown that the ONT-based method is credible in Objective 1, we now aim for a protocol of less than 6 h to obtain HLA typing for deceased donors. For that purpose, two commercially available multiplex PCR protocols, from GenDx and Omixon, were used, coupled with the ONT Flongle for sequencing. To assess the turn-around time of a single sample on the Flongle, nine experiments were run with each of the rapid PCR protocols. The Flongle utilizes the same R9.4 pores as the MinION flow cells. The time required for each multiplexed PCR protocol is reduced relative to the Holotype amplification protocol in Objective 1: OmniType takes about 2 h and NGSgo®-MX6-1 takes 3.2 h when following the manufacturer's protocol. The post-PCR library preparation process for a single sample on the Flongle is reduced to 1.75 h, as compared to 6.5 h for 24 samples on the MinION (Table 1 and Fig. 4). The preparation of a single sample and the elimination of barcoding of multiple samples account for the reduced time. Sequencing and analysis take an additional 1.5 h, whereby sequencing is limited to 1 h. The total post-PCR process thus takes 3.3 h (Table 1). The total time for all steps from DNA extraction through to the final genotype result is 5.6 h (ranging from 5.47 to 5.63 h) when using OmniType (Fig. 4).
The details of the 18 Flongle experiments, 9 for each multiplexed PCR method, are shown in Table 4. The number of active pores varied between the experiments, ranging from 58 to 92 active pores, resulting in 14,000 to 36,000 reads, of which 56.0% were usable (Table 3). The NGSgo®-MX6-1 multiplex targeting 6 loci averaged 2,460 reads per locus, whereas the OmniType multiplex targeting 11 loci averaged 1,200 reads per locus. Balancing the representation of all loci in a multiplexed reaction is challenging and may compromise genotyping, given the limited sequencing time needed to keep the whole protocol under 6 h. As such, locus balance was evaluated. For both OmniType and NGSgo®-MX6-1, the locus balance is reproducible within each assay, and we find certain loci are less represented: in particular, DPB1 in NGSgo®-MX6-1 (Fig. 2E and F) and DQA1 and DRB4 in OmniType (Fig. 2C and D). Based on our experience of sequencing for one hour on the Flongle, even with the lowest number of reads obtained for the DRB4 locus (74-109 reads), we generated accurate genotypes. Considering that the DRB4 locus is hemizygous, we can derive that approximately 150 reads would be necessary for credible genotyping of a heterozygous locus. Genotyping results from the Flongle experiments were highly similar to the MinION experiments. Using the OmniType, only 1 allele out of the 161 allele determinations was discrepant compared to the reference. The locus was expected to genotype homozygous for DQA1*01:02:01 and instead was genotyped as DQA1*01:02:01 + DQA1*01:NEW, with an artificial novelty in exon 3 in one of the two alleles. The initial analysis of this sample had 43% of reads with the incorrect base, leading to the novel allele call. In this situation, only 100 reads were available for analysis of the locus, whereas a typical analysis of a homozygous sample uses a minimum of 400 reads. Upon reanalysis of this sample with more reads, even with as few as 150 reads, the locus genotypes correctly. It is to be noted that the frequency of the incorrect base did not change significantly (41%); however, the introduction of additional reads in the new analysis revealed a strand bias of the incorrect base, which was accurately detected as noise. When the NGSgo®-MX6-1 amplification protocol was utilized, all 108 allele determinations matched the expected reference genotypes. None of the loci amplified with either multiplexed PCR protocol had allelic imbalance below the threshold for detection, allowing for proper genotyping (Fig. 3B and C). Regarding the error rate on the Flongles, we observed an increased error rate when compared to the MinION (Fig. 5). Overall, independent of the type of PCR method utilized, multiplexed or not, the Class I loci exhibit a higher error rate than the Class II loci.

Discussion
Given the inadequacies of current NGS methodologies for HLA typing, we have utilized ONT to resolve individual challenging HLA genotyping scenarios and for complete characterization of the MICA gene [31,32]. In the current study, we demonstrate that ONT sequencing technology has improved sufficiently to be utilized for the credible characterization of all 11 HLA loci, providing distinct advantages. Amplification protocols for targeting HLA genes were assessed independently from library preparation, sequencing, and genotyping analysis. The intent is to introduce a flexible protocol whereby users can select the targeting of the genes, through different strategies of amplification (individual locus vs.
multiplex), approaches for selection and targeting of HLA genomic sequences, or scales of typing (single vs. multiple samples; selected vs. all 11 HLA loci). It was critical for each PCR protocol to be assessed independently from the rest of the process, because anomalies in PCR, particularly with multiplexing, may eventually affect the genotyping. Issues such as preferential amplification, drop-outs, and challenging homopolymer genomic segments within the gene may influence genotyping accuracy. The careful selection of samples covering frequent HLA allele specificities in five populations in the US, along with samples/novel alleles with idiosyncrasies that challenge our analysis pipeline, provides credibility to our post-PCR protocol for HLA genotyping utilizing the ONT MinION platform (R9.4 version). For the first phase of this work, we used a well-tested protocol [33], whereby individual HLA genes are amplified separately, to sequence 120 samples. This credible PCR approach enables an objective assessment of the post-PCR components, including library preparation, sequencing, and analysis of nanopore reads. The data obtained from each of the five sequencing runs were reproducible and comparable (Fig. 1). The minor allele frequency was also comparable among the 11 loci, except DRB4 (Fig. 3). The different metrics assessed (Table 3) demonstrate excellent performance. Overall, our data demonstrate an impressive accuracy and a total lack of phase ambiguities (due to distant polymorphisms). Other ambiguities, due to the absence of some genomic segments (i.e., exon 1 in some amplicons, such as DRB1 or DPB1), persist, and they are unrelated to the performance of the nanopore technology. Amplification protocols or other approaches targeting the totality of HLA genomic sequences may eventually eliminate all ambiguities. The few DQB1 and DRB4 discrepancies observed with the typing of 120 samples (2,126 allele calls) were all related to PCR issues, whereby some alleles were minimally amplified and did not exceed the threshold set by our analysis program, resulting in no call. The threshold level is an internal value set in our analysis pipeline and differentiates heterozygous versus potentially homozygous samples. Conceptually, this threshold is not identical to the minor allele percentage currently used for variant calling by software programs designed for Illumina data. Of note, given our practice of assessing HLA haplotypes before reporting, these discrepant cases would be detected. Haplotype anomalies trigger an investigation to reveal minimally amplified alleles and repeat testing by NGS and another DNA-based method (SSOP) to detect the presence of another allele. The single DRB1 discrepancy observed in this study arose from a software problem in which certain alleles have incomplete sequences in IMGT or very close resemblance to others; this is an active area of development. The detection of a "new" allele, however, would have triggered a further investigation resulting in the detection of the problem. The remaining 15 "new" alleles included in the reference set of samples were detected accurately by this new sequencing approach. High-resolution typing at the fourth-field level (HR-4F) using this technology also appears realistic. A remaining challenge involves the confident characterization of alleles with homopolymers, as it is unclear whether PCR, sequencing or a combination of both is the culprit.
Considering the total elimination of ambiguities due to phasing, and the very few detected discrepancies, this method is impressively accurate and simple, with a short turnaround time and low cost. Using the sequencing flow cells multiple times (3-4 runs) reduces the cost of sequencing significantly. Indexing of 24 samples may reduce the cost further. Of note is the very low cost of the MinION device at $1,000, which is far less expensive than other NGS platforms. The technology continually develops as ONT introduces improvements to address existing limitations. Although we do not present the data generated with the R10.0 and R10.3 flow cells (the most recent versions of flow cells) in this report, the improvements for the sequencing of homopolymers further enhance the potential of this technology for HLA typing. As ONT develops technical improvements to library preparation, the time of sample preparation and, therefore, turnaround time will decrease. Analysis software improvements enabling synchronous sequencing, analysis, and genotyping may expedite reporting. The sequencing platform need not continue sequencing once enough reads are generated and genotyping is secured. Our protocol did not incorporate this dynamic approach, but sequencing was discontinued when we estimated that an adequate number of reads had been obtained. With the incorporation of the described improvements, genotyping of 24 samples, which presently takes approximately 14 h, will be further reduced. The second phase of our work assessed the pairing of two multiplexing PCR protocols with our post-PCR nanopore sequencing and analysis protocol. The objective was to take advantage of the Flongle adapter for the MinION, with a reduced number of pores and, therefore, of sequencing capacity, designed by ONT to be a cost-effective solution for single-use experiments. The Flongle is able to accommodate all 11 HLA amplicons from a single sample and provide enough reads for credible genotyping in a relatively short period (approximately one hour). Potentially, however, the platform could sequence more than one sample if the sequencing time were extended. We used two multiplex PCR protocols to assess whether different protocols perform differently after nanopore sequencing. The GenDx protocol did not amplify all 11 HLA loci, but we understand that GenDx will soon have an 11-locus multiplexed PCR (NGSgo®-MX11-3). To examine as many samples as possible, the samples selected for the assessment of the two protocols were not the same. There was only one discrepancy (1 out of 161 allele calls) with the OmniType, whereby a reference homozygous DQA1 locus was typed as heterozygous, with the second allele being a "new" allele. The initial analysis of this sample was restricted to the data generated in the first hour of sequencing. However, upon reanalysis using additional reads from the same sequencing run, the novel allele was found to be a false positive, and the sample genotyped correctly. In the absence of reference typing, this "new" allele would have been further evaluated after initial typing to confirm the call, and the error would have been identified. Regarding the number of reads needed for credible genotyping on the Flongle flow cells, we found that a heterozygous sample would need more than 150 reads per locus, and this number of reads can be collected within one hour.
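The reanalysis logic described above (adequate read depth, a plausible allele fraction, and balanced strand support before accepting a candidate variant) can be sketched as a simple filter. The thresholds below (150 reads, 25% minor-allele fraction, 80% maximum strand skew) are illustrative assumptions rather than the validated values used in our pipeline.

```python
def credible_variant(fwd_alt: int, rev_alt: int, depth: int,
                     min_depth: int = 150, min_frac: float = 0.25,
                     max_strand_skew: float = 0.80) -> bool:
    """Keep a candidate base change only if the locus has enough reads,
    the variant fraction is consistent with a true heterozygous allele,
    and the supporting reads are not concentrated on one strand
    (strand bias suggests systematic noise rather than a real variant)."""
    alt = fwd_alt + rev_alt
    if depth < min_depth or alt == 0:
        return False
    if alt / depth < min_frac:
        return False                       # below plausible allele balance
    skew = max(fwd_alt, rev_alt) / alt     # 0.5 means perfectly balanced
    return skew <= max_strand_skew

# Mimicking the DQA1 case above: ~43% variant reads, but strand-biased
print(credible_variant(fwd_alt=40, rev_alt=3, depth=100))   # False
print(credible_variant(fwd_alt=35, rev_alt=33, depth=160))  # True
```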
There is room for improvement in the relative balance of the different loci in these two multiplex PCRs. With regard to the NGSgo®-MX6-1 multiplexed PCR, it appears that DPB1 is underrepresented among the six amplified loci. The OmniType protocol appears to have a low representation of the DQA1 and DRB4 loci, while the B locus is overrepresented. Despite the low representation of certain loci by these two protocols, the genotyping performance was uncompromised. The locus imbalance can have an increased effect when sequenced on a Flongle flow cell, which is characterized by fewer active pores than a MinION; coupled with a required short sequencing time, this potentially may not generate enough reads for accurate genotyping. The Omixon company is in the process of further optimizing its multiplex PCR protocol; we look forward to its most recent improvements. We are also interested in exploring emerging techniques to mitigate the effects of locus imbalance with adaptive subsampling during sequencing, which may also further reduce the sequencing time [34]. Another interesting observation is that the error rates for substitutions, deletions and insertions differ significantly between the MinION and the Flongle. It is unclear whether this observation is due to the PCR products or to the ONT sequencing platforms, and further investigation is warranted. This question may be readily resolved by sequencing the same PCR products on the different platforms. Additionally, the class I loci have higher error rates than class II loci, an observation that is reproducible and independent of the sequencing platforms. These differences in error rate had no impact on genotyping, as class I and II loci were all accurately genotyped. As the technical sensitivities of the assay are likely to become relevant, an awareness of the limitations of this new system of HLA typing is warranted. The combination of features of this nanopore technology, paired with multiplex protocols, forms the basis for an accurate methodology with a turnaround time of approximately six hours for a single sample, a time likely to shorten in the future. This development is extremely relevant to the transplant community, given that the generated typing can be HR-2F. HR-2F typing will facilitate better characterization of recipient antibody profiles for DSA. With an increasing number of studies linking epitope mismatch loads to clinical outcomes, HR-2F typing for deceased donors has the potential to permit such analyses to optimize donor selection [11-16]. The technology may also reduce the burden on transplant center labs that routinely type deceased donors at the high-resolution level. This information would become available upon the initial offer, which may translate into savings for the overall health system. The turnaround times of HLA genotyping for a single sample on the Flongle or multiple samples on a MinION may be further reduced as individual steps in the process are optimized. Shortened multiplex PCR protocols paired with post-PCR protocols would improve the time for processing many samples in parallel. The number of commercially available indexes is the primary barrier, which we anticipate can readily be increased. Additionally, automated library preparation currently underway by ONT can expedite this step. Finally, modifications to permit simultaneous data processing and sequencing on the flow cell can further expedite HLA genotyping.
The benefits of this technology for HLA characterization are bound to extend its reach beyond existing clinical and research applications.

Supplementary Material
Refer to Web version on PubMed Central for supplementary material.

Figure and table captions (figures and tables not reproduced here):
- Distribution of reads among the 11 HLA loci for the MinION and Flongle experiments. A) Percent usable reads, MinION; B) Count of usable reads, MinION; C) Percent usable reads, OmniType; D) Count of usable reads, OmniType; E) Percent usable reads, NGSgo®-MX6-1; F) Count of usable reads, NGSgo®-MX6-1. The y-axis represents the percentage of the assigned reads that were used for analysis for each locus out of the total reads used for the sample (A, C and E) or the count of reads used for analysis per locus (B, D and F).
- Timing diagram for Flongle experiments. *Amplification time varies based on method; shown here is 2 h for the OmniType protocol, whereas the NGSgo®-MX6-1 multiplex takes 3 h and 15 min.
- Percent error of sequencing on the two different ONT platforms, MinION and Flongle. The error rate is broken down and colored by locus and includes substitutions, insertions and deletions. For the MinION experiments, the error rate is combined over the five sequencing experiments (n = 120 samples). For the Flongle, the error rate is grouped by PCR method (n = 9 samples each).
- Table 1: Timing information for post-PCR to sequencing. NA: not applicable.
- Table 4: Flongle sequencing metrics.
Three-Dimensional Histological Electrophoresis for High-Throughput Cancer Margin Detection in Multiple Types of Tumor Specimens

Accurate identification of tumor margins during cancer surgeries relies on a rapid detection technique that can perform high-throughput detection of multiple suspected tumor lesions at the same time. Unfortunately, the conventional histopathological analysis of frozen tissue sections, which is considered the gold standard, often demonstrates considerable variability, especially in many regions without adequate access to trained pathologists. Therefore, there is a clinical need for a complementary tool, suitable for multiple tumor types, that can accurately assess tumor margins in every direction within the surgically resected tissue at high throughput. We herein describe a high-throughput three-dimensional (3D) histological electrophoresis device that uses tumor-specific proteins to identify and contour tumor margins intraoperatively. Testing on seven cell-line xenograft models and human cervical cancer models (representing five types of tissues) demonstrated the high-throughput detection utility of this approach. We anticipate that the 3D histological electrophoresis device will improve the accuracy and efficiency of diagnosing a wide range of cancers.

Clinically, surgery remains the most mature method for treating a wide range of solid tumors. Examination of intraoperative tissue sections is crucial for confirming tumor-free margins, along with determining treatment decisions and predicting patient outcomes. 1−7 In particular, confirming tumor-free margins is critical for minimizing the risk of cancer metastasis. However, these examinations are inherently subjective and rely on image interpretation, leading to variability between different observers. 8,9 Incorrect judgment of tumor margins during surgery results in residual tumors, putting patients at a greater risk of complications and metastasis. Moreover, limited access to disease experts can exacerbate this problem, particularly in regions with less developed healthcare systems. 10 Although many studies are devoted to directly distinguishing tumors from coexisting benign tissues in vivo, 11−14 ex vivo histopathological analysis remains the routine protocol for clinical tumor margin detection. Given the major challenges that conventional histological examination faces, artificial intelligence (AI), especially deep learning, has emerged as a promising solution that can achieve diagnostic performance comparable to that of human experts. 15−17 Applying algorithms based on deep learning improves the efficiency and accuracy of diagnosis on digitized pathological sections. However, the integration of these advances into cancer diagnosis is not straightforward, due to two major challenges. First, integrating digitized modules into the routine microscopic examination workflow is difficult and requires significant infrastructure investments. Second, the predictive models used in decision support systems for pathological image analysis rely on manually engineered feature extraction based on expert knowledge. These approaches are intrinsically specific to certain types of cancers and do not perform well across broad clinical applications. Therefore, current predictive models are limited in their ability to be used effectively in routine cancer diagnosis. Cancer surgeries often require resecting the entire tumor along with a healthy margin of surrounding tissue to prevent cancer recurrence.
However, identifying the precise margins of tumors during surgery can be a challenging task, as tumors may infiltrate surrounding tissues and the boundaries may be unclear. Incomplete resection of cancerous tissue can result in tumor regrowth and reduce the survival rates of patients, while excessive resection of healthy tissue can cause functional impairments or damage. 18,19 Surgeons frequently perform multiple conservative resections of the suspicious tumor region, each followed by pathological examination to confirm positive/negative margins, until the tumor has been completely resected. In addition, the suspicious tumor tissues that have been surgically resected are sectioned into multiple blocks for histological examination, which is vital in confirming that the margins are indeed free of tumor cells in every direction (Figure 1). Therefore, it is important to develop an intraoperative rapid detection technique to enable the high-throughput detection of multiple suspected tumor lesions at the same time. Here, we demonstrate a high-throughput diagnosis strategy for multitype-tumor margins using a three-dimensional (3D) histological electrophoresis device (Figure 1). During electrophoresis, the proteins within the tissue sections migrate from the negative electrode to the positive electrode (Figure S1). This device allows for in situ visualization and separation of proteins within cryosliced tissue sections, thus facilitating the identification of the corresponding fraction with the most significant protein-signal differences between benign and malignant tissues. In our parallel work, we have developed an automatic platform called Tumor finder that is integrated with the electrophoresis device. This platform can automatically analyze the fluorescence intensity matrices quantified after electrophoresis and then identify tumors according to protein signatures of human breast cancer and paracancerous proteomes. 20 To increase the speed and accuracy of visualizing proteins and further amplify the protein-signal differences between tissues, we employed a tumor-seeking dye (IR-780) to label the pre-electrophoresis tissue sections. 21 This strategy avoids the time-consuming labeling, and the insufficient selectivity and sensitivity, of the traditional immune methods that tumor-specific protein labeling usually relies on, realizing the intraoperative determination of tumor margins based on protein signals. To fully assess the separation performance of the electrophoresis device (in both the xy- and z-directions), we tested the device with four IR-780@protein standards and successfully reproduced the predesigned signal characteristics after electrophoresis. We accurately distinguished seven cell-line xenograft models (SCC-15, Hela, HepG2, 4T1, MCF-7, C6, and U87) and human cervical cancer tissue from their corresponding normal tissues (the muscle tissues for cell-line xenografts and the paracancerous tissues for human cancer). In a proof-of-concept demonstration of the high-throughput capability, we were able to accurately distinguish multiple small tumor tissues simultaneously. To achieve accurate and efficient tumor margin determination, a robust protein labeling and separation protocol is essential. The chlorine-containing cyanine dyes (e.g., IR-780) can bind to proteins through supramolecular interactions, and the SH group in the protein and the Cl−C bond in the Cl-containing dyes form a covalent bond by nucleophilic substitution reaction. 21−27
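As an aside on the analysis side of the workflow, the following Python sketch mimics the kind of processing attributed above to the Tumor finder platform: choose the fractionated layer with the strongest signal and threshold its fluorescence intensity matrix into a binary tumor mask. The background statistics, the threshold rule (mean + 2 SD), and the synthetic data are illustrative assumptions and do not reproduce the platform's trained decision rules.

```python
import numpy as np

def select_layer(layers: list) -> int:
    """Index of the fractionated gel layer with the strongest total
    fluorescence signal (analogous to picking layer 3 for 4T1 later)."""
    return int(np.argmax([layer.sum() for layer in layers]))

def tumor_mask(intensity: np.ndarray, k: float = 2.0) -> np.ndarray:
    """Binary mask of pixels brighter than mean + k*SD of the matrix;
    a crude stand-in for contouring the high-signal (tumor) region."""
    return intensity > intensity.mean() + k * intensity.std()

rng = np.random.default_rng(0)
layers = [rng.normal(10.0, 2.0, (20, 20)) for _ in range(6)]
layers[2][5:12, 5:12] += 40.0          # synthetic tumor block in one layer
best = select_layer(layers)
mask = tumor_mask(layers[best])
print(best, int(mask.sum()))           # layer index 2, ~49 flagged pixels
```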
The reaction efficiency between the protein and IR-780 varies with the incubation temperature. 28 Therefore, to evaluate the separation performance of the 3D histological electrophoresis device, we employed four protein standards with different molecular masses (transferrin, Tf, 76 kDa; bovine serum albumin, BSA, 66.4 kDa; ovalbumin, OVA, 44.5 kDa; β-lactoglobulin, β-LG, 18.4 kDa) labeled with IR-780 at temperatures ranging from 30 to 90 °C. Subsequently, the IR-780@protein standards incubated at temperatures of 30 to 80 °C were grouped into six ladders according to the reaction temperatures (Figure S2a,b). After verifying the successful separation of the six ladders by 2D SDS−PAGE (Figure 2a), we pipetted the six ladders into the 10 × 10 well array molds and then separated them using the 3D electrophoresis device (Figure S2c). After rapid separation, we obtained one gel layer by z-directional gel fractionation and four gel layers by xy-directional gel fractionation according to the molecular masses of the IR-780@protein standards (Figure 2b and Figure S2d). Signal quantification from both the post-2D-electrophoresis gel and the post-3D-electrophoresis gel layer (fractionated along the z-direction) confirmed that the separation performance of 3D electrophoresis was comparable to that of 2D SDS−PAGE (Figure 2c,d). In addition, plotting the fluorescence signal of the z-directionally fractionated gel layer against the electrophoresis distance (1 cm) produced a strong correlation with the signal quantification from the post-2D-electrophoresis gel (Figure 2c,d), even though the electrophoresis distance of 2D SDS−PAGE was relatively longer (6 cm).

Figure 1. Schematic representation of the workflow of clinical pathological examination in comparison to high-throughput analysis through a 3D histological electrophoresis device. Compared to the clinical methods, the 3D histological electrophoresis device is a time-saving and precise tool that allows for quick decision-making during surgical procedures. Additionally, the device is user-friendly and does not require extensive pathological experience.

After assessing the separation performance of the 3D electrophoresis device equipped with a 10 × 10 well array using four IR-780@protein standards, we next examined its ability to separate artificial protein patterns. We first designed a blended and encoded protein pattern by mixing two IR-780@protein standards in a 20 × 20 well array; that is, "1234" was encoded by IR-780@BSA, while "ABCD" was encoded by IR-780@β-LG (Figure 2e). After electrophoresis, the two labeled proteins (IR-780@BSA and IR-780@β-LG) were completely separated and reproduced the predetermined patterns. Before evaluating the ability of our 3D histological electrophoresis device to predict tumor margins, we first compared the protein-content differentiation between the signals of several IR-780-labeled tumor lysates (SCC-15, Hela, HepG2, 4T1, MCF-7, C6, and U87) and that of an IR-780-labeled normal tissue lysate (mouse muscle) (Figure S3). There were notable differences in IR-780-labeled proteins between the 7 tumor lysates and the muscle tissue lysates (Figure S3b), which could be attributed to the selective labeling by the tumor-seeking dye of the proteins with favorable pocket conformations. 16
Additionally, since various types of tumors all exhibit tumor-overexpressed proteins within the molecular mass range of 60−100 kDa, these regions (corresponding to layers 2−4 in the post-electrophoresis gel) were utilized as the selected regions of interest (ROI) for determining multitype-tumor margins (Figure S3). We next cryocut the tumor tissue into blocks, ranging in size from 1 to 5 mm, and embedded these blocks using the optimal cutting temperature (OCT) compound, ensuring that the blocks were all less than 5 mm apart, to assess tumor-margin prediction (Figure 3a). Here, we selected 4T1 tumors as an example. After 3D electrophoresis, protein signals were collected from the different fractionated layers (layers 2−7) (Figure S4a,b). We selected the layer with the maximum signal (layer 3) to analyze the tumor margins (Figure 3b). The calculated similarity between the contours of blocks in the initial tissue sections and the contours outlined from the protein signals demonstrated the capability of 3D histological electrophoresis for tumor-margin prediction (Figure 3c and Figure S4c,d). Moreover, the changes of the contours in the x-direction/y-direction were even less than 50 μm (Figure 3d), eliminating the concern of a blurring effect on margin distances due to the diffusion of labeled proteins. With this technique, even tiny tumor regions can still be rapidly and stably outlined. We next assessed large numbers of samples from different xenograft models to verify the stability of 3D histological electrophoresis in predicting multitype-tumor margins (Figure 4a). Tumor tissues were embedded with the corresponding muscle tissues (SCC-15, Hela, HepG2, 4T1, MCF-7, C6, and U87 samples). The negative controls (Muscle samples) were constructed by embedding only muscle tissues. After 3D electrophoresis, the tumor-to-muscle (or muscle-to-muscle for Muscle samples) ratios from the selected fractionated layers were calculated and plotted (Figure 4b). Additionally, the cutoff was determined as the mean plus three standard deviations to distinguish the tumor-positive tissues from normal tissues, resulting in a calculated sensitivity and selectivity of 100% each (Figure 4c). The patterns plotted from the greatest protein-signal differences between benign and malignant tissues could reconstruct the tumor patterns and thus precisely predict the tumor contours (Figure 4d and Figure S5). Conversely, the signals plotted from the Muscle samples failed to illustrate the tissue pattern. Comparing the initial tissue contours with the predicted contours of the multitype-tumor models confirmed the potential of the 3D histological electrophoresis device as an intraoperative diagnostic tool for cancer margins that benefits oncology (Figure S6 and Figure 4d). To demonstrate the accuracy of our device in assessing clinical tumor-positive margins with notable patient variability, we collected cervical cancer tumors and the corresponding paracancerous and normal tissues from six patients (Table S1). The tumor-positive margins were assessed and outlined with high tumor-to-paracancerous tissue ratios (Figure 5a−c), which was consistent with the notable protein-signal differences between benign and malignant tissues reflected by 2D SDS−PAGE (Figure S7). Moreover, the high similarity (mean similarity: 0.9227) demonstrated that the device is capable of handling complex clinical diagnostic cases (Figure 5c,d and Figure S8).
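The mean-plus-three-standard-deviations cutoff used above is straightforward to compute. The following Python sketch derives the cutoff from negative-control (muscle-to-muscle) ratios and classifies embedded blocks by their tumor-to-muscle signal ratios; the numeric values are hypothetical stand-ins for the measured ratios in Figure 4b.

```python
import statistics

def cutoff(control_ratios: list) -> float:
    """Decision threshold: mean + 3 SD of the negative-control
    (muscle-to-muscle) signal ratios, as used in the paper."""
    return statistics.mean(control_ratios) + 3 * statistics.stdev(control_ratios)

def classify(ratios: dict, threshold: float) -> dict:
    """Label each embedded tissue block by its tumor-to-muscle ratio."""
    return {name: ("tumor-positive" if r > threshold else "normal")
            for name, r in ratios.items()}

controls = [0.95, 1.02, 1.08, 0.99, 1.01]            # hypothetical controls
blocks = {"4T1": 2.9, "HepG2": 3.4, "muscle": 1.03}  # hypothetical samples
t = cutoff(controls)
print(round(t, 3), classify(blocks, t))
# cutoff ~= 1.152; the two tumor blocks exceed it, the muscle block does not
```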
In the clinic, surgeons often resort to performing several conservative resections of the suspicious tumor region and then pathologically confirming positive or negative margins before continuing with further resections. The surgically resected tumor tissues are sometimes cut into several blocks for histological examination to confirm the tumor-free margins in all directions (Figure 1). We next explored the ability of the 3D histological electrophoresis device to detect tumor margins with high throughput. The high-throughput analysis makes the 3D histological electrophoresis device a time-saving intraoperative tool. To confirm its efficacy for high-throughput detection of tumor margins, we coembedded three tumor tissue blocks and three paracancerous tissue blocks (3 × 2, taken from patient F0175) and analyzed the section using our device. All tumor-positive regions were successfully identified and accurately contoured without boundary ambiguity (Figure 5e−g and Figure S9). These results suggest that the 3D histological electrophoresis device has the potential to become an effective complementary tool for high-throughput identification and assessment of tumor-positive margins in clinical settings. Cancer surgeries often require high-throughput identification of multiple suspected tumor lesions during surgery to ensure complete resection of tumor cells while preserving normal tissues. The 3D histological electrophoresis device offers a promising solution by accurately identifying tumor margins through the separation and detection of tumor-specific proteins within tissue sections. This technology provides an independent and highly accurate detection strategy for the intraoperative discrimination of tumor margins, enabling efficient resection of cancerous tissues. We also validated the ability of our device to identify tumor margins in multitype-tumor models, including human cervical cancer samples. Although there is a need for further investigation into the binding mechanism between the dyes and proteins, as well as into the explicit tumor-specific proteins of different cancer types targeted by the IR-780 dye, we anticipate that the 3D histological electrophoresis device will be successfully integrated into the intraoperative cancer diagnosis workflow, improving the efficiency and consistency of the microscopic examination for the diagnosis of cancer and other diseases.

Data Availability Statement
The data that support the findings of this study are available in the Supporting Information of this article: details of experimental materials and methods; additional data including the electric circuit setup (Figure S1); labeling of protein standards (Figure S2); comparison of IR-780-labeled proteins between seven cell-line-derived tumors and muscle tissues (Figure S3); tumor-margin prediction of cell-line xenograft models (Figures S4−S6); comparison of IR-780-labeled proteins between human cervical tumor tissues and the corresponding paracancerous tissues (Figure S7); tumor-margin prediction of human cervical cancer (Figures S8 and S9); clinical and histopathological information on the tumors in our patient cohort (Table S1) (PDF).
Genome-Wide Likelihood Ratio Tests under Heterogeneity

The commonly used statistical methods in medical research generally assume patients arise from one homogeneous population. However, the existence and importance of significant heterogeneity have been widely documented. It is well known that common and complex human diseases usually have heterogeneous disease etiology, which often involves the interplay of multiple genetic and environmental factors, leading to latent population substructure. The genome-wide association study (GWAS) is a useful tool to uncover genetic associations with a disease of interest, while linkage analysis is a commonly used method to identify statistical association between the inheritance of a human disease and the inheritance of marker loci that are in linkage with disease-causing loci. We propose a likelihood ratio test for genome-wide linkage analysis under genetic heterogeneity using family data. We derive a closed-form formula for the LRT test statistic and provide an explicit asymptotic null distribution. The closed-form asymptotic distribution allows easy determination of the asymptotic p-values. Our extensive simulation studies indicate that the proposed test has proper type I error and good power under genetic heterogeneity. In order to simplify application of the proposed method for non-statisticians, we develop an R package, gLRTH, to implement the proposed LRT for genome-wide linkage analysis as well as Qian and Shao's LRT for GWAS under heterogeneity. The newly developed open-source R package gLRTH is available on CRAN.

Introduction
The commonly used statistical methods in medical research generally assume that patients in the study arise from one homogeneous population. However, the existence and importance of significant heterogeneity are well known and have been documented in the literature for many diseases, including Alzheimer's disease [1] [2], asthma [3] [4], diabetes [5] [6], and multiple cancer types [7] [8] [9] [10]. These common and complex human diseases usually have non-unique disease etiology, which also frequently involves the interplay of multiple genetic and environmental factors, leading to latent population substructure [11] [12] [13]. Therefore, it is common that the patient population of a complex disease consists of various latent subpopulations, each with disease caused by mutations at different loci. Yet each of the unobservable subgroups is relatively homogeneous in etiology or diagnosis. The genome-wide association study (GWAS) and linkage analysis are two classical approaches for studying human genetic disorders. GWAS is an experimental design (typically case-control) used to detect associations between genetic variants and diseases/traits in a study population [14]. The ultimate goal of population-based GWAS is to assist researchers in better understanding the biology of the disease and developing better prevention or treatment strategies for common and complex diseases. However, the standard GWAS analysis methods ignore the widely existing genetic heterogeneity. To account for latent genetic heterogeneity in GWAS, Qian and Shao [15] recently developed a novel likelihood ratio test under genetic heterogeneity (LRT-H). This method has been shown to have a superior power advantage over the commonly used Cochran-Armitage trend test (CATT) in GWAS for complex diseases, where genetic heterogeneity commonly exists [15] [16].
Linkage analysis was the commonly used method to identify statistical association between the inheritance of a human disease and the inheritance of marker loci before the era of GWAS. In the last two decades, linkage-based gene mapping has been marginalized by the population-based genome-wide association study. Association analysis uses common variants and, in general, allows for finer mapping than linkage analysis. However, one major problem for association studies is population stratification, which can lead to an increased number of false-negative as well as false-positive findings if latent heterogeneity is not properly controlled for [17]. Yet this is not a concern for family-based linkage analysis, as children's genotypes only depend on their parents, not on the population genotype frequencies [18] [19]. Recent advancement in next-generation sequencing (NGS) has made it technologically feasible and financially affordable to determine mutation profiles for families. Linkage analysis has again become important for identifying causal variants using family-based deep-sequencing data. Ott et al. [20] and Shao [21] presented reviews of genetic linkage analysis in the age of NGS. For marker alleles that are associated with inheritance of a complex disease, it is not uncommon that the transmission probabilities of a marker allele of interest vary across heterozygous parents, due to locus heterogeneity, etiologic heterogeneity, and many other complexities and/or combinations of them [11]. For example, breast cancer, as a complex disease, is well known to be heterogeneous. Some cases of breast cancer are due to inherited mutations in BRCA1/2 in some families [7], while in other families they are due to mutations in other genes (e.g., PTEN) [22]. These genetic heterogeneities are often not directly observable from linkage data or GWAS data. The currently available genetic linkage methods that account for latent genetic heterogeneity are based on mixture models and generally are computationally expensive for genome-wide or NGS data [13] [23] [24] [25], yet ignoring heterogeneity can cause loss of efficiency in a statistical test, with an increased number of false-negative findings or missed opportunities. In the era of whole-genome sequencing, it is important to have statistical tests that are 1) computationally efficient even for genome-wide data, 2) robust under genetic heterogeneity and 3) statistically powerful. Motivated by Qian and Shao's [15] LRT-H for GWAS, in this paper we propose a powerful and computationally efficient likelihood ratio test under genetic heterogeneity for linkage analysis based on a binomial mixture model, using family data with parental marker genotypes and the genotypes of two affected siblings. We have developed an R package, gLRTH, to implement the newly proposed LRT for genome-wide linkage analysis under genetic heterogeneity as well as Qian and Shao's [15] LRT-H for GWAS. The package is freely available on CRAN. The purpose of this R package is to simplify the application of these two methods for non-specialists. The rest of the paper is organized as follows. In Section 2, we introduce the LRT for linkage analysis under genetic heterogeneity. We derive the closed-form test statistic and provide the explicit asymptotic null distribution that simplifies the computation of p-values. In Section 3, we present numerical simulation studies for type I error and power analysis. In Section 4, we describe the R functions and their arguments. The paper is concluded in Section 5.
Methods
Genetic markers can have multiple alleles. In next-generation sequencing (NGS), GWAS and other genome-wide studies, markers with two alleles are most common. Thus, without much loss of generality, we focus on markers with two alleles. Here we consider a binary trait and focus on detecting linkage under genetic heterogeneity at a single marker locus with two alleles A and a. We consider independent families, each with one marker-homozygous (AA) parent, one marker-heterozygous parent (Aa) and two diseased children. Let $X$ denote the total number of allele a inherited by the two affected children from their heterozygous parent (Aa). Then $X$ has a binomial distribution,

$$X \sim \mathrm{Binomial}(2, \theta_b), \quad (1)$$

where $\theta_b$ is the transmission probability for the marker-heterozygous parent to pass allele a to a child. Under the null hypothesis $H_0$ of no linkage between the marker and any disease-causing loci, $\theta_b = 0.5$ for all families, i.e.,

$$H_0: X \sim \mathrm{Binomial}(2, 0.5). \quad (2)$$

However, transmission heterogeneity, i.e., variation among the $\theta_b$, generally exists in complex diseases. For example, any combination of the complexities listed in Lander and Schork [12] can result in transmission heterogeneity. Thus, under transmission heterogeneity, we assume $X$, the number of allele a, follows a binomial mixture distribution in the population, that is,

$$P(X = x) = \sum_{j=1}^{J} \pi_j \binom{2}{x} \theta_j^x (1-\theta_j)^{2-x}, \quad x = 0, 1, 2, \quad (3)$$

where $\pi_j > 0$, $\sum_{j=1}^{J} \pi_j = 1$, and $J \ge 2$. In particular, for many of the complex diseases with transmission heterogeneity, it is likely that $J$ is quite large. Since it is hard to know the exact number of sub-populations $J$ under transmission heterogeneity, it is desirable to have a test that is applicable without knowing the exact value of $J$ while allowing $J \ge 2$. Suppose $n$ independent families, each with one marker-homozygous (AA) parent, one marker-heterozygous parent (Aa) and two diseased children, are sampled from the population. For each locus, let $n_0$, $n_1$ and $n_2$ denote the numbers of families in which the two diseased children inherited zero, one and two copies of allele a from their heterozygous Aa parent, so that $n = n_0 + n_1 + n_2$; these observed genotype frequencies are summarized in the first row of Table 1. Under $H_0$, the corresponding expected frequencies are $n/4$, $n/2$ and $n/4$, summarized in the second row of Table 1.

Mixture Binomial and Maximum Likelihood
Assuming the setup in the previous subsection and using the notation in Table 1, the maximum likelihood estimator (MLE) of $\theta$ under the binomial likelihood in Equation (1) is

$$\hat{\theta} = \frac{n_1 + 2 n_2}{2n}. \quad (4)$$

Thus, up to a multinomial constant, the binomial likelihood in Equation (1) evaluated at $\hat{\theta}$ is

$$L_D = (1-\hat{\theta})^{2 n_0} \left[ 2 \hat{\theta} (1-\hat{\theta}) \right]^{n_1} \hat{\theta}^{2 n_2}.$$

Under $H_0$, $\theta = \tfrac{1}{2}$, the binomial likelihood value is

$$L_0 = \left( \tfrac{1}{4} \right)^{n_0} \left( \tfrac{1}{2} \right)^{n_1} \left( \tfrac{1}{4} \right)^{n_2} = 2^{\,n_1 - 2n}.$$

The maximum of the mixture likelihood for $X$ in Equation (3) has an explicit formula [15], that is,

$$L_M = \left( \frac{n_0}{n} \right)^{n_0} \left( \frac{n_1}{n} \right)^{n_1} \left( \frac{n_2}{n} \right)^{n_2} \ \text{if } n_1^2 < 4 n_0 n_2, \qquad L_M = L_D \ \text{otherwise},$$

where $\hat{\theta}$ is defined in Equation (4).

The Likelihood Ratio Test
Using the maximized likelihoods $L_0$, $L_M$ and $L_D$, we can write down the explicit formula of the log-LRT statistic $2 \log \lambda_N$ as follows:

$$2 \log \lambda_N = 2 \log \frac{L_M}{L_0} = 2 \log \frac{L_M}{L_D} + 2 \log \frac{L_D}{L_0}.$$

The second term is the classic LRT for testing $\theta = \tfrac{1}{2}$ within the binomial model,

$$T_1 = 2 \log \frac{L_D}{L_0} = 2 \left[ n_0 \log\!\left( 4 (1-\hat{\theta})^2 \right) + n_1 \log\!\left( 4 \hat{\theta} (1-\hat{\theta}) \right) + n_2 \log\!\left( 4 \hat{\theta}^2 \right) \right],$$

which is well known to have a $\chi_1^2$ asymptotic null distribution. The first term measures heterogeneity: when $4 n_0 n_2 > n_1^2$ (i.e., when there is heterozygote deficiency relative to the fitted binomial), it equals the goodness-of-fit LRT of the binomial model against the unrestricted trinomial, and it is zero otherwise:

$$T_2 = 2 \left[ n_0 \log \frac{n_0/n}{(1-\hat{\theta})^2} + n_1 \log \frac{n_1/n}{2 \hat{\theta} (1-\hat{\theta})} + n_2 \log \frac{n_2/n}{\hat{\theta}^2} \right] \cdot 1\{ n_1^2 < 4 n_0 n_2 \}.$$

This first term is asymptotically equivalent to Pearson's classic $\chi^2$ statistic (via comparing observed to expected cell frequencies) for testing Hardy-Weinberg equilibrium, which is known to have a $\chi_1^2$ asymptotic distribution; the unrestricted LRT of $H_0$ against the trinomial has a $\chi_2^2$ asymptotic distribution and decomposes as the sum of the two terms without the indicator. The two terms on the right-hand side are well known to be asymptotically independent. Moreover, under $H_0$ the event $\{ n_1^2 < 4 n_0 n_2 \}$ has limiting probability $\tfrac{1}{2}$ [15]. Thus, we obtain the explicit form of the asymptotic distribution under the null hypothesis. That is, under $H_0$,

$$2 \log \lambda_N \xrightarrow{d} \tfrac{1}{2} \chi_1^2 + \tfrac{1}{2} \chi_2^2,$$

a half-half mixture of the $\chi_1^2$ and $\chi_2^2$ distributions. Importantly, to implement the LRT, there is no need to identify the exact number of mixture components $J$ in Equation (3).
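Because the test statistic and its null distribution are both in closed form, the whole procedure is only a few lines of code. The following Python sketch implements the formulas above, with the asymptotic p-value computed from the half-half mixture of $\chi_1^2$ and $\chi_2^2$; it is an independent illustration of the derivation, not the source code of the gLRTH package.

```python
from math import log
from scipy.stats import chi2

def lrt_linkage_het(n0: int, n1: int, n2: int):
    """Closed-form LRT for linkage under heterogeneity.
    Returns (2*log(lambda_N), asymptotic p-value), where the null
    distribution is the mixture 0.5*chi2(df=1) + 0.5*chi2(df=2).
    Assumes non-degenerate counts (0 < theta-hat < 1)."""
    n = n0 + n1 + n2
    theta = (n1 + 2 * n2) / (2 * n)                 # binomial MLE, Eq. (4)
    # T1: LRT of theta = 1/2 within the binomial model (chi2_1 component)
    t1 = 2 * (n0 * log(4 * (1 - theta) ** 2)
              + n1 * log(4 * theta * (1 - theta))
              + n2 * log(4 * theta ** 2))
    # T2: binomial vs. unrestricted trinomial, nonzero only under
    # heterozygote deficiency (n1^2 < 4*n0*n2)
    t2 = 0.0
    if n1 ** 2 < 4 * n0 * n2:
        t2 = 2 * (n0 * log((n0 / n) / (1 - theta) ** 2)
                  + n1 * log((n1 / n) / (2 * theta * (1 - theta)))
                  + n2 * log((n2 / n) / theta ** 2))
    stat = t1 + t2
    pval = 0.5 * chi2.sf(stat, df=1) + 0.5 * chi2.sf(stat, df=2)
    return stat, pval
```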
Type I Errors

As the LRT 2λ_N has an explicit asymptotic distribution under H0, it is convenient to evaluate p-values and type I error. We conducted simulations to compare the empirical type I error of the LRT to nominal significance levels ranging from 10^-2 to 10^-8. The genotype data were generated from the binomial distribution Bin(2, 0.5). The simulation was replicated 10^11 times. As shown in Table 2, the empirical type I error is slightly smaller than the nominal level, but the two are very close to each other. Therefore, using the asymptotic null distribution for the LRT is valid. The closed-form asymptotic distribution allows easy determination of asymptotic p-values.

Power Comparison

In the simulation studies for power comparison, the sample was generated from a two-component mixture binomial distribution as described in equation (3). The empirical power values for the LRT and the classic χ² test are shown in Table 3.

The R Package Description and Examples

The gLRTH R package is available on CRAN and the installation is standard. The purpose of this package is to implement the two methods discussed previously, i.e., the LRT for genome-wide linkage analysis under genetic heterogeneity and Qian and Shao's LRT-H for GWAS [15]. The gLRTH R package is composed of two main functions: gLRTH_L for linkage analysis under heterogeneity and gLRTH_A for association studies.

The gLRTH_L function calculates the test statistic and asymptotic p-value of the likelihood ratio test for testing linkage. The gLRTH_L function can be called with the syntax gLRTH_L(n0, n1, n2). The required arguments are:

1) n0: Number of affected sibling pairs that both inherited A from their heterozygous parent Aa
2) n1: Number of affected sibling pairs in which one sibling inherited A and the other inherited a from their heterozygous parent Aa
3) n2: Number of affected sibling pairs that both inherited a from their heterozygous parent Aa

To illustrate the gLRTH_L function, suppose we have hypothetical genetic marker M1/M2 information from a sample of n = 1000 independent families, with M2 being the marker of interest. Each family has one marker-homozygous (M1/M1) parent, one marker-heterozygous parent (M1/M2) and two diseased children. Suppose n0 = 100 families have both siblings inheriting M1 from their heterozygous parent (M1/M2), n1 = 650 families have one sibling inheriting M2 and one sibling inheriting M1 from their heterozygous parent (M1/M2), and n2 = 250 families have both siblings inheriting M2 from their heterozygous parent (M1/M2). This example is run below.
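Assuming the package is installed from CRAN, the hypothetical counts above can be passed directly to gLRTH_L; the function name and arguments are exactly those documented in this section.

```r
# install.packages("gLRTH")  # one-time installation from CRAN
library(gLRTH)

# Hypothetical marker M1/M2 counts from the n = 1000 families above:
gLRTH_L(n0 = 100, n1 = 650, n2 = 250)
```

The companion function gLRTH_A for association studies follows the same calling pattern with genotype counts, as described in the package manual.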
Conclusions

The statistical methods commonly used in medical research often assume patients arise from one homogeneous population. However, the impact of heterogeneity is well known and has been documented in much of the existing literature on common and complex diseases. Inadequate attention to the heterogeneity inherent in the complexity of complex human disease could lead to increased numbers of false negatives and missed opportunities in research. To solve this problem, using finite mixture models to account for latent genetic heterogeneity is an intuitive strategy. However, there are well known difficulties associated with likelihood-based inference in the context of finite mixtures, due to issues regarding parameter identifiability and degenerate Fisher information [27]. The mixture likelihood often has many local maxima, making numerical maximization complicated. Moreover, the likelihood irregularities lead to great challenges in deriving the limit distribution of the LRT statistic under loss of identifiability. The strength of the proposed method is that we are able to derive a closed-form formula for the LRT statistic, and its simple closed-form asymptotic distribution, despite the loss of identifiability of parameters in the context of the mixture likelihood. This leads to efficient computation of the test statistic and its asymptotic p-values; thus, it is suitable for high-throughput data and genome-wide studies. The proposed method also works for a single marker or a few markers. There are a few existing methods for linkage analysis that account for latent heterogeneity [13] [23] [24] [25], but they are computationally expensive for NGS and genome-wide studies.

The rapid development of next generation whole-genome sequencing (WGS) has revived family-based linkage analysis for the identification and characterization of functional variants. Our proposed LRT for linkage analysis under genetic heterogeneity will likely be a powerful tool for genetic mapping of complex traits [20]. In the era of precision medicine, using individual variations in genes and environment to develop diagnostics, prognostics, and therapies is the primary approach for disease prevention and treatment. For example, instead of using a "one-size-fits-all" approach, "precision medicine" based on genetic markers can be used to optimize the effectiveness of disease prevention and treatment as well as minimize side effects for persons less likely to respond to a particular therapeutic. Reliable disease-associated SNPs could serve as predictive markers that inform our decisions about numerous aspects of medical care, including specific diseases, the effectiveness of various drugs and adverse reactions to specific drugs. We believe that, with the reduction in the cost of whole-genome sequencing (WGS), genome-wide linkage analysis of family-based WGS data as well as GWAS will facilitate the identification of causal variants and may contribute tremendously to the advancement of precision medicine. Our open source R package gLRTH is meant to be a valuable package to help researchers perform GWAS and genome-wide linkage analysis accounting for the ubiquitous genetic heterogeneity in common and complex human diseases without a lot of programming and computational burden.

Table 1. Genotype frequencies inherited from the heterozygous Aa parents for n affected sibling pairs.
The simulation results indicate that the LRT has a power advantage over Pearson's classic χ² test. Table 3. Empirical power (significance level set at ...).
2019-02-17T14:20:28.775Z
2018-06-11T00:00:00.000
{ "year": 2018, "sha1": "c175a4177d2a65e990bbb8dbb0c1b1a8bc85abd4", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=85177", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "c175a4177d2a65e990bbb8dbb0c1b1a8bc85abd4", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Computer Science" ] }
257145936
pes2o/s2orc
v3-fos-license
Identification and validation of ferroptosis signatures and immune infiltration characteristics associated with intervertebral disc degeneration

Ferroptosis and immune infiltration play an important role in the pathogenesis of intervertebral disc degeneration (IDD). However, there is still a lack of comprehensive analysis of the interaction between ferroptosis-related genes (FRGs) and the immune microenvironment in IDD patients. Therefore, this study aims to explore the correlation between FRG characteristics and immune infiltration in the progression of IDD. The expression profiles (GSE56081 and GSE70362) and the FRGs were downloaded from the Gene Expression Omnibus (GEO) and the FerrDb database, respectively, and differential expression analysis was performed in R. The intersection of the IDD-related differentially expressed genes (DEGs) and the FRGs was taken as the set of differentially expressed FRGs (DE-FRGs), and GO and KEGG enrichment analyses were conducted. Then, we used the least absolute shrinkage and selection operator (LASSO) regression algorithm and the support vector machine (SVM) algorithm to screen feature genes, and drew ROC curves to judge the diagnostic value of the key DE-FRGs. The CIBERSORT algorithm was then used to evaluate the infiltration of immune cells and to analyze the correlation between the key DE-FRGs and immune infiltration. Based on the analysis results, we conducted single-gene GSEA analysis on the key DE-FRGs. RT-PCR and immunohistochemistry further verified the clinical value of the results of the bioinformatic analysis and screening. Seven key DE-FRGs were screened, including the upregulated genes NOX4 and PIR, and the downregulated genes TIMM9, ATF3, ENPP2, FADS2 and TFAP2A. Single-gene GSEA analysis further elucidated the role of the DE-FRGs in ferroptosis-associated IDD. Correlation analysis showed that the seven key DE-FRGs were closely related to immune infiltration in the development of IDD. Finally, RT-PCR and immunohistochemical staining showed statistically significant differences for NOX4, ENPP2, FADS2 and TFAP2A. In this study, we explored the connection between ferroptosis-related characteristics and immune infiltration in IDD, and confirmed that NOX4, ENPP2, FADS2, and TFAP2A may become biomarkers and potential therapeutic targets for IDD.
Introduction

Lower back pain (LBP) is one of the most common complaints and a main cause of disability worldwide (Hartvigsen et al., 2018; Cieza et al., 2021). Intervertebral disc degeneration (IDD) is the most common cause of LBP. It has a significant impact on quality of life and places a heavy load on the global healthcare system (Andersson, 1999). There has been little improvement in the early diagnosis of IDD, which currently still relies mostly on clinical symptoms and imaging findings. Although conservative therapy and surgical therapy are now thought to be effective ways to reduce the suffering of IDD patients, these therapies can only lessen the severity of symptoms and cannot treat the underlying cause of the disease (Ma et al., 2022). As a result, early biomarker screening is important for IDD diagnosis and treatment. This can not only better protect the biological function of the intervertebral discs but also lessen the likelihood that individuals will develop LBP.

The intervertebral disc (IVD) is composed of the nucleus pulposus (NP), the annulus fibrosus (AF) and the cartilage endplate (CEP) (Maitre et al., 2015). Previous studies have reported that the main factors in IDD are genetic factors, aging, malnutrition and a history of overload (Sharifi et al., 2015). More and more studies have shown that the immune system plays an important role in the progression of IDD (Capossela et al., 2014; Ye et al., 2022). The NP is isolated from the host's immune system by the surrounding AF and CEP and is thus an immune-privileged organ. Once this intact structure is destroyed, the NP is exposed to the immune system, further damaging the internal environment of the disc and causing a series of immune reactions. The activation of infiltrating immune cells, including macrophages (Silva et al., 2019) and CD8+ T cells (Ye et al., 2022), in the intervertebral disc microenvironment helps to accelerate the process of IDD. So far, only a few studies on immune infiltration in IDD development have been reported (Li K et al., 2022).

Ferroptosis is a form of regulated cell death driven by the accumulation of iron-dependent lipid peroxides and reactive oxygen species (ROS) (Dixon et al., 2012). Current studies have found that ferroptosis is involved in the IDD process (Lu et al., 2021; Yang et al., 2021; Zhang et al., 2021) and may be an important factor in IDD. At present, the mechanism of ferroptosis in IDD and its relationship with immune infiltration remain unclear. In this study, the key differentially expressed ferroptosis-related genes (DE-FRGs) were selected by bioinformatics analysis and comprehensively analyzed by functional and enrichment analysis and immune infiltration analysis. Following that, machine learning models, expression validation across several data sets, and ROC curve analysis were used to assess the dependability of these crucial DE-FRGs. Finally, RT-PCR and immunohistochemistry were used to further corroborate the results.
By integrating ferroptosis and immunological infiltration, we expect to offer a fresh viewpoint on the diagnosis and therapy of IDD.

Data collection

In the Gene Expression Omnibus (GEO), with intervertebral disc degeneration and nucleus pulposus cells as the retrieval conditions, the species was set as human, and finally two datasets (GSE56081 and GSE70362) were selected for analysis. GSE56081 contains five nucleus pulposus samples from IDD patients and five normal nucleus pulposus samples. GSE70362 includes samples from 10 IDD patients and 14 normal nucleus pulposus samples. The two datasets were normalized using the "preprocessCore" package. In addition, cell classification inference of immune cell composition was performed in both the initial and validation datasets, allowing the immunomodulatory effects of key IDD biomarkers to be compared across different IDD samples. The ferroptosis-related genes (FRGs) were derived from the FerrDb database. The flowchart of the analysis process is summarized in Figure 1.

Screening and identification of differential genes

The "preprocessCore" package in R was used to avoid batch effects, and the normalization of both datasets was validated by boxplots with the "ggplot2" package. Differentially expressed genes (DEGs) were determined using the "limma" package, with |log2(fold change)| > 1 and p < 0.05 as the criteria for statistically significant DEGs. Volcano plots and heat maps of the statistically significant DEGs were generated using the "ggplot2" and "pheatmap" packages. The FRGs were intersected with the DEGs using a Venn diagram, and the expression levels of the DE-FRGs were shown in a heat map.

Gene ontology (GO) and Kyoto encyclopedia of genes and genomes (KEGG) pathway analysis of DEGs

To further reveal the important functions and biological pathways of the co-expressed differential genes, we performed GO function and KEGG pathway enrichment analysis on them. The "clusterProfiler", "org.Hs.eg.db" and "ggplot2" packages in R were used to analyze and visualize the GO and KEGG pathway enrichment of the differential genes. Setting the minimum gene set to five and the maximum gene set to 5000, p values of <0.05 and FDR of <0.25 were considered statistically significant.

FIGURE 1. The flowchart of the analysis process.

Screening and identification of key genes

To further identify key genes among the differentially expressed ferroptosis-related genes, the LASSO regression algorithm and the support vector machine recursive feature elimination (SVM-RFE) algorithm were used to screen feature genes. The gene results were intersected, and the R "venneuler" package was used to draw a Venn diagram to visualize the results.

Validation of diagnostic markers

For the feature genes obtained by the above intersection, their value as diagnostic markers was verified in the two independent datasets GSE56081 and GSE70362, and the diagnostic value was evaluated by drawing receiver operating characteristic (ROC) curves, with p < 0.05 as the threshold. A sketch of this retrieval and screening pipeline is given below.
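The following is a minimal, illustrative R sketch of the retrieval and screening steps just described. The series accession and package names are those given in the text; the IDD-first sample ordering and the frg_list object (FerrDb gene symbols) are assumptions, and probe-to-symbol mapping is omitted for brevity.

```r
library(GEOquery)
library(preprocessCore)
library(limma)

gse   <- getGEO("GSE56081", GSEMatrix = TRUE)[[1]]   # 5 IDD vs 5 normal NP
expr  <- Biobase::exprs(gse)                         # probe-by-sample matrix
group <- factor(c(rep("IDD", 5), rep("control", 5)), # assumed sample order
                levels = c("control", "IDD"))

expr_n <- normalize.quantiles(as.matrix(expr))       # quantile normalization
dimnames(expr_n) <- dimnames(expr)

design <- model.matrix(~ 0 + group)
colnames(design) <- levels(group)
fit  <- lmFit(expr_n, design)
fit  <- contrasts.fit(fit, makeContrasts(IDD - control, levels = design))
fit  <- eBayes(fit)
degs <- topTable(fit, number = Inf, p.value = 0.05, lfc = 1)  # |log2FC| > 1

de_frgs <- intersect(rownames(degs), frg_list)       # frg_list: FerrDb symbols
```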
Assessment of immune cell infiltration

In order to evaluate the infiltration of immune cells in IDD, and the relationship between the gene expression of the screened IDD diagnostic biomarkers and the infiltration of immune cells in the intervertebral disc tissue, the CIBERSORT algorithm (https://cibersort.stanford.edu/) was used to analyze GSE56081 and GSE70362; appropriate samples were screened according to p < 0.05, and the percentage of each type of immune cell in each sample was calculated. The Wilcoxon test was used to compare the differences between the two groups, and the correlation between the infiltration rates of the various types of immune cells was determined with the corrplot package.

Human NP tissue sample collection

This study used NP tissue, collected with the informed consent of the patients' families and approved by the Medical Ethics Committee of Fuyang People's Hospital.

Real time polymerase chain reaction (RT-PCR)

The primer sequence of each gene is shown in Table 2. A total of 10 NP tissues were used, of which five were degenerative and five were normal. Nucleus pulposus cells were digested with trypsin and treated with TriZol reagent (Invitrogen) to extract total RNA after ultrasonic fragmentation. The PrimeScript RT-PCR kit (Takara) was used to synthesize cDNA. The 7500 real-time fluorescent PCR system (Thermo Fisher) was used for RT-PCR, and GAPDH was used as the endogenous control.

Immunohistological analyses in human NP tissues

The expression of the DE-FRGs was evaluated by immunohistochemistry of sections of nucleus pulposus with different levels of degeneration. NP tissue was fixed with 4% paraformaldehyde, then paraffin-embedded and sectioned, and histological analysis was performed. Sections were deparaffinized, rehydrated, and stained with hematoxylin and eosin (HE), Masson, and Alcian blue, respectively. Immunohistochemistry was performed according to the aforementioned methods (Yi et al., 2020). Sections were incubated with primary antibodies (dilution 1:200) against ENPP2 (14243-1-AP, Wuhan proteintech), NOX4 (14347-1-AP, Wuhan proteintech), FADS2 (680261-LG, Wuhan proteintech), and TFAP2A (67076-1-lg, Wuhan proteintech). Next, the slices were incubated with the secondary antibody (pv6000, Beijing Zhongshan). Buffer was used instead of primary antibody as a negative control. Finally, three fields of each slide were randomly selected and observed with a microscope (Olympus, Tokyo, Japan).

Statistical analysis

Statistical data obtained from the GEO database were analyzed with R 3.6.1. GraphPad Prism 8 and SPSS software were used to process other data. p < 0.05 was considered statistically significant.

Identification of IDD-related genes

The data sets GSE56081 and GSE70362 were homogenized using the "preprocessCore" package (Supplementary Figures S1A, B). Next, we used the "limma" package to analyze the differences between the two datasets. A total of 339 differential genes were screened, including 182 upregulated genes and 157 downregulated genes. Figures 2A, C show the volcano plots of the differential gene analysis, while Figures 2B, D show the corresponding heat maps. To explore the biological function of the differential genes related to IDD, we performed GO and KEGG enrichment analysis. The results of the GO analysis showed that these differential genes were mainly related to neutrophil-mediated immunity, neutrophil activation involved in the immune response, mitotic nuclear division and so on (Figure 3C). In addition, the KEGG enrichment pathway analysis highlighted multiple important signaling pathways, such as pathways of neurodegeneration - multiple diseases.
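The enrichment results above come from the clusterProfiler workflow named in the Methods; a minimal sketch, assuming de_genes holds the DEG symbols, is:

```r
library(clusterProfiler)
library(org.Hs.eg.db)

ids <- bitr(de_genes, fromType = "SYMBOL", toType = "ENTREZID",
            OrgDb = org.Hs.eg.db)                    # map symbols to Entrez IDs
ego <- enrichGO(ids$ENTREZID, OrgDb = org.Hs.eg.db, ont = "BP",
                pvalueCutoff = 0.05, qvalueCutoff = 0.25)
ekk <- enrichKEGG(ids$ENTREZID, organism = "hsa", pvalueCutoff = 0.05)
dotplot(ego, showCategory = 10)                      # top enriched BP terms
```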
Identification of DE-FRGs

In order to explore the relationship between ferroptosis and intervertebral disc degeneration, we intersected the previously obtained differential genes with the FRGs and obtained a total of 16 DE-FRGs (Figure 4A). Supplementary Figure S2 shows the volcano plots and heat maps of the ferroptosis-related differential genes in GSE56081 and GSE70362 separately. The GO and KEGG enrichment results showed that the DE-FRGs were closely related to the cellular aldehyde metabolic process, oxidoreductase activity and chemical carcinogenesis - reactive oxygen species (Figures 4B, C). In addition, we used a node graph to show the relationships between the top five KEGG results and the related differential genes (Figure 4D).

Screening key differential genes of IDD by machine learning

Next, we analyzed the differential expression of the 16 DE-FRGs in the two data sets. As shown in Figures 5A, B, seven genes were upregulated and nine genes were downregulated in the two databases. To further identify the key genes, we screened the characteristic genes through machine learning: seven feature genes were obtained through the LASSO algorithm, and 16 feature genes through the SVM-RFE algorithm (Figures 5C, D). The genes obtained by the two methods were intersected to obtain seven characteristic marker genes, and a Venn diagram was drawn (Figure 5E). Finally, a heat map was used to display the logFC values of the seven genes in the differential analysis results of the two data sets, comprising two highly expressed genes and five genes with low expression (Figure 5F).

Correlation analysis and diagnostic value evaluation of key characteristic genes

Next, we used the "circle" package to analyze the correlation of the seven core genes in the two data sets. As shown in Figures 6A, B, NOX4 and PIR were negatively correlated with the other genes, while TFAP2A, ATF3, ENPP2, FADS2 and TIMM9 were positively correlated with each other. We drew ROC curves to evaluate the diagnostic value of the key characteristic genes.

Immunocyte infiltration analysis

In order to explore the connection between IDD and immune infiltration, we used the ssGSEA function of the "GSVA" package to evaluate the degree of immune cell infiltration in the GSE56081 dataset. Figure 7A shows the correlation between immune cells in IDD: macrophages and natural killer cells were positively correlated with many other immune cells, including activated CD4 T cells and CD8 T cells. Figure 7B shows the difference in immune cell infiltration between normal and degenerative intervertebral disc NP cells. Activated CD4 T cells, CD8 T cells, dendritic cells and so on were increased in degenerated NP cells, while eosinophils and type 2 T helper cells were decreased. Next, we used the CIBERSORT algorithm to analyze the relationship between the expression of the seven key DE-FRGs in the GSE56081 database and immune cell infiltration (Figure 7C). The upregulated expression of NOX4 and PIR in IDD was positively correlated with multiple immune cells, while the downregulated expression of TIMM9, ATF3, ENPP2, FADS2 and TFAP2A was negatively correlated with multiple immune cells. These results suggest that high immune infiltration may play an important role in intervertebral disc degeneration.
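A minimal sketch of the infiltration scoring and gene-immune correlation follows, assuming expr_n is the normalized matrix with gene-symbol row names and immune_sets is a named list of immune-cell marker gene sets. The GSVA interface shown is the classic one; newer releases use parameter objects instead.

```r
library(GSVA)

scores <- gsva(as.matrix(expr_n), immune_sets, method = "ssgsea")

# Spearman correlation between one key DE-FRG and every immune-cell score:
cors <- apply(scores, 1, function(s)
  cor(expr_n["NOX4", ], s, method = "spearman"))
sort(cors, decreasing = TRUE)   # cell types most tied to NOX4 expression
```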
Function enrichment analysis of key ferroptosis-related DEGs

In order to further clarify the role of the DE-FRGs in IDD, we conducted gene set enrichment analysis (GSEA). We used the data in GSE70362 to analyze the correlation between the seven key genes and all genes, and heat maps to display the top 50 positively correlated genes for each (Figure 8A). Based on the results of the correlation analysis, we used the "clusterProfiler" package for single-gene GSEA analysis against Reactome (Figure 8B). The enrichment results indicate that ATF3 and PIR may be related to the regulation of mitochondria-related genes; ENPP2, NOX4 and TIMM9 may be related to RNA transcription and the immune system; while FADS2 and TFAP2A may participate in the IDD process by regulating the cell cycle of NP cells.

RT-PCR and immunohistological validation of the bioinformatics results

Next, we used RT-PCR and immunohistochemical staining to confirm whether the key DE-FRGs are consistently altered in IDD patients. The RT-PCR results showed that the differences in the expression levels of the ENPP2, NOX4, FADS2 and TFAP2A genes were statistically significant, while ATF3, PIR and TIMM9 showed no statistically significant difference. The expression of NOX4 was upregulated, whereas the expression of ENPP2, FADS2 and TFAP2A was downregulated (Figure 9). We also confirmed the above results by immunohistochemistry (Figure 10). These results were consistent with the bioinformatics analysis. In addition, MRI images of patients with different levels of degeneration are shown in the Supplementary Material.

Discussion

IDD is a protracted and gradual process that is influenced by oxidative stress, trauma, infection, inflammation, and other factors (Feng et al., 2016). The majority of patients with IDD are diagnosed in the late stages of degenerative change, because there are frequently no obvious clinical symptoms in the early stages, and the current clinical treatment strategy focuses more on managing symptoms than on early diagnosis and prompt intervention. Ferroptosis is a type of regulated cell death that depends on iron (Dixon et al., 2012). According to research, neurodegenerative disorders, cancer, stroke, and ischemia-reperfusion injury are all intimately related to ferroptosis (Reichert et al., 2020; Chen et al., 2021; Yan et al., 2021). Recent studies have shown that ferroptosis is involved in the process of IDD (Lu et al., 2021; Yang et al., 2021; Zhang et al., 2021), and immune infiltration plays an important role in this process. However, at present, there has been no comprehensive study of ferroptosis and immune infiltration in IDD. Therefore, we aimed to identify ferroptosis signature genes in IDD and further investigate their correlation with immune infiltration. Firstly, we analyzed the differences between the two data sets, and a total of 339 differential genes were screened. According to the GO enrichment analysis, which revealed that these differential genes were primarily connected to neutrophil activation, degranulation, and the immune response, the differential genes may have immune-related functions and may be strongly associated with IDD. The majority of white blood cells in the circulation are neutrophils, which are also the main effector cells of innate immunity (Lodge et al., 2020; Rosales, 2020).
Neutrophils have the ability to produce and release chemokines, which can regulate acute injury and repair, cancer, and autoimmune and chronic inflammatory processes (Tecchio and Cassatella, 2016; Liew and Kubes, 2019), and they play an important role in the progression of IDD.

Figure 9. The mRNA relative expression levels of NOX4, ENPP2, FADS2 and TFAP2A in the IDD (n = 5) and normal (n = 5) groups were verified by RT-PCR, with expression calculated using the 2^-ΔΔCt method. The differences in NOX4, ENPP2, FADS2 and TFAP2A were statistically significant (*, p < 0.05; **, p < 0.01), whereas there was no difference in ATF3, PIR and TIMM9 expression.

The ssGSEA function of the R "GSVA" package was used to assess the degree of immune cell infiltration in order to further assess the link between immune infiltration and IDD. The findings demonstrated that, whereas eosinophils and Th2 cells were reduced in the degenerated nucleus pulposus compared with the normal group, the numbers of macrophages, CD8 T cells, dendritic cells, and Th17 cells rose. The importance of macrophage infiltration in the etiology of IDD has been validated by recent research (Yan et al., 2022). According to Geiss et al., dendritic cells play a role in the initiation of the immune response in IDD, while macrophages further strengthen the immune response and control disc resorption (Geiss et al., 2016). Furthermore, Th17 cells, a distinct subset of T helper cells, contribute to the pathophysiology of IDD by secreting interleukin-17 (Shamji et al., 2010). In addition, elevated levels of Th17 cells and interleukin-17 can exacerbate pain in IDD patients (Cheng et al., 2013). Our findings agree with reports in the earlier literature. Meanwhile, neutrophils, T cells, and macrophages can also emit cytokines including TNF-α, IL-1β, and IL-17 that accelerate IDD by encouraging the recruitment of immune cells into intervertebral disc tissues and the degradation of the extracellular matrix (Risbud and Shapiro, 2014). Then, we used the CIBERSORT algorithm to analyze the relationship between the seven DE-FRGs and immune cell infiltration. We discovered that multiple immune cells are positively correlated with NOX4 and PIR, which are highly expressed in IDD, while numerous immune cells are negatively correlated with TIMM9, ATF3, ENPP2, FADS2, and TFAP2A, which are expressed at low levels. According to these data, increased immune infiltration might be a significant factor in disc degeneration. Next, ferroptosis genes in IDD were screened for the first time using a combination of machine learning and immune infiltration, and a total of seven important genes, NOX4, PIR, TFAP2A, ATF3, ENPP2, FADS2 and TIMM9, were found. While TFAP2A, ATF3, ENPP2, FADS2, and TIMM9 are expressed at low levels in IDD, NOX4 and PIR are highly expressed, suggesting that these seven genes may be involved in the ferroptosis process that leads to IDD. ROC curves were used to analyze the relationship between the sensitivity and specificity of these genes, and the results showed that the AUC values of these important genes were high, indicating that they have high diagnostic value. To further confirm the reliability of the bioinformatics analysis results, RT-PCR and immunohistochemistry were used for verification. The mRNA levels of ENPP2, NOX4, FADS2 and TFAP2A differed significantly between the IDD group and the normal group. The results of immunohistochemistry also confirmed the above results.
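For completeness, the 2^-ΔΔCt quantification used for the RT-PCR validation can be written in a few lines; the Ct vectors here are hypothetical placeholders, not the study's measured values.

```r
# Relative expression by the 2^-ddCt method (GAPDH as endogenous control).
rel_expr <- function(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl) {
  dct  <- ct_target - ct_gapdh                  # normalize to reference gene
  dct0 <- mean(ct_target_ctrl - ct_gapdh_ctrl)  # calibrator: normal NP tissue
  2^-(dct - dct0)                               # fold change vs. calibrator
}

# Hypothetical Ct values for one target gene in 5 IDD vs 5 normal samples:
rel_expr(c(24.1, 23.8, 24.5, 23.9, 24.2), c(18.0, 18.2, 18.1, 17.9, 18.0),
         c(26.0, 25.7, 26.2, 25.9, 26.1), c(18.1, 18.0, 18.2, 18.0, 17.9))
```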
NADPH oxidase 4 (NOX4) is a main source of reactive oxygen species (ROS) and is expressed at elevated levels in animal models of IDD (Feng et al., 2017). Research shows that NOX4 promotes ferroptosis of astrocytes through lipid peroxidation induced by mitochondrial metabolic damage in Alzheimer's disease (AD) (Park et al., 2021). Xiao et al. (2022) confirmed that NOX4, as one of the genes associated with ferroptosis, is an effective biomarker for the development of gastric cancer. In addition, NOX4 is also one of the marker genes of immune infiltration, and it is highly correlated with the prediction of gastric cancer immunotherapy response (Xin et al., 2022), the pathogenesis of keloid disorder (Yin et al., 2022), and the clinical outcome of colon cancer. Our results also confirm that NOX4 is positively correlated with the proportions of multiple immune cells, suggesting that NOX4 very likely accelerates the immune process of disc degeneration by promoting immune infiltration, though this needs to be verified by further molecular biology experiments.

ENPP2, a member of the ecto-nucleotide pyrophosphatase/phosphodiesterase (ENPP) family, is known as autotaxin (ATX) (Borza et al., 2022). As a secretory enzyme, ENPP2 produces the lysophosphatidic acid (LPA) signaling molecule, which significantly inhibits ROS production and ferroptosis in cardiomyocytes by regulating the expression of GPX4, ACSL4 and NRF2 (Bai et al., 2018). On the other hand, a recent study showed that ENPP2 could inhibit tumor infiltration by cytotoxic CD8+ T cells and thereby impair tumor regression (Matas-Rico et al., 2021). Therefore, ENPP2 may play a role in immune infiltration. Fatty acid desaturase 2 (FADS2) introduces a double bond between carbons of the fatty acyl chain to generate unsaturated fatty acids. Zhu et al. confirmed through GSEA enrichment analysis that FADS2 is correlated with the immune infiltration of tumor cells through its participation in peroxisome proliferator-activated receptor (PPAR) signaling (Zhu et al., 2021). Furthermore, interference with FADS2 expression could protect immortalized primary hepatocytes and lung cancer cells from erastin-induced ferroptosis. Transcription factor AP-2 alpha (TFAP2A) is a transcription factor that also represses ferroptosis (Huang et al., 2020) and is associated with immune infiltration (Sun Y L et al., 2020).

In this study, machine learning was used to identify the differential genes associated with ferroptosis, and CIBERSORT analysis was used to explore the patterns of immune cell infiltration. The reliability of the differential genes was then evaluated by ROC curves and single-gene GSEA analysis. Finally, the findings were further verified by RT-PCR and immunohistochemistry. However, there are still limitations to this study. First of all, the databases chosen for this study have small sample sizes, which necessitates further research using larger samples. Second, this study is based on a secondary analysis of information that has already been released. There should be some justifiable uncertainty regarding the validity of the data, even though they are largely consistent with results from earlier investigations.

Conclusion

Taken together, this study showed significant differences in the expression levels and immune infiltration of FRGs between IDD patients and healthy controls. Based on this comprehensive bioinformatic analysis, we identified the key genes and immune infiltration characteristics of IDD.
In addition, we confirmed the differential expression of the key DE-FRGs in IDD patients using molecular biology experiments. Together, these findings may extend our understanding of ferroptosis and immune infiltration in patients with IDD.

Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding authors.

Ethics statement
The studies involving human participants were reviewed and approved by the Medical Ethics Committee of Fuyang People's Hospital. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.

Author contributions
HY, XC and ZW designed this project; YZ and WJ recruited research participants and collected nucleus pulposus specimens; KW and HC analyzed the data and drew the images; FZ and DC interpreted the data and wrote the manuscript. All authors reviewed and approved the manuscript.

Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
2023-02-24T16:47:22.966Z
2023-02-22T00:00:00.000
{ "year": 2023, "sha1": "a6541b766316cfae5b38936b0c1b8b24d1cde846", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fgene.2023.1133615/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b4c0be9b870bb0f71fda4cdf8f19f2d14d1bc6ae", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
229723534
pes2o/s2orc
v3-fos-license
Optimal surveillance mitigation of COVID'19 disease outbreak: Fractional order optimal control of compartment model. At present, the whole world is in a phase of war against the deadly COVID'19 pandemic and is working on different interventions in this regard. A variety of strategies, from the ground level to the state level, are taken into account to reduce the transmission rate. For this purpose, epidemiologists are also augmenting their contribution by structuring models that could depict a scheme to diminish the basic reproduction number. These tactics also include the awareness campaigns initiated by the stakeholders through digital and print media. Analyzing the cost and profit effectiveness of these tactics, we design an optimal control dynamical model to study the proficiency of each strategy in reducing the virulence of COVID'19. The aim is to illustrate the memory effect on the dynamics of COVID'19, with and without prevention measures, through fractional calculus. Therefore, the structure of the model is in line with the generalized proportional fractional derivative, to assess the effects at each chronological change. Awareness about using medical masks, social distancing, frequent use of sanitizer or hand cleaning, and supportive care during treatment are the strategies followed worldwide in this fight. Taking these into consideration, the optimal objective function proposed for the surveillance mitigation of COVID'19 is contemplated as the cost function. The effect analysis is supported by graphs and tabulated values. In addition, a sensitivity inspection of the basic reproduction number is also carried out with respect to different values of the fractional index and the cost function. Ultimately, social distancing and supportive care of the infected are found to be significant in decreasing the basic reproduction number more rapidly.

Introduction

A deadly coronavirus, which initially emerged from the city of Wuhan in China, all of a sudden incarcerated people all around the world. This strain of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has affected more than 210 countries and territories. It has brought devastating consequences for public health as well as for social and economic activities. Governments around the world promptly imposed surveillance measures to mitigate the global spread of COVID'19. Among these dramatic measures, the majority are proving to be effective in reducing virus transmission. Imposing curfews and locking down cities, in addition to public awareness campaigns, such as stay-at-home orders, encouraging social distancing, and cleanliness practices that include frequent hand washing and the use of sanitizers, promoted through digital and print media, are the key measures for restraining this virus. Enforcing these policies and engaging communities in these campaigns will undoubtedly incur enormous social and economic costs. But until an effectual vaccine or treatment becomes available, these strategies may play important roles [1-6]. A variety of research has been conducted at an extraordinary pace to analyze COVID'19 from different perspectives [7-9]: epidemiological dynamical systems to control the outbreak of this pandemic through the basic reproduction number have been obtained by various researchers [10-12]; clinical studies determine therapeutic solutions through findings on the biological features of this virus [13]; and perceptions of the impact of governments' prevention strategies on other environmental, social and economic activities have been reported [14].
Decision-making models consider effective ways of managing prevention strategies against COVID'19 transmission [15], and machine learning models predict high-risk cases and efficiently triage patients with high accuracy [16]. Mathematical models turn out to be substantial contrivances for investigating the dynamical control of infectious diseases [17]. Research articles based on optimal control models can be found in the literature to a great extent in this regard [18-24]. In the recent times of the battle against COVID'19, numerous authors have added valuable contributions in this connection. Grigorieva et al. formulated two SEIR-type models to investigate cost-effective quarantine strategies and analyzed the optimal solutions numerically [25]. An analysis of COVID'19 interventions through a transmission model, observing the most effective non-pharmaceutical strategies to lessen the disease burden in Pakistan, is found in the literature [26]. In particular, plenty of endeavors have been carried out in the context of cost-effective strategies to control the transmission of this deadly pandemic [27,28].

In this attempt, we design a mathematical model that covers two major areas: epidemiology together with dynamical optimal control. Firstly, the compartmental model is taken into account with the control variables, and stability analyses are carried out. Secondly, the optimized cost function is subjected to the compartmental model to assess the cost-effectiveness of the prevention strategies. As aforementioned, there exist significant mathematical efforts in this connection, but the novelties that invigorate the proposed assessment can be classified as:

• This study covers not only the susceptible, exposed, quarantined, infected and recovered compartments, but also isolation and protection. Thus, the model is named SEQIMRP, i.e., susceptible-exposed-quarantined-infected-isolated-recovered-protected.
• The non-pharmaceutical control variables: awareness campaigns about using masks, encouraging social distancing, signifying the frequent use of sanitizer and hand washing, and supportive care during treatment.
• Regulation of the basic reproduction number through these campaigns.
• Incorporation of a fractional order derivative for the dynamical scrutiny of the model.

This significant contribution will undoubtedly add great perspicacity to COVID'19 interventions. The proposed SEQIMRP model with the proportional fractional derivative [29] signifies the broader application of the fractional definition. Its expansion elegantly converts the fractional order derivative operator into integer order form, with the fractional index entering the equations linearly. By virtue of this, the dynamics of COVID'19, for instance the basic reproduction number and the equilibrium points, can be interpreted with memory effects. Subsequently, historical values of these parameters or of the compartmental functions will enable the devising of defensive precautionary steps, revealed from past experiences. In addition, the effect of memory on the optimality of awareness strategies is also illustrated through the proportional fractional derivative. The designed system provides a novel contribution to the epidemiological study of epidemic and pandemic diseases. It will instruct healthcare researchers in a new mode of generating results and might be capable of investigating prior information about the risk factors or the transmission rate for preparatory measures.
The remaining paper contains sections formulating the dynamical system, the stability analysis of equilibrium points and the optimality assessment. Furthermore, numerical discussions are also carried out to evidently establish an effective conclusion.

Model formulation for COVID'19 optimal control

Susceptible-exposed-quarantined-infected-isolated-recovered-protected (SEQIMRP)

Mathematical models based on disease dynamics are quite helpful in studying the functional behavior of any virus, which then helps to overcome or lessen its contaminating breakout. The destructive coronavirus converted into a pandemic within a few months and affected billions of people around the globe. Early laboratory research and scientific experiments to construct a drug or vaccine could not triumph. Many epidemiological models have also expressed significant contributions in this connection, determining the basic reproduction number and predicting the dispersion, recovery and mortality rates [11,12,30]. Here, to analyze the dynamical behavior and impact of the COVID'19 pandemic, a system of differential equations is designed with respect to compartmental classes and prevention measures on the basis of the following assumptions (a toy simulation sketch of the resulting proportional-derivative system follows this subsection):

• Regardless of the different risk rates of COVID'19 for different age groups and pre-existing disease carriers, the model assumes homogeneous mixing of individuals in the population.
• Prevention strategies: usage of medical masks (mm), social distancing (sd), frequent cleaning of hands (ch) and supportive care (sc) during treatment are taken into account as the control variables of the optimal system.
• The individuals in any compartment who follow the operational prevention strategies are assumed not to get infected and are described by the protected compartment.
• Susceptible is outlined in the form of logistic growth that encompasses the maximum sustainability to survive on the available resources in an environment.
• Exposed individuals are quarantined; they might recover and use prevention measures later to insulate themselves from the virus.
• Treatment of infected COVID'19 patients is the isolation process, which is explained in the isolation compartment. These individuals, with supportive care from the staff, might recover and move to the recovery compartment.
• Treated individuals, after recovery, do not participate in transmitting the disease, as they use the operational prevention strategies.
• The assessments of the basic reproduction number and the stability analysis are carried out in a fractional calculus environment.

Theorem 1 (Boundedness). Let Π ⊂ R⁷₊ be the set of all feasible solutions of the system (6); then Π is uniformly bounded in R⁷₊.

Proof. Applying the proportional fractional derivative and its expansion, as defined in Eqs. (4)-(5), to Eq. (3), and simplifying by means of system (6), let d*N denote the total proportion of deaths over all compartments. Since 0 < α ≤ 1 and 0 < S(t)/kS ≤ 1, the resulting relation reduces to a linear differential inequality for the total population; on integrating and letting t → ∞, we obtain the stated boundedness.

Theorem 2 (Existence and Uniqueness). Assume the right-hand side of system (6) is a real-valued function Λ(F(t)) whose partial derivatives with respect to F(t) are continuous. Then, for the initial conditions (2), there exists a unique, non-negative and bounded solution of the system (6).

Proof. The boundedness of system (6) follows from Theorem 1. The system (6) can be expressed in the vector form Λ(F(t)) and expanded accordingly, so that existence and uniqueness follow from the continuity of the partial derivatives. Next, we prove the non-negativity of the solutions using the positivity of the initial conditions (2), i.e., O_i > 0 for i = 1, 2, ..., 7. Considering the first equation of system (6), it can be deduced, on manipulation, that S(t) ≥ 0, which proves the non-negativity of S(t). Analogously, all the remaining equations of system (6) can be shown to have non-negative solutions under the assumption of positive initial conditions.

Optimal control problem

Furthermore, the dynamical model (6) of COVID'19 would be incomplete if the optimal control of infection and the intervention cost were not incorporated. Therefore, we formulate the optimal control problem by means of a cost function of quadratic type, where w_i, for i = 1, 2, ..., 7, are the weights of the human population cost, and φ_K, for K = 1, 2, 3, 4, are the weights of the undertaken intervention cost for COVID'19. At this juncture, the intervention cost comes from government campaigns on using masks, social distancing and frequent hand washing. In addition, the hospitalization cost for drugs, ventilators and trained medical staff for the supportive care of COVID'19-infected individuals also becomes higher with the increase in the number of patients. Therefore, a greater cost implemented for the campaigns enforcing the use of masks, social distancing and frequent hand washing will reduce COVID'19 transmission, which in turn reduces the supportive care cost. Thus, we assume φ_K > 0 for K = 1, 2, 3, 4. Analogously, since the objective in the present scenario is to control the spread of COVID'19, which ultimately means minimizing the infected individuals, we consider w₄ > 0 and the remaining weights equal to zero.
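Before moving to R₀, it may help to see how the proportional-derivative form is simulated in practice. The sketch below reduces the model to a toy three-compartment system in R, using the linear expansion D^α X = (1 − α) X + α X′ described in the introduction; the compartment structure and all parameter values are illustrative assumptions, not the full SEQIMRP system (6).

```r
library(deSolve)

# Toy S-E-I system under the proportional fractional derivative:
# D^a X = g(X)  is rewritten as  X' = (g(X) - (1 - a) * X) / a,  0 < a <= 1.
rhs <- function(t, x, p) {
  with(as.list(c(x, p)), {
    gS <- r * S * (1 - S / k) - beta * S * I   # logistic susceptible growth
    gE <- beta * S * I - sigma * E             # exposure and progression
    gI <- sigma * E - gamma * I                # infection and removal
    g  <- c(gS, gE, gI)
    list((g - (1 - a) * c(S, E, I)) / a)       # proportional-derivative form
  })
}

p   <- c(r = 0.2, k = 1, beta = 0.6, sigma = 0.25, gamma = 0.1, a = 0.95)
x0  <- c(S = 0.9, E = 0.05, I = 0.05)
out <- ode(y = x0, times = seq(0, 30, by = 0.1), func = rhs, parms = p)
```

Lowering a below 1 rescales both the state feedback and the flow rates, which is how the memory index reshapes the compartment trajectories reported later at α = 0.8, 0.95 and 1.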
(16) can be further expanded into: such that (17) can be rewritten as, Next, we prove the non-negativity of the solutions by using the positivity of initial conditions (2) i.e.,O i > 0 for i = 1,2,…,7. Considering first equation of system (6), it can be deduced to: On manipulating, we get S(t)⩾0 Thus, proved the non-negativity of S(t). Analogously, all the remaining equations of system (6) can be proved to have non-negative solutions with the assumption of positive initial conditions. Optimal control problem Furthermore, the dynamical model (6) of COVID'19 would be incomplete if the assumption of optimal control of infection and intervention cost is not incorporated. Therefore, we formulate optimal control problem by means of the cost function type of quadratic function as: respectively. Moreover, here w i , for i = 1, 2, ..., 7, are the weights of human population cost, whereas φ K , for K = 1,2,3,4, are the weights of undertaken intervention cost for COVID '19. At this juncture, intervention cost comes from government campaigns of using mask, social distancing and frequently washing hand. In addition, the hospitalization cost for drugs, ventilators and trained medical staffs for supportive care of the COVID'19 infected individuals also become higher with the increase in number of patients. Therefore, if greater cost is implemented of campaigns of enforcing the people on usage of mask, social distancing and frequently washing hand will reduce the COVID'19 transmission, which on the other hand it reduces the supportive care cost. Thus, we assume φ K > 0 , for K = 1, 2, 3, 4. Analogously, the objective of the present scenario is to control the spread out of COVID'19, which ultimately leads to minimize the infected individuals, therefore we consider w 4 > 0and remaining equal to zero. Basic reproduction number R 0 In this sequel, we utilize the next generation method, to structure the R 0 for the governing model (6). For this purpose, a sub-model of the SEQIMRP is considered that includes the four infected classes i.e. exposed, quarantine, infected and isolated individuals. Therefore, the equation: will have X → as a vector of theE(t),Q(t), I(t), and M(t), which is outlined as, , can be further split down as, where, From Eq. (21), we can extract and manipulate, The spectral radius Λ(K) is the required basic reproduction number, so after some simplification we get Consequently, the generated R 0 contains the fractional derivative index α as well, which advantageously enables to inspect R 0 . The health care researchers will be capable to investigate the trajectory of basic reproduction number for the COVID'19 at small change. Dynamical anatomization In this section, on the strength of proportional fractional derivative, dynamical analysis of equilibrium points and optimality conditions are discussed in fractional environment as follows: Characterization of optimal control It is evidently clear from Theorem 1 that there exist a unique solution of system (6). Now to optimize the solution, we define the Lagrangian by In addition, describing the Hamiltonian Has the inner product of the right hand side of the state system (6) and the adjoint variables Ω = (ω 1 , ω 2 , ω 3 , ω 4 , ω 5 , ω 6 , ω 7 ), we get where Ω is to be determined. Now, utilizing the Pontryagin's maximum principle for the Hamiltonian H, following theorem is obtained to determine the adjoint variables. 
Numerical simulation and deliberation

In this segment, numerical investigations of the aforementioned system are carried out for the numerical values of the parameters shown in Table 1. The graphical sensitivity analysis of R₀ with respect to the strategies is also added to the discussion. Moreover, the simulations of all compartmental classes, for the cases with and without prevention campaigns, are plotted and tabulated using Mathematica 11.0.

Sensitivity analysis of parameters with optimality

The sensitivity analysis of R₀ by means of the control variables is described in Table 2 and Figs. 2-7 for the parameters mentioned in Table 1 and at different values of α. These control variables define the strategic campaigns utilized to prevent the deadly transmission of COVID'19. It can be clearly seen from Figs. 2-7 that at each value of α, the combined influence of the campaigns minimizes R₀. The colorized output in these figures, ranging from light to dark, indicates the gradual decrease in R₀ from the largest to the lowest value. The value of R₀ obtained without any awareness campaign is greater than 1, and it gradually reduces to less than 1 on increasing the awareness campaigns, as can be seen from Table 2. Table 2 explains the sensitivity of R₀ for several different values of the intervention strategies, elucidating that for mm = 0.3, sd = 0.7, ch = 0.5, sc = 0.9, the value of R₀ decreases more rapidly.

Table 2. Sensitivity inspection of R₀ and optimal surveillance J based on prevention strategies for the given weights.

Equilibrium states and optimality

Moreover, on solving the SEQIMRP system, different plots are attained that define the stability of Π₁ and Π₂. In the current scenario, evaluations of these equilibrium points are produced on the basis of the prevention campaigns. Commencing from Table 3, the tabulated values describe the dynamics without prevention measures, including recovery and mortality. Since no prevention measures are taken at the initial spread stage of COVID'19, the curve of the protected population yields a constant straight line at zero. This further elaborates the circumstances where everyone is at high risk of being infected, so that the pandemic situation becomes worse. Contrarily, Table 4 depicts the values generated with the prevention measures mm, sd, ch and sc applied to some extent.
These maneuvers include the strict imposition of using medical mask in public places, social distancing of 6 feet, frequent use of hand wash and sanitizers, training medical staffs and officers for extraordinary supportive care of COVID'19 patients in hospitals. The optimal control function was designed with the epidemic dynamical system SEQIMRP to mutually Fig. 8. Dynamics of S(t) ∈ Π 2 of SEQIMRP, for parameters described in Table 1 and mm = 0, sd = 0, ch = 0 and sc = 0, at α = 0.8, 0.95, 1and t ∈ [0, 30]. The system was formulated with the proportional fractional derivative, in order to analyze the basic reproduction number at each chronological change. Ultimately, through the aforementioned analytical and numerical illustrations, the following propitious facts can be extracted: • The strategies of using medical mask, social distancing, frequently sanitizing hands and supportive care of COVID'19 for speedy Table 1 and mm = 0, sd = 0, ch = 0 and sc = 0, at α = 0.8, 0.95, Table 1 and mm = 0, sd = 0, ch = 0 and sc = 0, at α = 0.8, 0.95, 1and t ∈ [0, 30]. Fig. 16. Dynamics of E(t) ∈ Π 1 of SEQIMRP, for parameters described in Table 1 recovery are significant attempts to win this battle against this pandemic. • The awareness and necessitating of these lines of attacks may change the state of pandemic into a stable disease-free environment. • These can greatly lesser the basic reproduction number from R 0 > 1 to R 0 < 1. • The optimal surveillance mitigation with respect to cost effectiveness, social distancing and supportive care may reduce the diffusion of COVID'19 more hastily. • Illustrations at different fractional derivative index show systematic reading in the susceptible, expose, quarantined, infected, isolated, recovered and protected population. • Without precautions, as the fractional derivative approaches the whole change, the readings represent step by step increase in susceptible, expose, quarantined, infected, isolated and recovered population. • Following precautions, as the fractional derivative approaches the whole change, the number individuals in protection increases gradually, while expose, quarantined, infected, isolated and recovered remain zero. • Competency in prior recognition of the track of COVID'19 transmission risk through the proportional fractional derivative model. • Proficiently trace the basic reproduction number and take preparatory measures before becoming a deadly pandemic. In the current phase, understanding the epidemiological characteristics is a serious bone of contention question for researchers and health professionals. The successful investigations may significantly help out the stakeholders in making effective standard operational procedures of interventions. The designed model SEQIMRP will categorically aid a great contribution in dynamically scrutinizing and exhibiting the optimal strategy to control the deadly escalation of COVID'19. Fig. 18. Dynamics of I(t) ∈ Π 1 of SEQIMRP, for parameters described in Table 1
Rapid and efficient cancer cell killing mediated by high-affinity death receptor homotrimerizing TRAIL variants

The tumour necrosis factor family member TNF-related apoptosis-inducing ligand (TRAIL) selectively induces apoptosis in a variety of cancer cells through the activation of death receptors 4 (DR4) and 5 (DR5) and is considered a promising anticancer therapeutic agent. As apoptosis seems to occur primarily via only one of the two death receptors in many cancer cells, the introduction of DR selectivity is thought to create more potent TRAIL agonists with superior therapeutic properties. By use of a computer-aided structure-based design followed by rational combination of mutations, we obtained variants that signal exclusively via DR4. Besides an enhanced selectivity, these TRAIL-DR4 agonists show superior affinity to DR4, and a high apoptosis-inducing activity against several TRAIL-sensitive and -resistant cancer cell lines in vitro. Intriguingly, combined treatment of the DR4-selective variant and a DR5-selective TRAIL variant in cancer cell lines signalling via both death receptors leads to a significant increase in activity when compared with wild-type rhTRAIL or each single rhTRAIL variant. Our results suggest that TRAIL-induced apoptosis via high-affinity and rapid selective homotrimerization of each DR represents an important step towards an efficient cancer treatment.

Tumour necrosis factor (TNF)-related apoptosis-inducing ligand (TRAIL) is a promiscuous member of the TNF superfamily capable of binding to five different receptors. TRAIL induces apoptosis via the death receptors (DRs) 4 and 5, 1,2 followed by recruitment of the Fas-associated death domain. 3-5 Formation of the death-inducing signalling complex (DISC) is completed by binding of procaspase-8 and -10, leading to their cleavage and activation, which in turn activate downstream caspases (e.g. caspase-3), resulting in apoptosis. 6-8 TRAIL also binds to three decoy receptors, DcR1, DcR2 and OPG. 9 Decoy receptors lack a functional death domain; consequently, they do not induce apoptosis and can prevent TRAIL binding to death receptors. 2,9-11
TRAIL is generating considerable interest as a possible anticancer therapeutic agent because of its selective activation of apoptosis in cancer cells. 12,13 The formation of heterotrimeric non-apoptotic TRAIL-receptor complexes 7,14,15 and the receptor-binding promiscuity of TRAIL endorsed the concept that DR selectivity may allow the creation of more potent TRAIL apoptosis inducers. 16-19 DR5 was described as contributing more than DR4 to the overall apoptotic activity of TRAIL in apoptosis signalling of cancer cells. 16,20,21 Recently, DR4 has gained increased importance, with the realization that certain cancer cells depend mainly on this receptor for apoptosis via TRAIL. 17,18,22 Whereas potent DR5-selective variants were created previously, 16,19 no enhancement of apoptosis by selective activation of DR4 has been accomplished using TRAIL variants. A previous generation of DR4-selective variants designed by us 18 or by others 17 was either equally active when compared with rhTRAIL WT or largely inactive. 16,17 Here we report the design of highly active, DR4-specific TRAIL variants by using computational protein design and a rational combination strategy. This resulted in the first DR4-selective variants that not only have an enhanced selectivity for DR4, but also a significantly increased apoptosis-inducing activity in many cancer cells in vitro. Both receptor selectivity and increased affinity for DR4 appear to contribute to the more efficient apoptosis induction by these TRAIL variants. These DR4-selective variants and the previously described DR5-selective variants 19 allowed us to establish the contribution of the DRs in apoptosis signalling of different cancer cells. Cell lines signalling via both DRs showed a large increase in efficacy upon combined treatment with the selective TRAIL variants when compared with rhTRAIL WT. Our results establish these novel DR4-specific variants, alone or in combination with a DR5-specific variant, as a powerful strategy for targeting DR4/DR5-responsive cancer cells.

DR4-selectivity screening. RhTRAIL variants were expressed and purified as described before, 19,23 and variants were characterized by surface plasmon resonance (SPR). Apoptosis-induction assays were performed in cell lines mediating apoptosis primarily via DR4 or DR5. Mutations at positions 149, 159, 199, 201 and 215 showed increased selectivity to DR4 as predicted by FoldX (Supplementary Figure S1), whereas mutations at positions 193, 204, 212 and 251 did not show the expected changes. Interestingly, positions 199, 201 and 215 have been found before in DR4 variants selected by phage display. 16 Variants showing an increase in the binding ratio of DR4 over DR5 (DR4/DR5) of more than fourfold as measured by pre-steady-state SPR (Supplementary Table 1), and/or presenting a more than threefold increased activity on DR4-mediated cell lines while showing inactivity in DR5-mediated cell lines, were chosen for further studies. R149I, S159R and S215D were selected.

Rational combination of single variants. Several amino-acid substitutions resulted in high DR4/DR5 binding ratios relative to rhTRAIL WT, thus favouring DR4 binding. The characteristics of these single mutants were used for the rational design of nine combination variants, containing between two and six mutations each.
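To make the SPR-based screening criterion concrete, the sketch below simulates a 1:1 Langmuir interaction of the kind used for global fitting of the sensorgrams (see Materials and Methods). The rate constants are hypothetical placeholders, not measured values for rhTRAIL WT or any variant.

```python
import numpy as np

# Sketch of the 1:1 Langmuir interaction model used for global fitting of
# the SPR data (see Materials and Methods): dR/dt = ka*C*(Rmax - R) - kd*R,
# with KD = kd / ka. The rate constants below are hypothetical placeholders,
# not measured values for rhTRAIL WT or any variant.

def sensorgram(ka, kd, conc, rmax=100.0, t_assoc=120.0, t_total=300.0, h=0.1):
    """Simulate association (analyte at `conc`) followed by dissociation."""
    n = int(t_total / h)
    r = np.zeros(n)
    for i in range(1, n):
        c = conc if i * h <= t_assoc else 0.0  # running buffer after t_assoc
        r[i] = r[i - 1] + h * (ka * c * (rmax - r[i - 1]) - kd * r[i - 1])
    return r

ka, kd = 1e5, 1e-3            # hypothetical, in 1/(M*s) and 1/s
print("KD =", kd / ka, "M")   # 1e-08 M, i.e. 10 nM
response = sensorgram(ka, kd, conc=50e-9)
print("end of association phase:", round(response[int(120 / 0.1) - 1], 2))
```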
A single mutant (G131R) constructed previously, 23 showing increased affinity to both DRs, and a double mutant (N199R/K201H), showing a decreased affinity to DR5, had been combined before (to be published elsewhere), and were therefore incorporated in the new combination variants. Mutation S159R, showing a high affinity-ratio change (Supplementary Table 1), was incorporated in all the combination variants. Mutants displaying similar or higher binding affinities to DR4, together with a lowered affinity for DR5, were characterized. Mutations that resulted in a more than 50-fold increased DR4/DR5 binding ratio compared with rhTRAIL WT, as measured by SPR, were selected. Two combination variants showing the highest affinity increase to DR4 and affinity decrease for DR5 were selected for further studies. These variants were named 4C7 (G131R/R149I/S159R/N199R/K201H/S215D) and 4C9 (G131R/R149I/S159R/S215D).

Death receptor selectivity of 4C7 and 4C9. Binding of rhTRAIL WT and variants 4C7 and 4C9 to the DRs using SPR showed an increased affinity to DR4-Ig, with a 9-10-fold lower dissociation constant (K_D) when compared with rhTRAIL WT (Table 1). In contrast, 4C7 and 4C9 showed a much lower affinity to DR5-Ig than rhTRAIL WT, with a 40- and 230-fold increase in the dissociation constant for DR5-Ig for 4C9 and 4C7, respectively (Table 1). DR4-Ig competed 5-7-fold more effectively for the variants 4C7 and 4C9 when compared with rhTRAIL WT (Figure 1a). Inversely, DR5-Ig was less capable of competing for 4C7 and 4C9 binding to immobilized DR4-Ig (Figure 1b). A 29-fold increased binding of 4C9 to immobilized DR4-Ig was observed compared with rhTRAIL WT when competing with DR5-Ig, indicating a preference to bind to DR4. 4C7 was even less sensitive to competition by DR5-Ig, with 80% binding efficiency for DR4 even when competing with 5000 ng/ml of DR5-Ig. These results indicate that the variants 4C7 and 4C9 bind highly preferentially to DR4. Decoy receptors DcR1-Ig and DcR2-Ig were found to compete equally for binding to rhTRAIL WT, 4C7 and 4C9 (Supplementary Figure S2). Interestingly, another combination variant (4C6: R149I/S159R/S215D) showed not only lowered binding to DR5, but also decreased binding affinity to both decoy receptors in solution (Supplementary Figure S2).

Biological activity of DR4-selective variants. Comparison between single variants and combination variants in BJAB cell lines showed that 4C7 displays a greatly improved capability to induce cell death when compared with rhTRAIL WT, with the single variants and with a combination of three mutations (Supplementary Figure S3). To confirm the selectivity of 4C7 and 4C9 in vitro, we tested their activity in BJAB and BJAB DR5-deficient cell lines; 24 the comparison showed that 4C7 and 4C9 do not require DR5 to deliver similar levels of apoptosis in both cell lines. In addition, 4C7 and 4C9 showed a significant increase in cell-death induction in BJAB cells when compared with rhTRAIL WT at all concentrations measured (Figure 2a and b). Cell assays in DR5-mediating A2780 and Jurkat cells showed no apoptosis-inducing activity of the variants (<10%) at almost all concentrations measured (Figure 2c and d). These results support that 4C7 and 4C9 induce apoptosis essentially via DR4. Furthermore, enhanced activity of TRAIL variants 4C7 and 4C9 was shown on a number of colon adenocarcinoma cells sensitive to TRAIL-induced apoptosis when compared with rhTRAIL WT, including Colo205, SW948, HCT-15 and DLD-1 (Figure 3a-d).
A 3-5-fold reduction in the cell viability of Colo205 by variants 4C7 and 4C9 when compared with rhTRAIL WT was observed (Figure 3a). The SW948 cell line showed a strong decrease in cell viability upon treatment with the variants (Figure 3b). Moreover, an impressive decrease in cell viability was obtained in the HCT-15 and DLD-1 cell lines by 4C7 and 4C9 when compared with rhTRAIL WT (up to 88% cell death) (Figure 3c and d). Our results indicate that these cell lines can be triggered very efficiently to undergo apoptosis by these novel DR4-specific variants. We then tested whether selective activation of DR4 would have an impact on cancer cell lines less sensitive to TRAIL-induced apoptosis. Remarkably, our variants 4C7 and 4C9 showed a significant increase in cell killing in the breast cancer MCF-7 and pancreatic cancer PANC-1 cell lines (Figure 3e and f). Both cell lines express DR4 and DR5 on their surface (Supplementary Figure S4). Furthermore, the variants were tested for activity on normal cells, namely fibroblasts. No significant activity of rhTRAIL WT or variants 4C7 and 4C9 could be observed, suggesting that the increased apoptotic activity on cancer cells had no effect on normal cells (Supplementary Figure S5).

Real-time analysis of caspase activation. Variant 4C7 was studied further by analysing caspase activation using FRET. Activation of the initiator caspase-8/10 was followed by using a probe based on CFP and YFP (Venus), 25 interconnected by a linker containing the preferred cleavage motif for caspase-8 (IETD). 26,27 Owing to its high signal/noise ratio, this probe allows the efficient detection of low caspase activities. 25 The 4C7-induced profile of intracellular IETDase activity shows increased cleavage of the CFP-IETD-Venus probe in ovarian carcinoma OVCAR-3 cells at much earlier time points when compared with rhTRAIL WT (Figure 4). Cleavage by activated initiator caspases could be observed within 15-20 min after 4C7 addition. More significant differences between rhTRAIL WT and 4C7 occur after 50 min of incubation with 50 and 100 ng/ml (Figure 4a and b). Effector caspases have also been shown to cleave IETD recognition sites, 27 and consequently the more rapid increase observed after cleavage by activated caspase-8 is most likely owing to the activation of effector caspases during apoptosis, or to elevated caspase-8 activity resulting from feedback via caspase-6. Analysis of downstream activation of effector caspase-3/7, based on DEVDase activity using a probe containing CFP and YFP, indicates that the rate of cleavage remains low at initial time points and rises rapidly after 75 min over a 30-40 min period for 4C7 (Figure 4c and d). The rate of DEVDase cleavage for rhTRAIL WT was substantially lower. These results indicate a much faster and more robust initiator and effector caspase activation by variant 4C7 compared with rhTRAIL WT in living cells.

Combination of death receptor-selective variants. In order to establish receptor activity in different cell lines, we tested the new DR4-selective variants or our DR5-selective mutant (D269H/E195R). 19 Most cell lines tested undergo apoptosis primarily via one of the DRs; however, some cells did show sensitivity via both DRs. We reasoned that selective activation of both DRs by TRAIL variants may lead to an additive effect in cell-death induction, and that targeting both DRs separately may be of therapeutic relevance in cancer cell lines.
OVCAR-3 cells express both DRs, with almost no detectable levels of decoy receptors (Supplementary Figure S4). Both 4C7 (ED50 = 6.61 ± 1.3 ng/ml) and 4C9 (ED50 = 7.86 ± 1.53 ng/ml) caused a substantial decrease in viability when compared with rhTRAIL WT (ED50 = 46.8 ± 1.1 ng/ml), with the DR4-selective variants reaching cell-death levels of 88 and 83%, whereas rhTRAIL WT was only capable of 50% cell-death induction at the highest concentration measured (250 ng/ml) (Figure 5a). Similarly, activation of DR5 by D269H/E195R in this cell line resulted in increased apoptosis induction, with a nearly sevenfold increase in cell death compared with rhTRAIL WT. Interestingly, the combination of equimolar concentrations of the DR4- and DR5-selective TRAIL variants resulted in an even lower cell survival at all concentrations measured, when compared with single treatment with either rhTRAIL WT or the selective variants alone. At the highest concentration used, the combination of selective variants was capable of killing almost 95% of cells upon 24 h incubation. In the Colo205 cell line, the cooperation between DR4- and DR5-selective induction was less pronounced, even though higher levels of cell death could be shown at high concentrations (up to 94%). At these concentrations, a difference with D269H/E195R could also be observed (Figure 5b). In the colon carcinoma CL-34 cell line, both 4C7 and the DR5-specific variant showed higher activities than rhTRAIL WT at almost all concentrations, and when combined they induced higher killing levels than the single treatments.

Clonogenic survival assays. To determine the effect of these variants on the long-term survival of cancer cells, we performed clonogenic assays (Figure 6). Both DR4- and DR5-selective variants significantly reduced clonogenic growth of OVCAR-3 cells when compared with rhTRAIL WT (Figure 6a and b). The combination of the receptor-specific variants results in an increased ability to kill OVCAR-3 cells.

TRAIL-DISC immunoprecipitation. Analysis of the components of the death receptor DISC formed upon treatment with flag-tag rhTRAIL WT and flag-tag rhTRAIL 4C9 indicates the ligation of DR4 and DR5 by flag-tag rhTRAIL WT. Only DR4, but not DR5, could be pulled down in the cells treated with flag-tag rhTRAIL 4C9 (Figure 6c), further showing the DR selectivity of these variants. In addition, the activation of procaspase-8 within the 4C9-mediated DISC was more pronounced than in the rhTRAIL WT-induced DISC, indicating a more efficient activation of procaspase-8 by the DR4-selective variant.

Discussion

TRAIL binds to five different receptors, of which only two, DR4 and DR5, are capable of inducing apoptosis. It is generally accepted that decoy receptors can inhibit TRAIL-induced apoptosis because they can bind TRAIL but lack a functional death domain to form the DISC. 2,9,11 However, the clarity of this decoy concept has been blurred by the observation that the presence of decoy receptors does not necessarily translate into protection against apoptosis. 21,28 In a recent study, DcR2 has been reported to heteromerize with DR5 to form inactive complexes. 14,15 Heteromerization of death receptors DR4 and DR5 may also lead to inactive complexes. 7,15 In a large number of cancer cell lines, differential contributions of the DRs have been reported, 29 but it is not yet clear whether they differ in their function in apoptotic signalling. In light of these studies, the introduction of receptor selectivity can contribute to the elucidation of the function of each death receptor.
Such variants are expected to have increased in vivo activity and are candidates for new anticancer therapies. Computational design based on the crystal structure of the TRAIL-DR5 complex previously allowed the successful introduction of DR5 selectivity by two point mutations. 19 The highly selective variant D269H/E195R was shown to be very efficient at inducing apoptosis in an ovarian carcinoma xenograft mouse model. 30 For DR4, a highly refined homology model was made, using data from mutational analysis. 18,19,23 A set of 21 single-mutation TRAIL variants predicted to show DR4-selective behaviour was designed and experimentally characterized. Two of the positions within this set were also reported from a phage display selection study. 16 All 21 mutants were characterized for their ability to bind receptors and to induce apoptosis. Several mutations showed lower affinity for DR5 and higher affinity for DR4. In most cases, these effects could be well explained upon visual inspection of the mutant models. A representative example is given by the mutation S159R, which has a large beneficial effect on the DR4/DR5 binding ratio (Supplementary Table 1). Upon mutation of Ser-159 to Arg, the conformation of the side chain of Arg-115 of DR5 is slightly changed and the hydrogen-bond interactions with His-161 are destroyed (Figure 7a). The hydrogen-bond interaction of Arg-115 with His-161 is lost and the salt bridge with Glu-155 is somewhat weakened. This results in an overall loss of interaction energy for the S159R mutation (ΔΔG_interaction: +1 kcal/mol). On the other hand, in the TRAIL-DR4 complex the residue equivalent to Arg-115 is Pro-115 and, as a consequence, no direct interactions are observed between Glu-155, Ser-159 and His-161 of TRAIL and DR4 receptor residues (Figure 7d). Mutation S159R allows Arg-159 to create a hydrogen bond with the backbone oxygen atom of Cys-113 of DR4 (Figure 7c). As no interactions are destroyed with respect to the WT situation and one new interaction is established, there is a net gain in interaction energy (ΔΔG_interaction: −1.4 kcal/mol). The loss in interaction energy between TRAIL and DR5 on the one hand, and the gain in interaction energy between TRAIL and DR4 upon the Ser to Arg mutation on the other, explain the observed increase in DR4 binding specificity. As TRAIL was reported to have a lower affinity for DR4 than for DR5, 31 a single mutation seemed insufficient to produce a variant with high affinity and selectivity for DR4. Single mutations were therefore combined based on the observed biochemical and biological properties. Two criteria were used for the selection: (1) at least a fourfold increase in the DR4/DR5 binding ratio based on pre-steady-state SPR measurements and (2) at least a threefold increased biological activity on DR4-sensitive cancer cells. Nine combinations of the selected mutations were made, and most of the new variants showed affinity improvement to DR4 and largely reduced affinity to DR5. From these, two variants were chosen for further characterization based on their biochemical and apoptosis-inducing characteristics: 4C7 (carrying mutations G131R/N199R/K201H/R149I/S159R/S215D) and 4C9 (carrying G131R/R149I/S159R/S215D). Characterization of the affinities of these two proteins for DR4 and DR5 showed that they have not only a decreased affinity for DR5, but also an increased affinity for DR4 (Table 1).
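To make the energy bookkeeping behind these FoldX numbers explicit, the sketch below encodes the definition used in Materials and Methods (interaction energy = complex energy minus the summed subunit energies), with placeholder values chosen only to reproduce the signs discussed above.

```python
# Sketch of the energy bookkeeping described in Materials and Methods: the
# interaction energy is the global energy of the complex minus the summed
# energies of the individual receptor and ligand subunits, and the reported
# DDG_interaction compares mutant against wild type. All numbers below are
# placeholders chosen to reproduce the signs discussed in the text; real
# values come from FoldX runs on the modelled complexes.

def interaction_energy(g_complex, g_receptor, g_ligand):
    """G_interaction = G(complex) - [G(receptor) + G(ligand)], in kcal/mol."""
    return g_complex - (g_receptor + g_ligand)

def ddg_interaction(mutant, wild_type):
    """DDG_interaction = G_interaction(mutant) - G_interaction(wild type)."""
    return interaction_energy(*mutant) - interaction_energy(*wild_type)

# S159R destabilizes TRAIL-DR5 (+1 kcal/mol) ...
print(ddg_interaction(mutant=(-49.0, -20.0, -30.0), wild_type=(-50.0, -20.0, -30.0)))
# ... but stabilizes TRAIL-DR4 (-1.4 kcal/mol).
print(ddg_interaction(mutant=(-51.4, -20.0, -30.0), wild_type=(-50.0, -20.0, -30.0)))
```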
The selective behaviour of these variants was also shown by immunoprecipitation experiments using Colo205 cells, where only DR4, but not DR5, could be detected upon treatment with a flag-tagged version of 4C9 (Figure 6c). Variants 4C7 and 4C9 showed a remarkable capacity to induce apoptosis in several cancer cells in vitro, including a panel of human colon adenocarcinomas (Colo205, SW948, DLD-1, HCT-15 and CL-34) and Burkitt's lymphoma (Figure 3). The lack of activity on DR5-mediated A2780 and Jurkat cells, and the comparison of activity between the BJAB and BJAB DR5-deficient cell lines with rhTRAIL WT, clearly confirmed the DR4 selectivity of the variants 4C7 and 4C9 (Figure 2). The increased activity of the variants was also evident in TRAIL-resistant MCF-7 and PANC-1 cells (Figure 3). The higher affinity and selectivity of our variants for DR4 (Table 1 and Figure 1) seem to correlate with their higher cellular activity in cell lines sensitive via the DR4 receptor. To further analyse the activity of variant 4C7, we determined caspase activation in OVCAR-3 cells using FRET and fluorescent probes (Figure 4). A much faster and stronger onset of caspase-8/10 activation could be observed for 4C7 in comparison with rhTRAIL WT, as seen by the IETDase probe cleavage, which started at very early time points with nearly eightfold faster activation when compared with rhTRAIL WT (Figure 4a and b). Downstream effector caspase activation, based on DEVDase activity using a probe containing CFP and YFP, indicated a significant cleavage rate starting at 75 min over a 30-40 min period for 4C7 (Figure 4c and d). DEVDase cleavage for rhTRAIL WT was substantially slower at the time points taken. These results allow us to conclude that variant 4C7 induces a much faster and more robust initiator and effector caspase activation than rhTRAIL WT in the OVCAR-3 cell line, and underline the efficacy of the DR4-specific variant. The high efficacy of our new DR4-selective variants, in addition to the earlier published DR5 variant D269H/E195R, 19 inspired us to test whether their combination would lead to an even higher apoptotic activity. Combination treatment using the DR4-selective variants with the DR5-selective variant enhanced the apoptosis-inducing activity when compared with the responses obtained for the single treatments (Figure 5). Analysis of response curves using different concentrations of the DR-specific variants in combination indicates a significant additive effect on cell-death induction in the ovarian carcinoma OVCAR-3 cell line. A similar effect was observed in the colon carcinoma cell lines Colo205 and CL-34 at high concentrations of the two TRAIL variants. Even though part of the effect can be explained by the higher affinities and higher association rate constants of these variants for their cognate receptor, these results also point to the inability of these variants to form inactive heteromeric receptor complexes between their target DR and the other receptors, as was shown for rhTRAIL WT. Furthermore, our DR4- and DR5-selective variants strongly inhibited the clonogenicity of OVCAR-3 ovarian carcinoma cells (Figure 6a and b), showing the enhanced efficacy of these variants compared with rhTRAIL WT in reducing the long-term survival of cancer cells.

Figure 7. Structural impressions of the area around residue 159 for S159R and rhTRAIL WT as determined by FoldX: (a and b) TRAIL-DR5 and (c and d) TRAIL-DR4. The subunits of TRAIL are depicted in lime and the receptors in green.
The template selected was 1D4V, the structure at 2.2 Å resolution of monomeric human TRAIL in complex with the ectodomain of the DR5 receptor. The homotrimer was generated using the protein quaternary structure server of the EBI (http://pqs.ebi.ac.uk), using the symmetry coordinates in the PDB file.

In summary, the newly constructed DR4-specific variants lead to the formation of high-affinity homotrimeric TRAIL-DR4 complexes. These TRAIL variants, alone or in combination with our DR5 agonist, may prove to be a promising treatment for a range of different cancers.

Materials and Methods

Modelling and computational design of variants using FoldX. Currently, only the crystal structure of TRAIL alone or in complex with DR5 is known. A homology model of DR4 was built based on the coordinates of TRAIL and DR5, from the template of TRAIL in complex with the ectodomain of DR5 (PDB ID: 1D4V), 32 exploiting the high sequence identity between DR5 and DR4 (>50%), with no insertions or deletions between the two death receptors. Binding effects of mutations situated in the TRAIL-receptor interface 18,23 allowed the construction of a refined homology model of the TRAIL-DR4 complex using the protein design capabilities of FoldX. 33,34 These coordinates were used to construct the TRAIL homotrimer using the protein quaternary structure server of the EBI (http://pqs.ebi.ac.uk). Single mutants of TRAIL were predicted by FoldX. Interaction energies were calculated by summing the energies of the individual receptor and ligand subunits and subtracting them from the global energy of the complex. A detailed description of the protein design algorithm FoldX (version 3.0) is available at http://foldx.crg.es.

Construction, expression and purification of rhTRAIL variants. Mutants were constructed by PCR using the megaprimer mutagenesis method. The polymerase used was Phusion polymerase supplied by Finnzymes (Espoo, Finland). Introduction of mutations was confirmed by DNA sequencing. RhTRAIL WT and variants were cloned into pET15b (Novagen, Madison, WI, USA) using the BamHI and NcoI sites and transformed into Escherichia coli BL21 (DE3). Homotrimeric TRAIL proteins were expressed and purified as described before. 19,23 Flag-tag rhTRAIL WT and flag-tag rhTRAIL 4C9 were constructed by introducing the sequence encoding a flag-tag N-terminally of the rhTRAIL (aa 114-281) sequence in pET15b. Analytical gel filtration, dynamic light scattering and non-reducing gel electrophoresis confirmed that rhTRAIL WT and the variants were stable trimeric molecules and did not form higher-order molecular weight aggregates.

Receptor binding by surface plasmon resonance and competitive ELISA. Binding experiments were performed using a surface plasmon resonance-based Biacore 3000 (GE Healthcare, Eindhoven, The Netherlands). Research-grade CM4 sensor chips, N-hydroxysuccinimide, N-ethyl-N′-(3-diethylaminopropyl)carbodiimide, ethanolamine hydrochloride and standard buffers, for example HBS-N, were purchased from the manufacturer (GE Healthcare). Immobilization of staphylococcal protein A (Sigma, Zwijndrecht, The Netherlands) on the sensor surface of a CM4 sensor chip was performed following a standard amine coupling procedure. Protein A was coated at a level of ~1000 response units.
DR4-Ig and DR5-Ig receptors (R&D Systems, Minneapolis, MN, USA) were captured at a high flow rate to low densities (5-20 RU), resulting in binding of a trimeric TRAIL molecule to only one receptor molecule and allowing global fitting of the data to a 1:1 Langmuir model. 35 A 100 µl aliquot of rhTRAIL WT or variants was injected at concentrations ranging from 1 to 250 nM at 50 µl/min and at 37 °C, using HBS-N supplemented with 0.005% surfactant P20 as running and sample buffer. Binding of ligands to the receptors was monitored in real time. Between cycles, the protein A sensor surface was regenerated using 10 mM glycine, pH 1.7, with a contact time of 25 s. For competitive ELISA, Nunc Maxisorp plates were coated for 2 h with DR4-Ig (100 ng per well) in 0.1 M sodium carbonate/bicarbonate buffer (pH 8.6) and the remaining binding places were subsequently blocked with 2% BSA for 1 h. After washing six times with Tris-buffered saline/0.5% Tween-20 (TBST) (pH 7.5), serial dilutions of soluble DR4-, DR5-, DcR1- or DcR2-Ig (0-5000 ng/ml) and rhTRAIL WT or mutants (100 ng/ml) in PBS (pH 7.4), previously incubated for 1.5 h at room temperature, were added to the wells and incubated for 1 h. After washing, a 1:200 dilution of anti-TRAIL antibody (R&D Systems) was added and incubated for 1 h at room temperature and, after washing with TBST, subsequently incubated with a 1:25 000 dilution of a horseradish peroxidase-conjugated swine anti-goat antibody. Finally, 100 µl of one-step Turbo TMB solution (Pierce, Rockford, IL, USA) was added. The reaction was quenched with 100 µl of 1 M sulphuric acid and the absorbance was measured at 450 nm on a microplate reader (Thermo Labsystems, Breda, The Netherlands). Binding of rhTRAIL or variants to immobilized DR4-Ig with 0 ng per well of the soluble receptors was taken as 100%, and binding at other concentrations of soluble receptors was calculated relative to this value.

TRAIL receptor expression in cell lines. Cells were removed from culture dishes, harvested by centrifugation and washed twice with 1% BSA in PBS. Cells were incubated with a 1:100 dilution of primary antibodies (DR4 and DR5: neutralizing mouse monoclonal antibodies (Alexis, San Diego, CA, USA); DcR1 and DcR2: neutralizing goat polyclonal antibodies (R&D Systems)) in 1% BSA in PBS for 40 min on ice. After two washes with 1% BSA in PBS, cells were resuspended in a 1:50 dilution of FITC-labelled secondary antibody and incubated for 40 min on ice. Excess secondary antibody was removed by washing first in 1% BSA in PBS and then in PBS. Cells were fixed in 1% formaldehyde/PBS before analysis by flow cytometry (FACSCalibur, Becton Dickinson, San Jose, CA, USA).

Cell viability and apoptosis assays. To determine sensitivity to rhTRAIL and variants, cells were plated in 96-well plates, allowed to adhere for 24 h and then treated with rhTRAIL at various concentrations, ranging from 1 to 250 ng/ml. After 24 h of incubation, the cells were subjected to an MTS viability assay (Promega, Leiden, The Netherlands) following the manufacturer's protocol. Cell viability was determined after 1-2 h of incubation by measuring the absorption at 490 nm on a microplate reader (Thermo Labsystems). Apoptosis induction was measured by Annexin V staining and quantified by flow cytometric analysis as described before. 23

Live cell microscopy: caspase activation in living cells using FRET.
Time-lapse images were collected using a Leica SP2 AOBS CLSM microscope equipped with an environmental chamber at 37 °C and 5% CO2, with ×10 or ×20 objective magnification. Images were collected every minute. OVCAR-3 ovarian carcinoma cell lines expressing pSCAT8, producing CFP-IETD-Venus, 25 and pDEVD, containing CFP-DEVD-YFP, 36 were derived by transfection with Fugene 6 (Roche, Almere, The Netherlands) according to the manufacturer's protocol. Plasmids pSCAT8 and pDEVD were a kind gift from Dr. Markus Rehm. Ratio images were obtained using the ImageJ software and available custom plugins by calculating the ratio from the obtained CFP and YFP images corrected for background. Signals were normalized by subtracting the minimum value across all time points from each single time-course experiment.

Clonogenic survival assay. The clonogenic ability of ovarian carcinoma cells (OVCAR-3) was determined by plating the cells at appropriate dilutions in DMEM medium, treating them for 24 h with rhTRAIL WT or variants and counting the colonies after 2 weeks. Briefly, 100 or 200 cells were plated out in 94 mm Petri dishes (Greiner) in 10 ml DMEM medium containing 10% fetal bovine serum (Invitrogen), penicillin (10 U/ml) and streptomycin (10 µg/ml), and incubated for 16 h at 37 °C and 5% CO2. The cells were treated with 5 ng/ml of rhTRAIL WT or variants for 24 h. After treatment, the medium was replaced and the cells were incubated for 2 weeks at 37 °C and 5% CO2. The colonies were fixed and stained using 0.1% Brilliant Blue (Sigma), 50% methanol and 10% acetic acid, and destained with wash buffer (10% methanol and 7% acetic acid). Colonies containing more than 50 cells were scored.

TRAIL-DISC immunoprecipitation. The anti-FLAG antibody (M2, Sigma) was covalently conjugated to epoxy-coated Dynabeads (Invitrogen) following the manufacturer's instructions (5 µg antibody per 1 mg of beads). Colo205 cells (approximately 50 mg per sample) were treated with 250 ng/ml flag-tag rhTRAIL WT or flag-tag rhTRAIL 4C9 for 15 min. Cells were harvested by scraping in the medium and centrifugation. After washing with PBS, the cell pellet was weighed and the cells were lysed with nine volumes of Extraction Buffer A (Dynabeads Co-immunoprecipitation kit, Invitrogen) supplemented with 50 mM NaCl, and incubated on ice for 15 min. The lysates were cleared by centrifugation and mixed with 1.5 mg of antibody-coupled Dynabeads. The lysates were rotated with the beads for 3 h at 4 °C, after which the supernatant was separated from the beads, the beads were washed and the DISC complex was eluted following the manufacturer's instructions. The immunoprecipitate was denatured in Laemmli buffer before loading on gels and analysis of the DISC components.

Conflict of interest

Wim J Quax, Luis Serrano and Afshin Samali are founding members of TRISKEL Therapeutics.
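As a closing illustration of the ratio-imaging analysis described in the live-cell microscopy section, the sketch below reproduces the stated pipeline (background correction, CFP/YFP ratio, subtraction of the per-time-course minimum). Array names, shapes and the toy numbers are illustrative only.

```python
import numpy as np

# Sketch of the ratio-imaging analysis described above: background-correct
# the CFP and YFP channels, compute the per-pixel CFP/YFP ratio for every
# time point, and normalize each time course by subtracting its minimum
# across all time points. Array names, shapes and the toy numbers are
# illustrative only.

def fret_ratio_timecourse(cfp, yfp, bg_cfp, bg_yfp, eps=1e-6):
    """cfp, yfp: (T, H, W) image stacks; bg_*: scalar background levels."""
    cfp_corr = np.clip(cfp - bg_cfp, eps, None)
    yfp_corr = np.clip(yfp - bg_yfp, eps, None)
    ratio = cfp_corr / yfp_corr                 # rises as the probe is cleaved
    ratio -= ratio.min(axis=0, keepdims=True)   # per-pixel minimum over time
    return ratio

# Toy data: probe cleavage after t = 5 raises CFP and lowers YFP (FRET loss).
t, h, w = 10, 4, 4
cfp = np.full((t, h, w), 100.0); cfp[5:] += 60.0
yfp = np.full((t, h, w), 200.0); yfp[5:] -= 60.0
print(fret_ratio_timecourse(cfp, yfp, bg_cfp=10.0, bg_yfp=10.0)[:, 0, 0])
```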
TubeFormer-DeepLab: Video Mask Transformer

We present TubeFormer-DeepLab, the first attempt to tackle multiple core video segmentation tasks in a unified manner. Different video segmentation tasks (e.g., video semantic/instance/panoptic segmentation) are usually considered as distinct problems. State-of-the-art models adopted in the separate communities have diverged, and radically different approaches dominate in each task. By contrast, we make a crucial observation that video segmentation tasks could be generally formulated as the problem of assigning different predicted labels to video tubes (where a tube is obtained by linking segmentation masks along the time axis), where the labels may encode different values depending on the target task. This observation motivates us to develop TubeFormer-DeepLab, a simple and effective video mask transformer model that is widely applicable to multiple video segmentation tasks. TubeFormer-DeepLab directly predicts video tubes with task-specific labels (either pure semantic categories, or both semantic categories and instance identities), which not only significantly simplifies video segmentation models, but also advances state-of-the-art results on multiple video segmentation benchmarks.

Introduction

We observe that video segmentation tasks could be formulated as partitioning video frames into tubes with different predicted labels, where a tube contains segmentation masks linked along the time axis. Based on the target task, the predicted labels may encode only semantic categories (e.g., Video Semantic Segmentation (VSS) [8,64]), or both semantic categories and instance identities (e.g., Video Instance Segmentation (VIS) [74,84] for only foreground 'things', or Video Panoptic Segmentation (VPS) [44,79] for both foreground 'things' and background 'stuff') (Fig. 1). However, the underlying similarity of several video segmentation tasks (i.e., assigning tubes with predicted labels) has long been overlooked, and thus models developed for video semantic, instance, and panoptic segmentation have fundamentally diverged. For example, some VSS methods [29,94] warp features between video frames, while the modern VIS model [5] predicts hundreds of frame-level instance masks [34] and then propagates them to other neighboring frames. To make matters more complicated, state-of-the-art VPS methods [68,80] adopt separate prediction branches, specific to semantic segmentation, instance segmentation, and object tracking, respectively. In this work, instead of exacerbating the bifurcation between video segmentation models, we take a step back and rethink the following question: Can we exploit the similar nature of video segmentation tasks, and develop a single model that is both effective and generally applicable? To answer this, we propose TubeFormer-DeepLab, which builds upon mask transformers [75] for video segmentation by directly predicting class-labeled tubes, where the labels encode different values depending on the target task. Specifically, similar to other Transformer architectures [10,73], TubeFormer-DeepLab extends the mask transformer [75] to generate a set of pairs, each containing a class prediction and a tube embedding vector. The tube embedding vector, multiplied by the video pixel embedding features obtained by a convolutional network [48], yields the tube prediction.
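A minimal sketch of this one-shot prediction is given below: N tube embedding vectors are dotted with the video pixel embedding features to produce N tube masks. The shapes, random inputs and softmax placement are illustrative; the actual heads follow the mask-transformer design of [75].

```python
import numpy as np

# Minimal sketch of the tube prediction described above: N tube embedding
# vectors (from the global memory) are dotted with the video pixel embedding
# features to produce N tube masks in one shot. Shapes, the random inputs,
# and the softmax placement are illustrative assumptions.

T, H, W, C, N, num_classes = 2, 8, 8, 128, 16, 20
x_v = np.random.randn(T, H, W, C)                    # video pixel features
w = np.random.randn(N, C)                            # N tube embeddings
class_logits = np.random.randn(N, num_classes + 1)   # +1 for "none" (empty)

# Tube prediction: dot product over channels -> logits of shape (T, H, W, N).
tube_logits = np.einsum('thwc,nc->thwn', x_v, w)

# Per-pixel tube assignment: softmax over the N tubes, then argmax.
z = tube_logits - tube_logits.max(-1, keepdims=True)  # numerical stability
tube_probs = np.exp(z) / np.exp(z).sum(-1, keepdims=True)
tube_id = tube_probs.argmax(-1)                       # (T, H, W) tube-ID map
print(tube_logits.shape, tube_id.shape)               # (2, 8, 8, 16) (2, 8, 8)
```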
As a result, TubeFormer-DeepLab presents the first attempt to tackle multiple core video segmentation tasks in a general framework, without the need to adapt the system with any task-specific design. Naïvely applying the image-level mask transformer [75] to the video domain does not yield a satisfactory result, mainly due to the difficulty of learning attentions over video-clip (i.e., multi-frame) features with large spatial resolutions. To alleviate the issue, we introduce the latent dual-path transformer block, which is in charge of passing messages between video-frame (i.e., single-frame) features and a latent memory, followed by the global dual-path transformer block, which learns the attentions between video-clip features and a global memory. This hierarchical dual-path transformer framework facilitates attention learning and significantly improves the video segmentation results. Interestingly, as shown in Fig. 2, our latent memory learns task-specific attention, while the global memory learns spatio-temporally clustered attention for individual tube regions. Additionally, we split the global memory into two sets, thing-specific and stuff-specific global memory, with the motivation of exploiting the different nature of 'thing' (countable instances) and 'stuff' (amorphous regions). During inference, in practice we can only fit a video clip (i.e., a short video sequence) for video segmentation. The whole video sequence segmentation result is thus obtained by applying video stitching [69] to merge clip segmentation results. To enforce consistency between video clips, we additionally propose a temporal consistency loss that encourages the model to learn consistent predictions in the overlapping frames between clips. Finally, we propose a simple and effective data augmentation policy by extending the image-level thing-specific copy-paste [27,32]. Our method, named clip-paste (clip-level copy-paste), randomly pastes either 'thing' or 'stuff' (or both) regions from a video clip to the target video clip.

Method

In this section, we introduce the formulation of several video segmentation tasks, followed by a general formulation that inspires our TubeFormer-DeepLab. We then present its model design, training and inference strategies.

Video Segmentation Formulation

Let us denote by $v \in \mathbb{R}^{T\times H\times W\times 3}$ an input video clip containing $T$ video frames of spatial size $H \times W$ ($T$ could be equal to the video sequence length if memory allows). The video clip is annotated with a set of class-labeled tubes (a tube is defined as segmentation masks linked along the time axis) $\{y_i = (m_i, c_i)\}_{i=1}^{K}$, where the ground truth tubes $m_i \in \{0, 1\}^{T\times H\times W}$ do not overlap with each other, and $c_i$ denotes the ground truth class label of tube $m_i$. Below, we briefly introduce several tasks.

Video Semantic Segmentation (VSS) is typically formulated as per-video-pixel classification, where the pixel features for classification are enriched by warping [94] or aggregating [64] features from neighboring frames. Formally, the model predicts the probability distribution over a predefined set of categories $C = \{1, \dots, D\}$ for every video pixel: $\hat{p}_i \in \Delta^D$, $\forall i \in \{1, 2, \dots, T\times H\times W\}$, where $\Delta^D$ is the $D$-dimensional probability simplex. The final segmentation output $\hat{y}$ is then obtained by taking its argmax (i.e., $\hat{y}_i = \arg\max_c \hat{p}_i(c)$, $\forall i \in \{1, 2, \dots, T\times H\times W\}$).

Video Instance Segmentation (VIS) requires segmenting and temporally linking object instances in the video.
For each detected foreground 'thing' $i$ in the video, the model predicts a video tube (i.e., video-level instance mask track) $\hat{m}_i \in [0, 1]^{T\times H\times W}$ with a probability distribution $\hat{p}_i$ over $C$ defined for only thing classes. Depending on the target dataset or evaluation metric, the model may generate overlapping video tubes (e.g., YouTube-VIS [84] adopts track-mAP, allowing overlapping predicted tubes, while KITTI-MOTS [74] adopts HOTA [62], disallowing them).

Video Panoptic Segmentation (VPS) requires temporally consistent semantic and instance segmentation results for both 'thing' and 'stuff' classes. Specifically, the model predicts a set of non-overlapping video tubes $\{\hat{y}_i = (\hat{m}_i, \hat{p}_i)\}$, where $\hat{m}_i \in \{0, 1\}^{T\times H\times W}$ denotes the predicted tube, and $\hat{p}_i(c)$ denotes the probability of assigning class $c$ to tube $\hat{m}_i$, belonging to a predefined category set $C$ that contains both 'thing' and 'stuff' classes.

Depth-aware Video Panoptic Segmentation (DVPS) builds on top of VPS by additionally requiring the model to estimate the depth value of each pixel. Similar to the VPS output, the prediction has the following format: $\{(\hat{m}_i, \hat{p}_i, \hat{d}_i)\}$, where $\hat{d}_i \in (0, d_{max}]$ denotes the estimated depth values and $d_{max}$ is the maximum depth value specified in the target dataset. Accordingly, the dataset contains ground truth depth.

General task formulation. Despite the superficial differences between tasks, we discover the underlying similarity that video segmentation tasks can be generally formulated as the problem of assigning different predicted labels to video tubes, where the labels may encode different values depending on the target task. For example, if only semantic categories are predicted, it becomes video semantic segmentation. Similarly, if both semantic categories and instance identities are required (i.e., one predicted tube for each category-identity pair), it then becomes either video instance segmentation (if only foreground 'thing' classes are considered) or video panoptic segmentation. This motivates us to develop a general video segmentation model that directly predicts class-labeled tubes (and, optionally, depth if required).

TubeFormer-DeepLab Architecture

We first introduce TubeFormer-DeepLab-Simple, our video-level baseline, which will be improved by our proposed latent dual-path transformer, resulting in the final TubeFormer-DeepLab.

TubeFormer-DeepLab-Simple. We adopt the per-clip pipeline, which takes a video clip and outputs clip-level results. Inspired by [75], our TubeFormer-DeepLab-Simple integrates a CNN backbone and a global memory feature in a dual-path architecture, i.e., the global dual-path transformer.

Figure 3. TubeFormer-DeepLab architecture overview. TubeFormer-DeepLab extends the mask transformer [75] to generate a set of pairs, each containing a class prediction $p(c)$ and a tube embedding vector $w$. The tube embedding vector, multiplied by the video pixel embedding features $x_v$ obtained by a convolutional network, yields the tube prediction $\hat{m}$. We introduce a hierarchical structure with the latent dual-path transformer block, which is in charge of passing messages between frame-level features $x_f$ and a latent memory $x_l$, followed by the global dual-path transformer block, which learns the attentions between video-clip features $x_v$ and a global memory $x_m$.

Given an input video clip $v$, the CNN backbone processes the input frames independently and generates pixel features $x_v \in \mathbb{R}^{T\times H\times W\times C}$, where $C$ is the number of channels. The pixel self-attention is performed at the frame level (frame-to-frame, F2F) via an axial-attention block [76].
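A hedged sketch of the axial factorization is shown below: full 2D self-attention is split into attention along the height axis and then along the width axis. Multi-head splits, learned projections and the positional terms of the actual block [76] are omitted.

```python
import numpy as np

# Hedged sketch of 1D axial attention: full 2D self-attention is factorized
# into attention along the height axis and then along the width axis, which
# keeps the cost linear in each spatial dimension. Multi-head splits, learned
# projections and the positional terms of the actual block [76] are omitted.

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def axial_attention_1d(x, axis):
    """Self-attention along one spatial axis of x with shape (H, W, C)."""
    x = np.moveaxis(x, axis, 0)                       # attended axis first
    scores = np.einsum('ijc,kjc->jik', x, x) / np.sqrt(x.shape[-1])
    out = np.einsum('jik,kjc->ijc', softmax(scores), x)
    return np.moveaxis(out, 0, axis)

h, w, c = 8, 8, 16
feat = np.random.randn(h, w, c)                       # one frame's features
feat = axial_attention_1d(feat, axis=0)               # height-axis attention
feat = axial_attention_1d(feat, axis=1)               # width-axis attention
print(feat.shape)                                     # (8, 8, 16)
```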
Afterwards, the global dual-path transformer operates in a per-clip manner, taking the flattened video pixel features $x_v \in \mathbb{R}^{THW\times C}$ and a 1D global memory $x_m \in \mathbb{R}^{N\times C}$ of length $N$ (i.e., the size of the prediction set). Passing through the global dual-path transformer, we expect three attentions: (1) memory-to-video (M2V) attention (in which the video features encode per-clip information into the memory feature), (2) memory-to-memory (M2M) self-attention, and (3) video-to-memory (V2M) attention (in which the video pixel features refine themselves by receiving tube-level information gathered in the global memory). The global dual-path transformer blocks can be stacked multiple times at any layers of the network. On top of the global memory, there are two output heads: a segmentation head and a class head, each composed of two fully-connected (FC) layers. The global memory of size $N$ is independently passed to the two heads, resulting in $N$ unique tube embeddings $w \in \mathbb{R}^{N\times C}$ and $N$ corresponding class predictions $p(c) \in \mathbb{R}^{N\times |C|}$. Note that the set of possible classes includes a "none" category ∅ in case an embedding does not correspond to any region in a clip. Our video tube prediction $\hat{m}$ is computed in one shot as a dot-product between the decoded video pixel features $x_v$ and the tube embeddings $w$:

$\hat{m} = x_v \cdot w^{\top}$.   (1)

The final video-clip segmentation can be obtained by combining the $N$ binary video tubes with their corresponding class predictions.

TubeFormer-DeepLab with Latent Dual-Path Transformer. Modeling long-range interactions in video-clip (i.e., multi-frame) features is especially difficult when dealing with high-resolution inputs or a large number of input frames. To both alleviate this issue and facilitate attention learning, we propose a hierarchical structure, which allows two levels of attention mechanisms: frame-level, followed by video-level. Note that the video-level attention is performed by the aforementioned global dual-path transformer. Prior to the global dual-path transformer, we introduce a new latent dual-path transformer block in charge of passing messages between frame-level features and a latent memory. It processes individual video frames in parallel (batch-wise). Our latent memory is inspired by graphical models with latent representations [40,47,90], allowing a low-rank representation of a graph affinity of high complexity. Concurrently with IFC [40], we discovered that latent features facilitate attention learning; however, we deploy them in a different framework (e.g., a dual-path transformer and no cross-frame communication). Specifically, the initial latent memory $x_l \in \mathbb{R}^{L\times C}$ is copied per frame and paired with each frame's (flattened) features $x_f \in \mathbb{R}^{HW\times C}$ to construct the input. Passing through the latent dual-path transformer, the latent memory first collects messages from the frame features via latent-to-frame (L2F) attention and performs latent-to-latent (L2L) self-attention among its elements. Afterwards, the per-frame knowledge from the latent memory is propagated back to the frame features via frame-to-latent (F2L) attention. Note that the latent memory features are trainable parameters, like the global memory features; however, they are only deployed in the latent space (i.e., intermediate layers) and are not used in the final output layers. As shown in Fig. 3, our hierarchical dual-path transformer blocks consist of a series of one axial-attention block, the latent dual-path transformer, and the global dual-path transformer.
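To make the directions of message passing concrete, the sketch below runs the three attention flows of the global dual-path block. Learned query/key/value projections, FFNs, normalization and multi-head splits of the real block are omitted; only the message-passing directions are shown.

```python
import numpy as np

# Hedged sketch of the three attention flows in the global dual-path block:
# memory-to-video (M2V), memory-to-memory (M2M) and video-to-memory (V2M).
# Learned projections and the rest of the transformer block are omitted.

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def attend(q, kv):
    """Single-head scaled dot-product attention: rows of q attend to kv."""
    scores = q @ kv.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ kv

THW, N, C = 2 * 8 * 8, 16, 32
x_v = np.random.randn(THW, C)    # flattened video pixel features
x_m = np.random.randn(N, C)      # 1D global memory of length N

x_m = x_m + attend(x_m, x_v)     # M2V: memory gathers per-clip information
x_m = x_m + attend(x_m, x_m)     # M2M: memory self-attention
x_v = x_v + attend(x_v, x_m)     # V2M: pixels read back tube-level information
print(x_v.shape, x_m.shape)      # (128, 32) (16, 32)
```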
The stacking of multiple blocks alternates the latent and the global communications, allowing the pixel features to refine themselves by attending to both frame-level and video-level memory, and vice versa. This in turn enriches the features of all three paths (the pixel, latent-memory and global-memory paths) and enables learning more comprehensive representations of the given video clip.

Global memory with split thing and stuff. To further improve the segmentation quality, we propose to split the global memory into two sets: thing-specific and stuff-specific global memory. Originally, the global memory in [75] deals with thing masks and stuff masks in a unified manner. However, this design ignores the natural difference between them: there can be multiple instances of the same thing class in an image, but at most one mask is allowed for each stuff class. We thus allocate the last $|C_{stuff}|$ out of the $N$ elements in the global memory specifically for predicting stuff classes. The ordering is enforced by assigning the stuff-specific global memory to the ground truth stuff classes, instead of including them in the bipartite matching.

Training Strategy

VPQ-style loss. To train TubeFormer-DeepLab for various video segmentation tasks in a unified manner, we adopt a VPQ-style loss that directly optimizes the set of class-labeled tubes. Similar to the image-level PQ-style loss [75], we draw inspiration from video panoptic quality (VPQ) [44] and approximately optimize VPQ within a video clip. To start with, a VPQ-style similarity metric between a class-labeled ground truth tube $y_i = (m_i, c_i)$ and a predicted tube $\hat{y}_j = (\hat{m}_j, \hat{p}_j(c))$ can be defined as

$\mathrm{sim}(y_i, \hat{y}_j) = \hat{p}_j(c_i) \times \mathrm{Dice}(m_i, \hat{m}_j)$,

where $\hat{p}_j(c_i)$ denotes the probability of predicting the correct tube class $c_i$ and $\mathrm{Dice}(m_i, \hat{m}_j) \in [0, 1]$ measures the Dice coefficient between the predicted tube $\hat{m}_j$ and the ground truth tube $m_i$. We match the predicted tubes to the ground truth tubes, and optimize the predictions by maximizing the total VPQ-style similarity. The implementation details follow the PQ-style loss in [75]. In addition, we generalize the auxiliary losses used in [75] to video clips, resulting in a tube-ID cross-entropy loss, a video semantic segmentation loss, and a video instance discrimination loss.

Shared semantic and panoptic prediction. Originally, the auxiliary semantic segmentation loss in [75] is applied to the backbone feature with a separate semantic decoder. Instead, we propose to apply the loss directly to the decoded video pixel features $x_v$ (cf. Eq. (1)) with a linear layer, which learns better features for segmentation.

Temporal consistency loss. The VPQ-style loss benefits the learning of spatial-temporal consistency within an input clip. To further achieve clip-to-clip consistency over a longer video, we propose a temporal consistency loss applied between clips. Specifically, we minimize the distance between the $N$ tube logits predicted from the overlapping frames of two clips, using the L1 loss as the consistency metric. The loss is back-propagated through the dot-product of the pixel features and the $N$ global memory features, affecting both the pixel and global memory paths. TubeFormer-DeepLab thereby achieves implicit multi-clip consistency, which makes our training objective symmetrical to the whole-video inference pipeline (Sec. 3.4).

Clip-level copy-paste. Additionally, we propose a simple and effective data augmentation policy by extending the image-level thing-specific copy-paste [27,32].
Our augmentation method, named clip-paste (clip-level copy-paste), randomly pastes either 'thing' or 'stuff' (or both) region tubes from a video clip to the target video clip. We use clip-paste with a probability of 0.5.

Depth prediction branch. To grant TubeFormer-DeepLab the ability to perform monocular depth estimation, we add a small depth prediction module (i.e., ASPP [14] and the DeepLabv3+ lightweight decoder [17]) on top of the CNN backbone features. Note that we found the performance to degrade slightly if we add the depth prediction to the decoded video pixel features $x_v$, indicating that it is not beneficial to share depth estimation with segmentation prediction in our case. We apply a sigmoid to constrain the depth prediction to the range (0, 1), and then multiply it by the maximum depth. Following [69], we use the combination of the scale-invariant logarithmic error [25] and the relative squared error [31] as the training loss. The depth loss weight is set to 100 when jointly trained with the other losses.

Inference Strategy

Clip-level inference. The clip-level segmentation is inferred by simply performing argmax twice. Specifically, a class label is predicted for each tube: $\hat{c}_i = \arg\max_c \hat{p}_i(c)$. Then, a tube-ID $\hat{z}_{t,h,w}$ is assigned per pixel: $\hat{z}_{t,h,w} = \arg\max_i \hat{m}_{i,t,h,w}$. In practice, our inference sets tube-IDs with class confidence below 0.7 to void. For video instance segmentation, we also explore a per-mask assignment scheme [21,87], which treats the prediction of each object query as one object mask proposal.

Video-level inference. At the clip level, TubeFormer-DeepLab outputs temporally consistent results for $T$ video frames. To obtain the video-level prediction, we perform clip-level inference for every $T$ consecutive frames with $T - 1$ overlapping frames (i.e., we move along the temporal axis by only one frame at each inference step). The clip-level results are then stitched together by matching tubes in the overlapping frames based on their IoUs, similar to [69].

Datasets

KITTI-STEP [79] is a new video panoptic segmentation dataset that additionally annotates semantic segmentation for KITTI-MOTS [74]. It contains 19 semantic classes (similar to Cityscapes [22]), among which two classes ('pedestrians' and 'cars') come with tracking IDs. For evaluation, KITTI-STEP adopts STQ (segmentation and tracking quality) [79], the geometric mean of SQ (segmentation quality) and AQ (association quality).

VIPSeg [63] is also a new video panoptic segmentation dataset for diverse in-the-wild scenarios. It contains 124 semantic classes (58 'thing' and 66 'stuff' classes) with 3536 videos, where each video spans 3 to 10 seconds.

VSPW [64] is a recent large-scale video semantic segmentation dataset containing 124 semantic classes. VSPW adopts mIoU as the evaluation metric.

YouTube-VIS [84] comes in two versions for video instance segmentation: YouTube-VIS-2019 contains 40 semantic classes, and YouTube-VIS-2021 is an improved version with a higher number of instances and videos. YouTube-VIS adopts track mAP for evaluation.

SemKITTI-DVPS [69] is a new dataset for depth-aware video panoptic segmentation, obtained by projecting the 3D point cloud panoptic annotations of SemanticKITTI [3] onto 2D image planes. It contains 19 classes, among which 8 are annotated with tracking IDs. For evaluation, SemKITTI-DVPS uses DSTQ (depth-aware STQ), which considers a depth inlier metric [25] in addition to STQ.
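Before turning to implementation details, the sketch below makes the VPQ-style matching from the training strategy concrete: the similarity is the predicted class probability for the ground truth class times the Dice coefficient of the tubes, and the assignment maximizing total similarity is found by bipartite matching. Shapes and the SciPy matcher are illustrative; the actual loss follows the PQ-style loss of [75].

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Sketch of the VPQ-style matching from the training strategy:
# sim(y_i, y_hat_j) = p_hat_j(c_i) * Dice(m_i, m_hat_j), maximized over a
# bipartite matching between ground truth and predicted tubes.

def dice(m, m_hat, eps=1e-6):
    """Dice coefficient between a binary GT tube and a soft predicted tube."""
    inter = (m * m_hat).sum()
    return (2 * inter + eps) / (m.sum() + m_hat.sum() + eps)

def vpq_similarity(gt_masks, gt_classes, pred_masks, pred_probs):
    """sim[i, j] = pred_probs[j, gt_classes[i]] * Dice(gt_masks[i], pred_masks[j])."""
    sim = np.zeros((len(gt_masks), len(pred_masks)))
    for i, (m, c) in enumerate(zip(gt_masks, gt_classes)):
        for j, m_hat in enumerate(pred_masks):
            sim[i, j] = pred_probs[j, c] * dice(m, m_hat)
    return sim

# Toy example: 2 GT tubes and 3 predictions over a (T, H, W) = (2, 4, 4) clip.
gt = np.zeros((2, 2, 4, 4)); gt[0, :, :2] = 1; gt[1, :, 2:] = 1
pred = np.random.rand(3, 2, 4, 4)
probs = np.random.dirichlet(np.ones(5), size=3)   # 4 classes + "none"
sim = vpq_similarity(gt, [1, 3], pred, probs)
rows, cols = linear_sum_assignment(-sim)          # maximize total similarity
print(list(zip(rows.tolist(), cols.tolist())))
```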
Implementation Details

TubeFormer-DeepLab builds upon MaX-DeepLab [75] with the official codebase [78]. The hyper-parameters mostly follow the settings of [75]. Unless specified, we use their small model MaX-DeepLab-S, which augments ResNet-50 [35] with axial-attention blocks [76] in the last two stages (i.e., stage-4 and stage-5). We also experiment with scaling up the backbone [16] by stacking the axial-attention blocks in stage-4 n times, and refer to these models as TubeFormer-DeepLab-Bn in the experiments. For VPS, we pretrain the models on Cityscapes [22] and COCO [56], while for the other experiments, we only pretrain on COCO. The pretraining procedure is similar to prior works [5,36,79]. Using the pretrained weights, TubeFormer-DeepLab is trained on the target datasets with a batch size of 16, using T = 2 for all datasets except T = 5 for the YouTube-VIS dataset. We use a global memory size of N = 128 (i.e., the output size), a latent memory size of L = 16, and C = 128 channels. We use 'TF-DL' to denote TubeFormer-DeepLab in the results.

Main Results

[VPS] We evaluate TubeFormer-DeepLab on the challenging video panoptic segmentation dataset KITTI-STEP [79].

[VIS] Tab. 5 and 6 show the comparison with state-of-the-art methods on the YouTube-VIS 2019 and 2021 datasets [84]. Note that TubeFormer-DeepLab predicts a single unique mask per object, while other methods often generate multiple overlapping masks, which are favored by the AP metric. Among end-to-end methods, our TubeFormer-DeepLab-B4 outperforms VisTR [77] by +7.4 AP, and IFC [40] by +2.9 AP. Our model with T = 5 sets the highest scores among methods that employ a small value of T. Also, our gains in AR_1 are significant, indicating the benefit of TubeFormer-DeepLab in the non-overlapping segmentation scenario. Our model performs comparably to Seq Mask R-CNN [54]. We point out that TubeFormer-DeepLab is an end-to-end near-online method, while Seq Mask R-CNN relies on an STM [66]-like structure to propagate mask proposals through the whole sequence, and is thus offline (T = 36).

[DVPS] We evaluate TubeFormer-DeepLab on the SemKITTI-DVPS dataset [69] for depth-aware video panoptic segmentation. Tab. 7 shows the test set results. Adding a depth prediction branch to the exact same TubeFormer-DeepLab used for KITTI-STEP outperforms ViP-DeepLab [69] by +3.4 DSTQ and achieves a new state of the art of 67.0 DSTQ.

Ablation Studies

We provide ablation studies on the KITTI-STEP val set [79]. To compensate for training noise, we report the mean of three runs for every ablation study.

Hierarchical dual-path transformer. In Tab. 8a, we verify that the gains demonstrated by TubeFormer-DeepLab come from the proposed hierarchical dual-path transformer. Note that our baseline method (TubeFormer-DeepLab-Simple) already uses the axial attentions and the global memory. Introducing the new latent memory and its communication with the video-frame features (F-L attention: L2F, L2L, and F2L) brings a large improvement of +1.7 STQ. We also ablate adding attentions between the global memory and the latent memory (M2L and L2M), which show no improvements. This suggests that the frame-latent (F-L) attention is sufficient to build effective hierarchical attentions between the latent and the global dual-path transformers. We also ablate different latent memory sizes, and set the default size L to 16.

Training strategy.
Training strategy. In addition, Tab. 8b shows that the proposed temporal consistency loss helps TubeFormer-DeepLab to learn clip-to-clip consistency, and improves the inference on videos longer than the training clip length (T), as demonstrated by a +0.5 STQ gain. The proposed clip-level copy-paste (clip-paste) augments more training samples for tube-level segmentation, and further improves by +0.9 STQ. Scaling. We study the scaling of TubeFormer-DeepLab in Tab. 8c. Pretraining on the ImageNet-22k dataset brings +1.6 STQ, and adding COCO to the training further gives +1.9 STQ. We also explore scaling up the backbone by stacking the axial-attention blocks in stage-4 by n times (TubeFormer-DeepLab-Bn). Each increase of n introduces +13M parameters. We notice increasing the stack from n = 1 to n = 3 improves the STQ from 73.19 to 74.25. Further scaling to n = 4 starts to saturate, probably limited by the scale of the KITTI-STEP dataset. We observe TubeFormer-DeepLab can further scale to n = 4 on larger-scale datasets.

(Tab. 8a caption: varying transformer attention types. Frame-latent (F-L) attention is introduced in the proposed latent dual-path transformer, and includes latent-to-frame, latent-to-latent, and frame-to-latent attentions. We also ablate memory-to-latent (M2L) and latent-to-memory (L2M) attentions, and different latent memory sizes L.)

Architectural improvements. We ablate our new architectural designs: (1) sharing the semantic and panoptic predictions, and (2) splitting the global memory for separate thing and stuff classes. As shown in Tab. 8d, we observe a performance drop of -1.1 STQ by reverting either change (1) or (2) from TubeFormer-DeepLab.

Visualization

In Fig. 4, we visualize how the proposed hierarchical dual-path transformer performs attention onto an input clip of three consecutive frames. We first visualize the global memory attention by selecting four output regions of interest from the TubeFormer-DeepLab video panoptic prediction. We probe the attention weights between the four tube-specific global memory embeddings and all the pixels. We see the global memory attention is spatio-temporally well separated for individual thing or stuff tubes. In addition, we select four latent memory indices and visualize their attention maps in Fig. 4c. We find that some latent memory learns to spatially specialize on certain areas (left vs. right side of the scene) or attends to semantically similar regions (cars or backgrounds) to facilitate per-frame attention. With the hierarchical attentions made by the global and latent dual-path transformers, TubeFormer-DeepLab can be a successful tube transformer. Finally, we provide more visualizations for each video segmentation task in Sec. 6 and video prediction results at https://youtu.be/twoJyHpkTbQ.

More Experimental Results

In this section, we provide more experimental results, comparing our methods with published works in detail. We do not include the unpublished and concurrent ICCV 2021 challenge entries, which usually adopt complicated pipelines, e.g., model ensembles, separate models for different sub-tasks (e.g., tracking and segmentation), multi-scale inference, or pseudo labels. In the tables, we explicitly list the adopted backbones and decoders for a detailed comparison. We note that most of the state-of-the-art approaches for different video segmentation tasks have fundamentally diverged, while our proposed TubeFormer-DeepLab is a simple and unified system for general video segmentation tasks.
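Before turning to the detailed per-task results, a practical note on the attention probing used in the visualizations above: the sketch below renders per-pixel attention maps for selected memory slots, assuming the raw memory-to-pixel attention weights can be exported from the model as an [N, T, H, W] array. The tensor name and interface are hypothetical, for illustration only.

```python
import numpy as np
import matplotlib.pyplot as plt

def show_memory_attention(attn, slots, t=0):
    """Render per-pixel attention maps of selected memory slots.
    attn: [N, T, H, W] attention weights between N memory embeddings and
    all pixels (a hypothetical tensor exported from the model);
    slots: memory indices to probe; t: which frame of the clip to show."""
    fig, axes = plt.subplots(1, len(slots), figsize=(4 * len(slots), 4))
    for ax, i in zip(np.atleast_1d(axes), slots):
        ax.imshow(attn[i, t], cmap="viridis")   # brighter = stronger attention
        ax.set_title(f"memory slot {i}, frame {t}")
        ax.axis("off")
    plt.show()
```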
[VIS] Tab. 11 summarizes our results on the YouTube-VIS-2019 val set, along with several state-of-the-art methods. Our TubeFormer-DeepLab-B1 (per-pixel) performs worse than other state-of-the-art methods, including MaskProp [5], Seq Mask R-CNN [54], and the concurrent work IFC [40], since our per-pixel inference scheme generates non-overlapping predictions (i.e., only one prediction for each pixel in the final output), which is disfavored by the track AP metric. To bridge the gap, we adopt the mask-wise merging scheme (denoted as per-mask) [21,87], where each object query generates a mask proposal. The per-mask scheme significantly improves over the per-pixel scheme by more than 2 AP in the TubeFormer-DeepLab framework. Our large model TubeFormer-DeepLab-B4 with the per-mask scheme outperforms MaskProp, VisTR, and IFC, and performs comparably with the best model, Seq Mask R-CNN, which relies on an STM [66]-like structure to propagate mask proposals through the whole sequence. Notably, our model yields the best AR_1 and AR_10 (+3.9 and +3.0 AR better than the second-best Seq Mask R-CNN method, respectively), demonstrating the high segmentation quality of our predictions. Also, TubeFormer-DeepLab employs a smaller clip value (T = 5), while other state-of-the-art proposal-based approaches use a large clip value (T = 13 or 36).

Visualization

In Fig. 5, 6, and 7, we visualize how the proposed hierarchical dual-path transformer performs attention for the video panoptic/semantic/instance segmentation tasks (VPS, VSS, and VIS, respectively). We use input clips of three consecutive frames for visualization. For each sample, we select several output tubes of interest from the TubeFormer-DeepLab prediction. In column-b, we probe the attention weights between the selected tube-specific global memory embeddings and all the pixels. Across all three tasks, we observe the global memory attention is spatio-temporally clustered for individual tube regions, while respecting the different requirements among the tasks. That is, one global memory is responsible for each semantic category in VSS, but for each instance identity in VIS, while both cases appear in the VPS task. In column-c, we select four latent memory indices and visualize their attention maps. Commonly for all tasks, some latent memory learns to spatially specialize on certain areas (left vs. right side of the scene) or attends to the tube boundaries. Interestingly, we find that some latent memory focuses on relatively far-away regions (Fig. 5c, bottom right), which often require more attention. Sometimes, it shows more interest in moving object parts or small objects (e.g., moving arms and a road-block cone in Fig. 6c, bottom left and bottom right, respectively). The task-specific behavior of the latent memory can also be compared between Fig. 6c and Fig. 7c. The latent memory in VSS does not distinguish instances of the same semantic class. In contrast, the attention is instance-specific in VIS. As shown in Fig. 7c, top left, the occluded noses of two elephants are highlighted, which is expected to help the instance discrimination. Also, different latent memory slots attend to a single instance or to different subsets of the instances.

Discussion

We notice that recently there is some hype in the literature regarding the development of universal or unified segmentation models for semantic, instance, and panoptic segmentation.
We would like to emphasize that the goal of panoptic segmentation is to unify semantic and instance segmentation, and thus a well-designed panoptic segmentation model should naturally demonstrate a fair performance on semantic segmentation and instance segmentation as well. For example, Panoptic-DeepLab [20] and its Naive-Student version [12] already demonstrate that a modern panoptic segmentation model can simultaneously achieve state-of-the-art performance on semantic, instance, and panoptic segmentation. Our work follows the same direction by working on the video segmentation tasks.

Limitations

Currently, the proposed TubeFormer-DeepLab performs clip-level video segmentation with the clip value T = 2 (for VPS and VSS) or T = 5 (for VIS). Our model thus performs short-term tracking and may miss objects that have track lengths larger than the used clip value. This limitation is also reflected in the AQ (association quality) reported in Tab. 1 of the main paper (i.e., KITTI-STEP test set results). We leave the question of how to efficiently incorporate long-term tracking into TubeFormer-DeepLab for future work. In any case, our proposed TubeFormer-DeepLab presents the first attempt to tackle multiple video segmentation tasks with a unified approach. We hope our simple and effective model can serve as a solid baseline for future research.

Conclusion

We introduced TubeFormer-DeepLab, a novel architecture based on mask transformers for video segmentation. Video segmentation tasks, particularly video semantic/instance/panoptic segmentation, have been tackled by fundamentally divergent models. We proposed a new paradigm that formulates video segmentation tasks as the problem of partitioning video tubes with different predicted labels. TubeFormer-DeepLab, directly predicting class-labeled tubes, provides a general solution to multiple video segmentation tasks. We hope our approach will inspire future research on the unification of video segmentation tasks.
The Role of an Entrepreneurial Mindset in Digital Transformation-Case Study of the Estonian Business School

This chapter focuses on the entrepreneurial mindset in digital transformation and presents a short case study about leading the digital transformation in one Estonian private business school, where the ongoing digitalisation process has changed the organisation itself and also the ways in which students are taught and trained for coping and leading in the digital world. In order to better understand the context and environment, a brief introduction to the digitalisation topic and a slightly more detailed overview of digitalisation in the higher education sector is provided first.

Introduction

We can argue that among the different transformations taking place within entrepreneurial activities, there has been a major shift to digitalisation that has rapidly intensified, especially during the last decade. Different authors have conceptualised and described digitalisation in different ways, but they have all agreed that digitalisation has been and still is one of the major transformations which has changed the ways work and business are done and which affects basically everything around us. Today digitalisation and the need for digitally savvy people are present everywhere. This also applies to universities, both as organisations and as teaching institutions. Universities need to transform themselves to become more digital, and they also need to help their students to cope with and lead these digitalisation processes within their own organisations. Estonia is a country that is known for its digital development, and in very many areas digital services have already practically replaced the traditional paper-form and person-to-person interactions among the state, people and businesses, and digitalisation in all areas has even become a norm and a normative need in the society. The education sector is no different, and 'the educational revolution in Estonia aims to implement modern digital technology more efficiently and effectively in learning and teaching' (Education e-estonia 2018). However, digitalisation and digital technologies are just tools to help people and make interaction and services better and easier for them; the success of the transformation always depends on the culture and mindset, values and ethical considerations of people, especially of those who lead this change. This chapter focuses on the entrepreneurial mindset and presents a short case study about leading the digital transformation in one Estonian private business school, where the ongoing digitalisation process has changed the organisation itself and also the ways in which students are taught and trained to cope with and lead the changes in the digital world. In order to better understand the context and environment, a brief introduction to the digitalisation topic and a slightly more detailed overview of digitalisation in the higher education sector are provided first.

Importance and Impact of Digitalisation

In today's highly competitive business environment, it is vital for organisations, both public and private (Grönroos 2006), to change, as the environment and people's needs have already changed significantly and will keep changing in the future; therefore, focusing on change processes is extremely important.
One of the major transformations of today's world is digitalisation, and together with globalisation it has brought along a much faster and less predictable environment, whereas today's technology accelerates the speed at which companies make decisions and process information (see Earley 2014). When trying to create an understanding of digitalisation, we see that it is a wide topic for which multiple definitions exist. For example, Patel and McCarthy (2000) were among the first to mention the concepts of digitalisation and digital transformation; however, they did not conceptualise either of the terms. More recently, Ilmarinen and Koskela (2015) describe digitalisation as the biggest transformation of our generation and see digitalisation as a process where digital technology is used in order to benefit all parts of life, thus enabling both societies and organisations to create new opportunities to grow, improve, change and renew themselves. Westerman et al. (2014) define digitalisation as the usage of different digital technologies to change existing business models or provide new revenue and value-producing opportunities, and the authors find that replacing workers with automation processes can save significant amounts of time (Westerman and Bonnet 2015). Several authors (e.g. Kvist and Kilpiä 2006; Ilmarinen and Koskela 2015; Matt et al. 2015) see digitalisation as a transformation process which involves changing an organisation's key business operations into a digital form, affecting products and processes, but also organisational structures and management concepts. Digitalisation was made possible by rapid technological progress, and devices with increased computing power, performing more demanding tasks and enabling digital services of higher quality (Mollick 2006), have accelerated its speed. Besides the higher quality and computing power of devices, the prices of smartphones with complex technological attributes decreased 50 times from 2007 to 2014 (Ismail 2014). Furthermore, the declining cost of storing, processing, replicating and distributing digits has given organisations the ability to shift their products and services to a digital format (Grover and Kohli 2013) and ultimately implement new business strategies that can utilise the opportunities created by digitalisation. The Internet already plays an indispensable role in the everyday life of billions (Bock et al. 2015). Being connected on the web has become a societal phenomenon, and about 3 billion connected consumers and businesses (as well as governments and other organisations) search, shop, socialise, transact, and interact every day using personal computers and, increasingly, a broadening range of mobile devices. The digital economy is growing at 10 per cent a year, significantly faster than the global economy as a whole (ibid). Due to the rapidly increasing number of smartphones and tablets, billions of individuals and organisations have been able to take full advantage of this digital revolution: purchasing music, books, newspapers or any other items online, making banking transactions, communicating through personal email or texting, watching published videos or providing digital services themselves. The impact of digitalisation is seen everywhere around the world. Digital technologies have changed operations in organisations and enabled far-reaching social and political changes.
Today the digital economy is an increasingly important source of jobs; however, it is also the reason for job losses for millions globally. Rapid and continuous technology developments are transforming the skills required for most existing jobs, creating completely new types of roles and changing current job functions. Already more than 47% of people, even in remote areas, are online, and the development of blockchain, advanced robotics and the Internet of things presents a profound shift for the future (DMCC 2019). According to Snabe (2015), digitalisation provides a unique opportunity for global leaders to shape our future; however, at the same time, it also places a momentous responsibility on their shoulders to ensure these transformations will have a positive impact on business and society. Acknowledging the increased competitiveness of the business world, Day-Yang et al. (2011) state that digital transformation has become increasingly essential for organisations that seek to survive and attain competitive advantage. Furthermore, according to Mok and Leung (2012), digitalisation enhances people's economic, political and social lives, and thus it is fundamental for organisations to focus on the new trends it brings. While studying the strategies related to digital technology, Fitzgerald et al. (2014) found that most managers believe technology brings transformative change to businesses, and concluded that accomplishing digital transformation is critical for companies wishing to survive. Therefore, as complicated transformations take place, companies need to create management practices to oversee them, and, as the above-mentioned authors agree (e.g. Kvist and Kilpiä 2006; Ilmarinen and Koskela 2015; Matt et al. 2015), coordination, prioritisation and implementation of digital transformation can all be done successfully when a digitalisation strategy exists. According to Fitzgerald et al. (2014), technology opens routes to new ways of doing business, and a clear plan helps the organisation to avoid mistakes in that process. In addition, Westerman (2016) points out the new opportunities that digitalisation brings along and lists three technology-driven forces that are transforming the nature of management. These are automation, data-driven management and resource fluidity, whereby technology helps businesses to increase efficiency and productivity as well as innovation and customer satisfaction. We can further argue that digitalisation results from multiple different aspects. According to Tolboom (2016), one reason for digitalisation is changing customer behaviour and demand. Customers today expect to get service faster, and this has led organisations to offer online services that are constantly available to customers. Kvist and Kilpiä (2006) found one of the reasons for digitalisation to be companies' willingness and need to be more customer-centric, wanting to focus more on customer relationships and to make customers' lives easier. Ilmarinen and Koskela (2015) state similarly that with the possibilities digitalisation creates, companies can focus more on customer wishes and preferences. Another reason behind digitalisation is that organisations want to stop using multiple services and channels for doing business; with digital services, everything can be found in one place. Additionally, Pagani (2013) highlights the competitive advantage, added value and higher profits that can be attained with the use of a digital business strategy. Fosic et al.
(2017) acknowledge that while companies have had IT strategies for decades already, these were only there to support the business strategy, and propose that companies should no longer have separate IT and business strategies, but just one digital business strategy that applies to both the IT and the business side. Thus, by utilising a digital business strategy, organisations can be more competitive in today's challenging business world.

Opportunities and Threats of Digitalisation

Several authors and practitioners agree that digital solutions can simplify systems, provide improvements in services, facilitate trade and make business activities faster and easier. According to Matt et al. (2015), the benefits of digitalisation include increases in sales and productivity and innovations in value creation. With digitalisation, stakeholder interaction often increases as well, and organisations can spend more time on customers, clients and other stakeholders when certain processes are digitalised. This was also affirmed in Berman's (2012) study, which showed that companies wishing to gain opportunities from digitalisation should focus on reshaping customer value propositions and transforming their operations to offer more customer interaction and collaboration. Furthermore, the research indicated that by engaging with customers at every value creation point in the relationship, companies can differentiate themselves from competitors. However, there are also threats related to digitalisation, and one of these is losing customers in the process (Matzler et al. 2015), as not everyone is satisfied with the transformation of traditional services into digital ones. The switching costs related to a customer changing supplier can be divided into three categories: financial costs, procedural costs and relationship costs. These switching costs can originate from financial aspects, from time- and effort-related matters, or from old relationships ending and new ones beginning. Multiple studies (e.g. Hsu et al. 2011; Molina-Castillo et al. 2012) found that switching costs occur when a customer changes from one product to another; customers considering a switch compare the revenue and costs of switching, and decide to stay when the costs of changing would become higher than the original costs. Additionally, Burnham et al. (2003) relate switching costs to switching intentions and behaviour. Further, it is proposed that companies can avoid switching costs by strategic planning and by trying to minimise the negative effects of the change on customers. According to Bentley (2012), modern economies, different industries and governments as well as societies rely on the help of computers and the digital format of text, audio and pictures, and the modern world could no longer operate in the way it does without digitalisation. Grönroos (2006) sees one of the threats of digitalisation in the low level of knowledge that regular employees have of the technology they use. As using new technology and computers has become so easy and intuitive, most people are unaware of the science behind them. Furthermore, Bentley (2012) claims that when technology-related problems occur, ordinary employees are unable to fix them, and while people with special IT skills are required to help, this often takes time and creates costs for the organisation. Related concerns are expressed by Fosic et al. (2017), who state that IT and the Internet are not sufficient by themselves and that human capital is needed for operating these devices.
Besides the many important opportunities discussed above, digitalisation makes people more dependent on technology and thus also more vulnerable. The risk of cyber incidents increases significantly, which highlights the importance of cybersecurity. The Internet of things, big data, altering working and business environments, fundamental changes in value-added processes and business as such, and the integration of digital and physical worlds in a so-called Industry 4.0 bring along new types of risks and threats. There is the fear of interruption and disruption due to the business and human challenges brought upon us by new business models and increasing competition, often coming from non-traditional players and 'disruptive' newcomers. With market entry barriers coming down and (the impact of) digitalisation speeding up, organisations find themselves with the challenge of performing in a volatile, uncertain, complex and ambiguous environment (I-Scoop 2016), and therefore businesses have no option but to be innovative and agile.

Entrepreneurial Mindset for Digital Transformation

To better cope with the new challenges related to various changes in the environment, there is also a need for a new type of mindset, the way how and why we think about the things we do and how we interpret the world, and a new set of skills. The uncertainty around us creates a high level of risk, but also great opportunities. Innovation starts with the right mindset (Meyers 2016), and according to McGrath and MacMillan (2000), uncertainty can be used to one's benefit when a person employs and develops an entrepreneurial mindset. Furthermore, Morris and Kuratko (2002) emphasise the need for an entrepreneurial mindset especially in the current business environment and believe that for sustaining competitiveness people must unlearn traditional management principles, be creative and innovative, and have the ability to rapidly sense, act and mobilise. Thus, the entrepreneurial mindset can be understood as a person's specific state of mind which orientates towards entrepreneurial activities and outcomes (Financial Times 2019), often in the pursuit of opportunity with scarce, uncontrolled resources. For Senges (2007), people with an entrepreneurial mindset are those who passionately seek new opportunities and facilitate actions aimed at exploiting these opportunities, and according to Koe et al. (2012, 198) entrepreneurial people recognise opportunities, take risks, seize opportunities, and ultimately feel satisfaction. Such opportunities exist for business ideas and for individuals who are able to identify them and exploit the ideas through the creation of new businesses to pursue their goals (Bygrave 1997); Kuratko and Hodgetts (2004) also interpret this as a dynamic process of vision, change and creation. Digital transformation is one of the major changes in the current business environment that gives people with an entrepreneurial mindset the opportunity to enter the marketplace and provide innovative, often web- or data-based solutions, new products and services. The movement, stimulated by the fast pace of progress in the fields of mobile technology, big data, predictive analytics, cloud infrastructure, self-learning algorithms, personalisation and the growing dominance of information and communication technologies (Digital Transformation Initiative 2015), also enables new, digitally minded entrepreneurial players to start up their companies and achieve great success, often relatively fast.
However, not all people with an entrepreneurial mindset become successful entrepreneurs, but only those who are really able to launch, manage, grow and promote a new business (Humbert and Drew 2010). According to Maltsev (2016), entrepreneurs create and develop their own business using their own expertise and abilities and their own or externally borrowed resources. In doing so, the entrepreneur has to fulfil a wide variety of roles and activities in the creative and development process, from establishing a business development concept to running business processes (such as product manufacturing or customer service). While Coulter (2001) views entrepreneurship as a process in which a person or a group of people uses common efforts and measures to grow, pursue opportunities and goals, and create value through innovation and originality, thereby fulfilling their desires and needs, according to Timmons (1994) an entrepreneur can be considered a person who has the ability to create and construct a vision from virtually nothing and to make it work for his own benefit. Although becoming a digital entrepreneur seems easier than becoming a so-called traditional entrepreneur and may be a very attractive opportunity for many, it requires certain characteristics that not all people with an entrepreneurial mindset may possess. Even though each entrepreneur is unique, several common features can be highlighted. Among these, Costin (2012, 14) has listed intelligence, independence, high motivation, energy, initiative, innovation orientation, creativity, desire for success, originality, optimism, self-confidence, dedication, ambition, perseverance, activity, good leadership and leadership qualities, and the willingness and courage to take risks. However, entrepreneurs with the right entrepreneurial mindset and the required leadership skills and characteristics have a better chance of succeeding than those without, whether in digital or non-digital businesses. Moreover, entrepreneurs are increasingly confronted with different precarious situations, while also experiencing a great deal of time stress, fatigue and strong emotions. In these intensive circumstances, they are more susceptible to mistakes, both in their decision-making process and in their judgment and reasoning (Baron 1998). This, in turn, may culminate in ethically questionable or unethical behaviour (Rutherford et al. 2009). According to Shane (2003), such tensions, when entrepreneurs are more likely to exhibit unethical behaviour, are most likely to arise during the foundation or start-up phase of companies, because starting entrepreneurs do not yet have the necessary social connections and feel pressure to prove and establish themselves as successful entrepreneurs. Payne and Joyner (2006) believe that the propensity to face ethical dilemmas may also stem from the need to balance one's own values, customer needs, employee expectations, and responsibilities towards stakeholders, including shareholders. Likewise, (especially start-up) entrepreneurs can be self-centred and inclined to self-interest (Baron 1998), with a degree of self-justification due to their strong passion and high commitment to their business idea. Being a digital entrepreneur requires strong leadership, focus and discipline; moreover, the only way businesses can succeed at digital transformation is to create digital entrepreneurs, people who have the necessary skills and mindset.
Furthermore, the concepts of right principles, values, ethics and responsibility have become even more important with the fast-emerging digital transformation (see also Kooskora 2013; BBVA 2012). During times of great change it is of utmost importance to define what is right and wrong, good and bad, acceptable and not acceptable, both in theory and in practice, generally and in specific circumstances. For that, people need clear guidelines that can be helpful in dealing with ethical issues such as fairness, safety, transparency (Kooskora 2012) and the upholding of fundamental rights related to digitalisation. Moreover, especially the digital leaders who are making decisions with great impact on many around them have to consider and stand for the right values that are often at risk, and know what must be done to preserve them. With the help of digital ethics, we can ensure that human beings, not technology, remain our primary consideration during this digital age. Discussing this further, it should be pointed out that digital transformation requires new leadership roles and skills, and also digitally minded leaders with a high level of integrity. Moreover, digital leadership is much more than a job title; it is an entirely new mindset (Kaganer et al. 2013). According to Kerr (2019), the digital mindset requires open-mindedness, and today's leaders have to be aware of and understand all the capabilities that technology has to offer and put them to use. These leaders focus on a better future and constantly seek and find new ways to use technology in order to enhance employee engagement, drive customer satisfaction and unleash competitive advantage. However, the digital world is not about technology, but about people (Becerra 2017). Digital leadership is about empowering others to lead and creating self-organised teams that optimise their day-to-day operations. Leadership today is no longer hierarchical; it needs participation, involvement and contribution from everyone (Dubey 2019), and leaders need to create a compelling vision and communicate with clarity so that everyone understands what the team is trying to achieve and why. Great leaders know that people can achieve great things when they are driven by a strong purpose and find work meaningful. They understand that when people know the why, they figure out the how and can achieve remarkable results. Furthermore, when organisations create a culture of learning, failures and experiments lead to inventions and innovations; therefore, digitally minded and entrepreneurial leaders provide support, energise everyone and inspire them with an inclusive vision. Digital leaders are adaptable, able to handle pressure and constant change, and able to take decisions with agility (Dubey 2019); they understand the value of diversity, inclusion and open-mindedness and can navigate the challenges of technological disruptions. According to the World Economic Forum's Future of Jobs (2018) report, no less than 54% of all employees will require significant re- and up-skilling by the year 2022; of these, about 35% are expected to require additional training of up to six months, while 9% will require re-skilling lasting 6-12 months and 10% will require additional skills training of more than a year.
Therefore, digital leaders will need to address the skill gaps and prepare themselves and their teams to face the future by creating an environment of lifelong learning; with the adoption of new technologies and solutions, new professions, skills and industries will emerge. This is why it is important for companies to identify, develop and place future-oriented, innovative, entrepreneurial, critically thinking leaders who are able to create long-term sustainable value for all stakeholders. To conclude this brief overview, it can be said that digitalisation is the use of digital technology to provide new opportunities for people and organisations. Smith (2004) views technology as a division of knowledge that deals with the creation and use of technical means and their interrelation with life, society and the environment. According to Mäkkylä (2017), digitalisation has enabled new concepts, procedures and new agents in different fields and changed people's behaviour. With the help of the Internet, people have become more aware of their preferences, their requirements have increased and their knowledge of the available alternatives is greater. Cherif and Grant (2014) suggest that digitalisation has initiated the Internet's ability to conveniently display information, and therefore the communication between service providers and potential customers has changed and improved. Industries' services have been transferred into digital services, which has enabled newcomers into the field and forced traditional agents to renew themselves.

Case Study

Leading Digital Transformation at Estonian Business School

Digitalisation in the Higher Education Sector

Similarly to various other sectors, the role of universities in society and the economy and the ways in which education is delivered are changing and will continue to change in the next decades. Compared to other sectors, the impact of global change is even more present in higher education, and the whole nature of higher education is changing significantly (Coskun 2015; Bridgstock and Cunningham 2016) as universities need to become more digital learning institutions. Whereas the market has become global everywhere, universities are also competing globally for students, academics and funding, and it is believed that only those that stay relevant and leverage new digital capabilities will benefit in this digital age (PwC 2015; McKinsey 2015). In order to overcome challenges related to technological changes, universities have to respond to digitalisation in a quick and effective way and develop strategies that help them benefit from these changes. Therefore, many universities all over the world are developing digital strategies and investing heavily in IT systems (Jones 2016; Newman and Scurry 2015). Being digitally well-equipped to ensure effective use of modern technology is required for achieving a successful digital transformation, and the whole university, including students, staff and academics, has to be prepared to work with digital tools and techniques. Universities that efficiently follow a digital framework are equipped with the competencies to drive innovation and disruption approaches (Tapscott and Williams 2010; Khalid et al. 2018). Whereas twenty-first-century students have many expectations of universities, their experiences and expectations of future employability after university education are now more critical and require universities to change.
The digital age brings along new challenges and opportunities for university leaders and faculty, as teaching methods, ways of learning and research techniques are all changing fast. A digitally sophisticated generation expects to learn and to be taught using methods in accordance with their personal preferences, which requires implementing modern technologies, including smart mobile devices, cloud-based IT, wearable devices and advanced analytics (Kirkwood and Price 2013; McKinsey 2015). Digital technologies are considered vital elements of student education and are linked with substantial changes to the ways students learn and experience (Coskun 2015; Henderson et al. 2017). Moreover, adapting educational institutions and training providers to the digital age can be regarded as a cornerstone of any long-term strategy to foster digital skills, as formal schooling is still considered the main way people acquire and develop digital skills. A core function of academic institutions is to continually update and advance their management and learning processes, and for digital success, the right balance and connectivity among students, staff and departments are the key elements for survival. However, the role of senior management in supporting and helping to make the most of the substantial benefits linked with the digital change is essential. Khalid et al. (2018) argue that in order to meet the needs of the knowledge society and students' learning preferences, as well as the technological development of faculty members, university leaders must be aware of a growing imperative to reshape their structures and processes, pedagogic and curricular practices. Digital skills are developed through life-long learning programmes while adding new techniques and capabilities and fostering a culture of accepting modern technologies and development (Hill et al. 2015). The knowledge, skills and competences that such programmes deliver help to shape digital leadership skills and an entrepreneurial mindset. Digital literacy includes the skills, knowledge and confidence to use advanced technology, and while digitalisation has enabled various innovative teaching techniques, for instance, richer distance learning, flipped classroom and hybrid teaching models, not all universities and faculty members have welcomed these changes. Being omnipresent on social media and actively using innovative interactive teaching techniques does not appeal to all academics. Another reason lies in the technological development and required infrastructure: implementing new technologies and digital tools requires investing a lot of time and money and the support of leaders with a digital mindset. Nevertheless, e-learning is already widespread and MOOCs (Schuwer et al. 2015) have become popular among students around the world; therefore, most universities are interested in developing and creating online learning opportunities. However, some of the leading universities, including Cambridge and Oxford (Berger and Frey 2016), have found it more useful to implement blended learning models, where online learning is complemented with face-to-face interaction, helping students to develop relevant skills while tackling real-world challenges. Problem-based learning (PBL) is often used to foster critical thinking, problem-solving and interpersonal skills (Frey and Osborne 2013), the skills needed to compete in the twenty-first-century labour market, and MOOCs are used to improve the learning experience rather than wholly shifting the provision of education online.
Moreover, senior management must consider that universities that do not adopt the new digital change will not be able to fully compete in the contemporary digital era. Therefore, to implement this change within universities, it is critical to create a high level of digital awareness, develop a digital vision and determine how to gain the necessary digital capabilities and develop an entrepreneurial mindset. To avoid falling behind the competition, universities must rethink how they should operate in the evolving digital era. Digitalisation is deeply embedded in the Estonian educational sector as well. The educational digital revolution in Estonia aims to implement digital technology more efficiently and effectively in learning and teaching, and to improve the digital skills of the entire nation (e-Estonia 2019). Estonia can be proud of its developments in this sector: it is first in Europe in the OECD PISA test, 100% of schools use e-school solutions, and every 10th student studies IT every year. Digital solutions and tools are widely used in all other educational forms as well, and it is ensured that every student receives the necessary knowledge and skills to access modern digital infrastructure for future use. One example of the digital transformation in the education system is that by 2020 all study materials in Estonia will be digitised and available through an online e-schoolbag. In 2005, the Estonian state created a database named the Estonian Education Information System (EHIS) that brings together all the information related to education in Estonia (ehis.ee). The database stores details about education institutions, students, teachers and lecturers, graduation documents, study materials and curricula. The service is intended for anyone in education, whether students enrolled in general, vocational, higher or hobby programmes, or the teachers and academic staff providing that education. It is also possible to access information on the qualifications and further training completed by teachers and academics. EHIS is also part of monitoring the education system, so that the authorities can make sure it prepares people for the labour market of the future. Higher education is free in Estonia at public universities, and applying for university studies by simply transferring one's details to the desired university is the most common use of the EHIS database (EHIS 2019). The availability of numerous education e-solutions is definitely very helpful for Estonians, as most of them believe that raising smarter kids is the smartest investment a country can make and that life-long learning is a must for staying smart.

Brief Introduction to Estonian Business School

Founded in 1988, Estonian Business School (EBS) is the oldest privately owned business university in the Baltics (see ebs.ee), educating and training current and future managers in the areas of business administration, leadership and entrepreneurship, and conducting research in related fields. With more than 1500 students, EBS's goal is to provide enterprising people with academic knowledge, skills and values for their successful implementation, offering degrees at Bachelor's, Master's as well as Doctoral levels. When EBS was founded in 1988, it was the first institution in Estonia to introduce diploma business education, and since business administration did not exist in Soviet universities, there was no teaching tradition, no faculty and no textbooks: a difficult starting position.
However, the size of the country and its orientation towards the West have meant that EBS has stressed the international and innovation perspectives from the start, and the rapidly changing environment has encouraged EBS to respond and adapt at an adequate speed. Starting from scratch can also be seen as an advantage, since the university was and still is not tied down by outdated procedures and overwhelming traditions from the past, which also makes its digital transformation a logical and natural step ahead. Adapting to the Estonian context has meant, for example, that EBS uses many practitioners and higher-level managers as lecturers in its courses, revising traditional programmes to fit actual needs from the industry, and applying management theories and best business practices in the running of the institution itself as well. EBS also acknowledges and appreciates that most of its students work full-time or part-time in addition to studying, encouraging and shaping their entrepreneurial mindset. By using both English and Estonian as languages of instruction, EBS is preparing students for the Estonian market and beyond. Today more than 30% of students come from abroad, from 12 different countries, and 20% of faculty members are foreigners. In 2011, EBS became the first university to establish a subsidiary in the neighbouring country of Finland. The goal of the EBS Helsinki Branch is to provide Finnish students with the possibility to study international business administration by way of session-based learning in English in the students' home country. EBS Helsinki is located in the modern and innovative Technopolis Ruoholahti business park, benefitting from various digital solutions and tools. Along with developing a high-quality learning environment in Helsinki, EBS has significantly increased its investments in the transformation to more innovative and digital solutions in Tallinn's main campus as well, and now these tools are more widely and rapidly implemented in teaching and training activities and used daily by all students, staff and academics.

Study Methodology

For getting more information about the digital transformation at Estonian Business School and for illustrating this discussion with real-life examples, I conducted personal in-depth interviews with EBS owner and chancellor Mart Habakuk (hereafter M.H.), who, coming from the real estate industry, took over the university's management after his father Madis Habakuk's sudden death in 2016. Prof. Madis Habakuk was the founder and owner and also the long-time rector of EBS, who was actively involved in management until the day he passed away. He kept EBS constantly updated and adapted to the changes in the environment, and several big changes were made rather often; moreover, several e-solutions were available from the beginning, including WebCT, Moodle, an online study system (ois), free use of electronic databases, etc. However, his son Mart Habakuk, coming from the business sector and having much more radical views and readiness for innovation and digitalisation, started a new digital transformation process immediately after becoming the chancellor of the university. For gathering the material for this empirical case study (Yin 2012), I conducted personal unstructured in-depth expert interviews (Saunders et al. 2009) in August 2019.
My purpose in having these interviews was to have an open conversation, and I therefore indicated just the main topics and areas: a more general view on digitalisation, digitalisation in the university, the future of learning and teaching, leading the digital transformation, and the values and mindset of the digital leader. The interviews took place in an open atmosphere, and after I had explained the purpose of this study to him, the chancellor was willing and ready to openly share his views and thoughts about these topics. The interviews were conducted on the EBS Tallinn campus, in Estonian. They were recorded, wholly transcribed and translated into English; I also took notes during the interviews to keep an eye on the process and to be able to ask additional questions drawing attention to topics that needed to be covered. The recordings lasted for 59 minutes and the amount of transcribed text was 30 pages. The chancellor was chosen as the respondent with a clear purpose (see Creswell 2009): to get rich data, to know more about his views and experiences, and especially about his entrepreneurial mindset as the digital leader, since he is the person who initiated the digital transformation and makes the most important decisions related to digitalisation at EBS. The information collected from these interviews enables a better understanding of the importance of the entrepreneurial mindset in the digitalisation process taking place at EBS and of the reasons behind decisions related to digitalisation, past and present. For the analysis I used case-by-case qualitative content analysis (Frechtling and Sharp 1997), searching for meaningful patterns and creating categories, drawing relations between different topics and focusing on the values and the entrepreneurial mindset. The transcribed texts were read several times and different categories were marked; during the analysis, inductive open in vivo coding was used in order to create a detailed understanding and decode meanings.

Digitalisation

The first topic was about conceptualising digitalisation in general. It can be said that here his view is in line with the ideas of the authors discussed previously (Matt et al. 2015; Ilmarinen and Koskela 2015; Westerman et al. 2014). For M.H., digitalisation means using technology in order to do things better and more efficiently, or as he put it: 'When looking from more distant, digitalisation might seem to be the use of digital documents or some kind of new program, however with more inside look we realise that it means implementing new products and technology that often is new hard- and software, to make things better and more efficiently'. M.H. also made an interesting comparison to the innovation related to the steam engine and the new technology of that time, emphasising that everything starts with the purpose and why these new applications are needed, and he also indicated that today the tools and equipment are just more developed, saying that 'however the purpose has remained the same, to do things better and more efficiently and when this new technology includes software, then it can be also called as digitalisation'.

Digitalisation in the University

Next I wanted to know what digitalisation means for the university. In his answer, M.H. stated that digitalisation for the university is not a purpose per se, but in order to make its products and services better, it is possible to set up several hypotheses.
In his view, learning has to take place over a long time, not like one- or two-day sprints; it is important to learn several things at one time, in order to create connections between different subjects; he also highlighted the importance of learning and teaching from each other, based on one's own experiences and on what has been read in books or other forms of courses: learning about something and then sharing it with others. Similarly to Henderson et al. (2017), he also emphasised the role of experimenting and trying different solutions. The role of technology and digital tools was just seen as helping people, both students and faculty, in this process. Digitalisation of the university means a range of different trials and experiments, what might work and what not, and it is also clear that what works for one might not work for another; this depends on the student, on the subject, on the instructor and relatively little on the technology. M.H. also told more specifically about EBS's experiences and what has been done in the university during this new digital transformation process. What was really interesting to hear was that several trials and experiments are taking place at the same time, and their success is mainly determined by whether they help students and whether corporate customers will buy them for their employees. '… from the digi- and start-up world (that is also indirectly related to the digital world) it can be seen how new things are done, first there is an idea, then you can look for best practices from the world, put together the brief overviews, find people to test these with, which ones would they buy … and when the majority would buy the same you have selected, you are on the right track and can use these with students. These should be relevant and specifically meeting the students' needs'.

Future of Learning and Teaching

Learning together and sharing knowledge was emphasised several times during these interviews. The chancellor also argued from the student's perspective, saying that 'in today's high pace environment … it would be more faster and efficient doing it individually, and thus via different forms of online and on-demand courses, where you can learn the basics and which might not be so exciting, but need to be known'. He also found it possible and even necessary to have group work in the virtual world, where students do not need to be physically present, but also expressed his concerns, stating that: 'there's not yet enough evidence that it will replace meetings with others. And there are things which have been and also will stay, these are face-to-face meetings, working in groups and learning from each other'. When talking about teaching at the university, he called the lectures with 500 students edutainment, which are meant for the superstars, 'who come and do something awesome', but added, 'when you look at the learning process as a whole, when you learn some tools or skills, then these big lectures are not so optimal choices'. Helping to develop certain skills and an entrepreneurial mindset and to learn how to use new and innovative tools were topics that seemed very important to him, as he returned to them several times and considered them the main purpose and role of the university in the twenty-first century. As the same ideas are also found in Frey and Osborne's (2013) studies, the importance of digitally minded entrepreneurial people in the academic sector cannot be underestimated.
Looking at the whole learning process and helping students, there was something that M.H. considered especially relevant for the future: '… but what I believe that may emerge is the personal learning cloud and big qualitative change in online courses, that are not courses any more, but learning paths'. The importance of life-long learning and the university's role in facilitating the process was another topic that was repeated several times: '… and the new role of the university is being a place where people do these things which are more efficient done as face-to-face, where someone helps when one is stuck. Thus it's possible to ask either from the fellow student or from a faculty member.' (see also Hill et al. 2015). M.H. views faculty members as facilitators, mentors, who help the students to achieve their purposes, and who need to be present when students need help, in most cases in teams and sometimes also individually. '… it's more like a mentor-student relationship and the traditional belief, that a faculty member is the most knowledgeable person, is outdated today. A faculty member should help students to achieve their purposes and can suggest what skills are needed and in which order'. Turning their heads towards customers (as also discussed by Tolboom 2016; Ilmarinen and Koskela 2015; Edelman 2010), creating a supporting infrastructure (Matt et al. 2015), encouraging an atmosphere for recognising opportunities and taking risks (Koe et al. 2012) and developing an entrepreneurial mindset have also been considered significant during transformation processes. According to M.H., the digital transformation activities are directly related to the investments made into the infrastructure and to providing new spaces where students can work in teams (either in real life or by using new digital tools and solutions) on the assignments faculty members have given them. '… this (our digitalisation activities (M.K.)) … relates to the experiments we are making with the infrastructure right now, creating more learning spaces outside the auditoriums, there were no such places earlier and now there will be about 10% of the whole area for informal learning spaces. … It's an experiment now, and it will be interesting to see how students will adopt it and start using it. It also should change the whole image and mindset of people to study together more, also when using online learning …'. With this statement, M.H. once again gave proof that the whole digitalisation process is carried through with the purpose of increasing shared (online) learning and making things better and more efficient, especially for the students, who represent the paying customers for a private university like EBS. Interesting examples and ideas were expressed by M.H. especially about future learning opportunities and methods. Some of these solutions already exist, while others are currently being developed and constantly improved. '… Today the big companies such as Amazon and Google have their own academies, where with very reasonable price and constantly improving quality courses are offered and those who want and are able to motivate themselves, can create even groups from people with similar mindsets, and able to get the same education within the same time, at 10 times lower price. But of course universities have several arguments against it, for example the public sector is a thankful customer, who thinks that people should be taught and motivated to learn …' Here we can argue that, according to M.H.,
future learning activities need not take place at the university at all; although this may be considered true and rather probable, it also endangers the future prospects of universities as such.

Leading the Digital Transformation

As Mart Habakuk is truly a person with an entrepreneurial mindset, being the initiator and the brain behind the digital transformation process at EBS, it was interesting to learn more about his experiences in leading this change. Khalid et al. (2018) have emphasised the role of university leaders, and hearing how the process is led at our university made it possible to understand certain decisions and choices much better. Although at first M.H. considered this topic rather complicated, the answers showed that in the case of EBS, and for him personally, the vision of the leader and encouraging others to work towards that vision (e.g. Kouzes and Posner 2012) are the main leading principles in this digital transformation process. M.H.: '… basically it is telling your stories, and making sure that you can help to remove the obstacles that do not allow people to do the things they are able to do if they want … and as the things that can be done are so many, and it's not possible to do them all, not even half of them, then to filter out the ones where it's feasible to make an effort and put resources in, looking where the impact is the biggest and always measuring it … so we also like to deepen the way of thinking, shape the mindset, that we are not here to become the best university in Eastern Europe, but to help our students achieve their purposes'. Here again, his concern with helping students achieve their purposes was heard: '… and for everything we do or leave undone, we need to think whether it helps our students to achieve their purposes or not … and when not, then what can help them … and making this way of thinking prevail'. The same idea was also mentioned when talking about the main obstacles in this process, as faculty members often rely too much on what they are used to doing and may be hesitant when implementing new solutions and digital tools (see Fosic et al. 2017): '… but a big thing is whether we can get our faculty members to integrate world-class content and solutions into their own courses. So that content not produced by themselves is also ok, and should be used in order to help students to achieve their purposes. So in principle to offer solutions to overcome the skill gaps students might have…'

Values and Mindset of the Digital Leader

The final interesting and relevant topics discussed were related to the values and mindset of the leader in the digital transformation process. The answers again supported the ideas expressed by several authors who have analysed digital leaders' activities and principles (e.g. Kaganer et al. 2013; Becerra 2017; Dubey 2019; Khalid et al. 2018). The values were best expressed through M.H.'s views on how to measure success and on the principles behind decisions made in the university. Working together on a common purpose and sharing ideas and information were repeated several times, as were ideas on how best to support students and even why it is important to help others in the same field.
According to M.H.: '… values … mainly how to make people do things that are needed and make sense, and get agreements that we are going to achieve these together … our main success measurement is how many people do not leave the university after graduation, but come back for different courses and events, keeping in touch with us … this also shows that they are interested and want to learn more … and so we can offer special modules, at multiple levels … (it's not yet) so acknowledged, but our main purpose should really be to help students … and when doing things well, money will follow; it's the result … (we also have to consider that) … availability is not only the privilege of the wealthy … we can help our students to get the best on the market … and when doing something and creating something, helping also the others, sharing information and best practices, helping the others to succeed as well (is important) … as the goldsmiths are all on the same street, when everyone succeeds, then all will be successful … (and our main purpose is) … to wake up the twenty-first century persons and make them value themselves, so that the others will benefit from it as well'. All these ideas were something that I really liked to hear, and I now hope that these values (e.g. Kooskora 2012, 2013; BBVA 2012) will start playing an even bigger role in the university's activities as well. To conclude, this case study is just one example of how digital transformation is led in one Estonian private university. It highlights some of the most important aspects, shows the ideas and thoughts behind decisions made during the process, and emphasises the role of the entrepreneurial mindset. It attempts to make sense of the choices the digital leader has made, not in order to generalise to other universities in Estonia or anywhere else, but to advance theory and conceptualisation. Although all cases differ depending on the environment, the specific situations, and the concrete persons with their views and values, this case study still presents certain aspects and patterns that can be considered characteristic of twenty-first century organisations. Turning towards the customers, hearing their voice, considering the needs and expectations of different stakeholders, involving the organisation's own members in the process, leading them by a shared vision and by telling stories, creating a supportive environment and an encouraging entrepreneurial atmosphere, and empowering people and valuing their skills are just some of these. Formulating the overall purpose as helping one's customers, understanding that money follows from doing the right things well, and helping others to succeed can definitely be considered values that may help an organisation succeed in the changed environment of the twenty-first century. Moreover, when developing relevant online and blended courses, there is a need to collaborate closely with different stakeholders. Identifying the skills demanded by employers and designing course content that facilitates the development of skills aligned with industry demand require considerable input from many stakeholder groups, as well as the development of an entrepreneurial mindset. Furthermore, adapting the curriculum should go beyond the infusion of digital skills to also address digital leadership skills, the skills required of an individual to initiate and achieve digital transformation across companies and industries, and to develop a digital leadership mindset.
Concluding Remarks

The discussion about digitally minded leaders with an entrepreneurial mindset, together with the short case study about digitalisation and leading the digital transformation process, showed clearly that although the new solutions and tools gained through digitalisation are helpful, they have no value without the people. Digitalisation merely provides tools that should make people's lives better and their activities and work more effective; how successful the process is and will be depends on the people, and especially on those who are leading it. In order to compete in a much-changed environment, organisations need to succeed in merging their activities and technology. While facing some of the greatest challenges as well as the greatest opportunities of digital transformation, much depends on people with an entrepreneurial mindset and on the vision of their leaders.
Colocated MIMO Radar Waveform-Array Joint Optimization for Sparse Array

Colocated multiple-input multiple-output (MIMO) radar can transmit a group of distinct waveforms via its colocated transmit antennas, and this waveform diversity leads to several advantages over conventional phased-array radar. The performance depends highly on the degrees of freedom available, and element spacing can be regarded as another source of degrees of freedom. In this paper, we study the joint waveform and element spacing optimization problem. A joint waveform and array optimization criterion is proposed to match the transmit beampattern, the suppression range, and the angular sidelobes, under constraints on minimal element spacing and total array aperture. Meanwhile, the effect of receive beamforming on suppressing mutual correlation between returns from different spatial directions is also incorporated into the optimization criterion. The optimization problem is solved by a sequential quadratic programming algorithm. Numerical results indicate that, with more degrees of freedom from array spacings, colocated MIMO radar achieves better transmit beampattern matching performance and a lower sidelobe level than a fixed half-wavelength spaced array, but the benefits from the additional degrees of freedom of array spacing optimization have a limit.

Introduction

Space-borne radar can search for targets in a greater volume from space and thus receives continual attention from researchers in many countries. Unlike radar systems on other platforms, space-borne radar systems [1] must meet higher standards of stability, robustness, and survivability in space. At the end of the 20th century, techniques for space-borne radar developed rapidly, and smart satellites provided another solution for space-borne radar. Just like unmanned aircraft, such small satellites may fly together stably in space; they could serve as distributed antennas of a novel radar system with high stability, robustness, and survivability. The antenna array may operate in the well-developed phased-array radar mode, but multiple-input multiple-output (MIMO) radar [2], with more degrees of freedom and better performance in many respects, is a better choice. According to the distance between radar antennas, MIMO radar can be classified into two kinds, i.e., distributed MIMO radar [3] and colocated MIMO radar [4]. Both kinds of MIMO radar have several advantages over their conventional counterparts [4,5]. The former has widely separated antennas that observe different aspects of radar targets, whereas the latter has antennas colocated in space that observe only one aspect of a target. The criterion for determining whether two signals are received by two diversity channels can be found in [6]. Distributed MIMO radar is generally incoherent, i.e., the phase differences between transmit/receive antennas are either not coordinated or not exactly known, because with widely separated antennas, independent target returns and independent interference are often obtained, and the optimal processing algorithms are incoherent in most situations. Colocated MIMO radar is coherent at both the transmit and receive ends and can operate in a much more flexible mode than phased-array radar. Coherence between numerous antennas can achieve a much longer detection distance; thus, this type of radar is more suitable for space-borne radar detecting targets at long range.
Well-designed waveforms are critical to realizing the claimed advantages, and radar waveform optimization is therefore a hot topic in the MIMO radar field [7][8][9][10]. For distributed MIMO radar, waveform optimization only needs to suppress auto- and cross-correlation sidelobes of the transmit waveforms [4] and is thus less sophisticated than that for colocated MIMO radar [11]. For colocated MIMO radar, however, waveform optimization can enable complicated operating modes, e.g., steering multiple transmit beams into multiple spatial directions at once [12] (phased-array radar can also illuminate multiple directions within one transmission, but its interference performance is worse than that of colocated MIMO radar). Therefore, the MIMO radar scheme may be an interesting choice for space-borne radar. Waveform optimization for colocated MIMO radar has two main goals, i.e., matching a desirable transmit beampattern [4] and suppressing auto- and cross-correlation range sidelobes [10,11]. These two goals are often expressed in different forms. First, the two pursuits must be combined in one optimization, and a trade-off is thus required. Second, it is difficult to match a directional transmit beampattern together with range sidelobe suppression. Third, different measures of the sidelobe level exist, so nearly orthogonal waveforms designed for distributed MIMO radar are unsuitable for colocated MIMO radar, even with an omnidirectional transmit beampattern, because their sidelobe level measures are different [13]. If one ignores range sidelobes and concentrates on the transmit beampattern, the optimization problem may be convex, and a globally optimal point may then be found [12]. A major difficulty in radar waveform optimization lies in range sidelobe suppression under the constant-modulus constraint, which arises because radar transmit circuits often operate in saturation mode. The saturation mode circumvents the demand for the accurate intra-pulse power control required by amplitude-modulated waveforms. For waveform optimization, however, the constant-modulus constraint leaves range sidelobe suppression with numerous local minima on the way to the global optimum. Therefore, one has to use optimization algorithms such as the genetic algorithm [9], the simulated annealing algorithm [14,15], and the sequential quadratic programming (SQP) algorithm [16,17]. For such algorithms, the final performance relies on a carefully designed objective function, but sufficient degrees of freedom are also critical. Rich degrees of freedom from signal diversity are the source of the advantages of colocated MIMO radar and the key to its greater flexibility compared with its phased-array counterpart. In waveform optimization with range sidelobe suppression, however, the degrees of freedom are still insufficient in some situations. Smart antenna swarms in space can set the element spacing more flexibly, and thus the element spacing can be considered another kind of degree of freedom for optimization. In this paper, we consider waveform design for colocated MIMO radar with a sparse transmit array in the context of space-borne radar. The element spacings are optimized together with the transmit waveforms. Meanwhile, an attenuation factor is introduced to measure how much the receive beamforming attenuates cross-correlation sidelobes in the spatial receive channels, and it is incorporated into our waveform design criterion.
We define three groups of quantized angular frequencies: to match a transmit beampattern, to represent the spatial receive channels, and to represent target returns from various spatial directions. Unlike [12], which matches the transmit beampattern through the waveform covariance matrix, we directly minimize the difference between the desirable and the actual transmit beampattern. Meanwhile, one offline parameter is used to balance the two pursuits, and another is used to control the total transmit aperture after optimization. A sparse array can achieve a large aperture with a given number of elements, and the element positions should be optimized to avoid grating sidelobes [18]. For colocated MIMO radar, the sparsity of the transmit array can improve angular resolution without introducing grating sidelobes at the receive end. That is different from phased-array radar, whose angular resolution is determined merely by the aperture of the receive array. Meanwhile, in a sparse array the locations of elements may be optimized, which is also a source of degrees of freedom and has the potential to improve performance. In [19,20], transmit aperture optimization is addressed in the context of a low-cost lightweight antenna array with orthogonal waveforms illuminated through the antennas. In [21], a sparse circular array and its advantages are introduced. In [12], waveform covariance matrix optimization is addressed for transmit beampattern matching, but range sidelobes and array optimization are not. In [22], waveforms for colocated MIMO radar are optimized to enhance anti-jamming performance, provided that the electronic attack device operates only in saturation mode, but the element spacing is not optimized for more degrees of freedom. More performance improvement can be achieved by addressing the receive-end processing. Colocated MIMO radar needs to suppress the sidelobes of the waveforms illuminated into different spatial directions, termed angular waveforms in [23], and the receive beamforming operation [24] allows lighter weights to be placed on suppressing the cross-correlation of angular waveforms [16,25]. In essence, the cross-correlation sidelobes of angular waveforms represent how much a spatial receive channel suffers from target returns arriving from other spatial directions. At the receive end, receive beamforming for array radar serves the same function and is often more efficient at suppressing this kind of interference. Without exploiting it, one has to suppress such cross-correlation sidelobes by waveform optimization alone [11]; but the number of cross-correlation sidelobes is often greater than that of auto-correlation sidelobes, so many degrees of freedom are consumed in suppressing them. If we incorporate the effect of receive beamforming, less attention needs to be paid to cross-correlation sidelobes, and some degrees of freedom can be released for better use. An extreme case is addressed in [16], where only auto-correlation sidelobes are suppressed through waveform optimization for a receive array with a larger aperture, leaving cross-correlation sidelobes to be suppressed at the receive beamforming stage; a huge sidelobe performance improvement is achieved. Numerical results are given to show the waveform optimization results.
We find that the additional degrees of freedom result in a better sidelobe level and better transmit beampattern performance, without grating sidelobes appearing at the receive end. Aided by receive beamforming, the overall sidelobe level reaches a much lower level. Meanwhile, a numerical simulation is also performed to examine the sidelobe level improvement, indicating that as the total aperture increases, the benefits have a limit. The rest of this paper is organized as follows. In Section 2, the sidelobes and transmit beampattern of the sparse MIMO transmit array are formulated, the attenuation factor of the receive beamforming is introduced, and the waveform optimization criterion is presented. In Section 3, numerical results show how much the additional degrees of freedom from spacing optimization affect the transmit beampattern matching performance and the output sidelobe level. In Section 4, some discussion of parameter settings is given and conclusions about applications are drawn.

Waveform and Array Optimization for a Sparse MIMO Array

For simplicity, we consider a group of smart satellites flying in line at the same speed and carrying antennas in the same orientation, i.e., a colocated MIMO radar system with a linear sparse N_t-element transmit array and a linear half-wavelength spaced receive array with N_r elements. A system diagram is shown in Figure 1. We assume that the satellites can maneuver to form a given distribution in space if needed; no position error is considered in this paper. For the sparse transmit array, the distance between the ith element and the (i+1)th element is denoted by d_i, i = 1, 2, ..., N_t - 1. The transmit steering vector can then be written as

a_t(θ) = [1, exp(j 2π f_0 d_1 sin θ / c), ..., exp(j 2π f_0 (d_1 + d_2 + ... + d_{N_t-1}) sin θ / c)]^T,

where f_0 denotes the carrier frequency, j denotes the imaginary unit, exp(·) denotes the element-wise exponential function, θ is the spatial direction of interest, c is the speed of light, and [·]^T is the transpose operator. To be concise, directions are treated as a frequency term in the linear array configuration, and we define a normalized angular frequency by

f = sin θ / 2.

The element spacing between elements is normalized by the wavelength as

β_i = 2 d_i / λ,

where λ = c / f_0 denotes the wavelength. In particular, for a half-wavelength array, β_i = 1. With β_i, we can express the transmit steering vector in another form as

a_t(f) = exp(j 2π f T β),

where the N_t × (N_t - 1) matrix T, whose first row is zero and whose remaining rows form a lower-triangular pattern of ones, translates element spacings to element positions, and β = [β_1, β_2, ..., β_{N_t-1}]^T is the vector of element spacings. For a uniform linear array, β is a vector of identical entries. It is more convenient to run the optimization over the β_i, which all have the same range. To control the overall aperture of the transmit array, we define the total aperture by

D = Σ_{i=1}^{N_t-1} β_i,

for which the physical array aperture is Dλ/2. In order to make a fair comparison with the aperture of a uniform half-wavelength spaced array, for which D = N_t - 1, we define a measure of array aperture extension by

η = D / (N_t - 1).

For the half-wavelength spaced linear array, η = 1. As real antenna spacing often has a lower bound of λ/2, we set η ≥ 1, and η increases with the total aperture D.
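To make the array geometry concrete, here is a minimal numpy sketch of the quantities just defined; the function and variable names are ours, not the paper's, and the spacing values are illustrative only.

```python
import numpy as np

def transmit_steering(f, beta):
    """Transmit steering vector a_t(f) of a sparse linear array.

    f    : normalized angular frequency, f = sin(theta) / 2
    beta : spacings between adjacent elements in half-wavelength
           units (beta_i = 2 * d_i / lambda), length N_t - 1
    """
    # Element positions in half-wavelengths: the first element sits at
    # the origin, the rest follow from the cumulative spacings (this is
    # exactly the lower-triangular map T from spacings to positions).
    pos = np.concatenate(([0.0], np.cumsum(beta)))
    return np.exp(2j * np.pi * f * pos)

beta = np.full(11, 3.5)   # 12-element array with uniform spacing, eta = 3.5
D = beta.sum()            # total aperture in half-wavelength units
eta = D / len(beta)       # aperture extension vs. a half-wavelength array
```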
Angular Waveform and Transmit Beampattern

Assume that the colocated MIMO radar system under consideration has a transmit waveform matrix denoted by S = [s_1, s_2, ..., s_{N_t}] ∈ C^{N_s × N_t}, where s_i is the waveform transmitted by the element on the ith satellite, and N_s is the number of codes in each waveform. After the waveforms S are emitted from the transmit antennas into the surveillance volume, they interfere constructively or destructively to form different waveform signatures in different spatial directions, subsequently termed angular waveforms. For a spatial direction with normalized frequency f, a coherent combination of S yields the angular waveform

s(f) = S a_t(f).

The transmit beampattern is defined as the power of the angular waveforms in the different spatial directions, i.e.,

P(f) = a_t^H(f) R a_t(f),

where (·)^H denotes the conjugate transpose operator and R = S^H S denotes the transmit waveform covariance matrix of S.

Attenuation Factor of Receive Beamforming

We assume that the transmit waveforms are narrow-band. In this case, the waveform covering a target is an angular waveform, and the target returns bear the same waveform signature, to which the receive end is matched. There is often a Doppler modulation, which we do not address here. At the receive end, there are various signal processing algorithms, which differ mainly in the method of suppressing background interference [23]. In [24], several signal processing algorithms for colocated MIMO radar are proposed, whose receive beamforming components all have the form [26]

w(f) = R_r^{-1} a_r(f),

where R_r is an estimate of the interference covariance matrix and the receive steering vector is

a_r(f) = [1, exp(j 2π f), ..., exp(j 2π f (N_r - 1))]^T.

This receive steering vector corresponds to a uniform receive array with half-wavelength spacing. Adaptive interference suppression algorithms, such as the MIMO-Capon algorithm [24], involve samples received online. Since interference circumstances may differ at different range cells and online waveform optimization involving range sidelobe suppression is difficult to implement, we do not consider such adaptive algorithms at the current stage; rather, we use a simple and classical MIMO signal processing algorithm, i.e., the MIMO least squares (LS) algorithm, for which R_r = I and the receive beamforming weight is

w(f) = a_r(f).

The implementation of the MIMO LS algorithm is addressed in [23]; it is mainly composed of three operations, i.e., receive beamforming, range compression, and transmit synthesis. The latter two are realized by a concise unit called the space-time range compressor, which follows a receive beamforming filter and is in fact a matched filter for returns from the direction associated with that beamforming filter. If a spatial receive channel for a direction f_c uses this receive beamforming weight, target returns from other spatial directions are attenuated first by the beamforming filter before they pass through the space-time range compressor. Cross-correlation sidelobes measure how such returns come through the space-time range compressor, whereas the preceding beamforming filter has already attenuated them. Therefore, a good combination of the two suppression mechanisms can make better use of the degrees of freedom. It can be found that a target return from a direction f'_c is attenuated in the beamforming filter steered to f_c by the factor

ρ_r(f_c, f'_c) = |a_r^H(f_c) a_r(f'_c)| / N_r,

which is termed the attenuation factor. In particular, if f'_c = f_c, then ρ_r = 1, standing for no attenuation; otherwise, ρ_r is generally less than one, and its value indicates the degree of attenuation. If f'_c deviates far from f_c, ρ_r is generally very small. In this case, if receive beamforming can attenuate angular sidelobes efficiently, it is unnecessary to spend too many degrees of freedom on mutual correlation sidelobe suppression in waveform optimization.
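Continuing the numpy sketch above (again with our own, hypothetical names), the beampattern and the attenuation factor follow directly from these definitions; note that `np.vdot` conjugates its first argument, giving the a^H b inner products used here.

```python
def receive_steering(f, n_r):
    """Steering vector of the uniform half-wavelength receive array."""
    return np.exp(2j * np.pi * f * np.arange(n_r))

def beampattern(S, beta, f_grid):
    """Transmit beampattern P(f) = a_t(f)^H R a_t(f), with R = S^H S."""
    R = S.conj().T @ S
    return np.array([np.real(np.vdot(transmit_steering(f, beta),
                                     R @ transmit_steering(f, beta)))
                     for f in f_grid])

def attenuation_factor(fc, fc_prime, n_r):
    """LS beamforming attenuation of a return from fc_prime in the
    spatial channel steered to fc; equals 1 when fc_prime == fc."""
    num = np.vdot(receive_steering(fc, n_r), receive_steering(fc_prime, n_r))
    return abs(num) / n_r
```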
Sidelobes of Angular Waveforms

Setting aside the target's absolute amplitude, for a target return s(f'_c), the space-time range compressor intended to match the angular waveform s(f_c) outputs sidelobes given by

r(f_c, f'_c, k) = a_t^H(f_c) R_k a_t(f'_c),

where k denotes the mutual range shift and R_k = S^H J_k S denotes the shifted waveform covariance matrix. The shift matrix is defined by

J_k = [ 0_{(N_s-k)×k}  I_{N_s-k} ; 0_{k×k}  0_{k×(N_s-k)} ],

where 0 denotes an all-zero matrix with subscripts indicating its size, and I denotes the identity matrix. In particular, if f'_c = f_c, we obtain the auto-correlation sidelobes r(f_c, f_c, k). From the definition of the shift matrix, we have J_{-k} = J_k^T, and then

r(f_c, f_c, -k) = r*(f_c, f_c, k),

where (·)* denotes the conjugate operator. In waveform design, this means that we need suppress only one side of the auto-correlation range sidelobes, say those for k > 0. Meanwhile, for the cross-correlation sidelobes we have

r(f_c, f'_c, -k) = r*(f'_c, f_c, k).

With this relationship, we can reduce the number of values to be minimized as well.

Sidelobes after Range Compression

For a directional transmit beampattern with two peaks, the receive end generally deploys at least two space-time range compressors to deal with returns from those directions. Target returns from the two directions have nonzero outputs in both compressors, and the mutual interference is measured by the cross-correlation sidelobes, which should therefore be suppressed. As angular waveforms with two peaks also have conjugate-symmetric sidelobes, we can suppress only one side of both the auto- and cross-correlation sidelobes of the two angular waveforms. Such a range compressor may also receive target returns from any other spatial direction: targets or clutter patches there may have sufficient power to spoil the range compressors. In this case, we intend to suppress their outputs in the range compressors, and both sides of those range sidelobes must be suppressed. Here we focus on the sidelobe outputs after receive beamforming and space-time range compression. Including the attenuation factor, the sidelobe outputs take the form

c(f_c, f'_c, k) = ρ_r(f_c, f'_c) r(f_c, f'_c, k).

As ρ_r(f_c, f_c) = 1, receive beamforming does not alter auto-correlation sidelobes.
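A sketch of the correlation sidelobe computation, continuing the code above (names again ours): `np.eye(n_s, k=k)` places ones on the k-th superdiagonal, so a negative k yields the transposed shift, reproducing the conjugate-symmetry relations.

```python
def shifted_covariance(S, k):
    """Shifted waveform covariance matrix R_k = S^H J_k S."""
    n_s = S.shape[0]
    Jk = np.eye(n_s, k=k)   # ones on the k-th (super)diagonal
    return S.conj().T @ Jk @ S

def sidelobe(S, beta, fc, fc_prime, k):
    """Correlation sidelobe r(fc, fc', k) of a pair of angular waveforms."""
    a1 = transmit_steering(fc, beta)
    a2 = transmit_steering(fc_prime, beta)
    return np.vdot(a1, shifted_covariance(S, k) @ a2)

# Sanity check of the symmetry: sidelobe(S, beta, fc, fc, -k) should equal
# np.conj(sidelobe(S, beta, fc, fc, k)) for any k.
```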
Transmit Beampattern Matching

Transmit beampattern matching has two main approaches. One approach is to optimize a waveform covariance matrix that bears a certain beampattern; given such a covariance matrix, the transmit waveforms are then optimized to match it [12]. Here we choose the other approach, i.e., to directly minimize the mismatch between the desirable and the actual transmit beampattern. As the angular frequency is a continuous value, we first quantize it and then optimize at selected angular frequencies. Given a group of N_b selected representative angular frequencies f_b = [f_b(1), ..., f_b(N_b)], we assume that the desirable transmit beampattern responses are b_d. The actual transmit beampattern at f_b can be expressed as

b(f_b) = diag(A_t^H R A_t),

where diag(·) denotes the vector of diagonal elements, and A_t = [a_t(f_b(1)), ..., a_t(f_b(N_b))] is the matrix of transmit steering vectors. A straightforward measure of the transmit beampattern mismatch is then

E_b = max |b(f_b) - γ b_d|,

where |·| denotes the entry-wise absolute value and the maximum is taken over the entries. A parameter γ > 0 is introduced to avoid amplitude ambiguity between the expected and the actual transmit beampattern. In practice, the accuracy demanded often varies, and this method can control the mismatch flexibly by adjusting the number of elements in f_b and the relative weight given to it in contrast to the sidelobe level.

Sidelobe Level Measure

At the receive end, there are multiple space-time range compressors, each following a receive beamforming filter for a given spatial direction. We intend to suppress the sidelobe outputs in the range compressors, so we first define a group of angular frequencies for those range compressors or beamforming filters,

f_a = [f_a(1), ..., f_a(N_a)],

where N_a denotes the number of such space-time range compressors at the receive end and f_a(k) denotes the spatial direction of the kth range compressor. In practice, a peak of the transmit beampattern may require more than one such range compressor, depending on the width of the peak and the system requirements. Both auto- and cross-correlation sidelobes of the angular waveforms at f_a should be suppressed. Meanwhile, it was shown above that auto-correlation sidelobes are conjugate symmetric in the range shift dimension, so only one side of them need be suppressed. For the cross-correlation sidelobes, we assume here that the directional transmit beampattern to be matched has peaks of the same amplitude, so the conjugate-symmetry property applies as well. We therefore define a measure of the one-sided sidelobe level to suppress by

PSL_a = max_{i,j; k>0} | ρ_r(f_a(i), f_a(j)) a_t^H(f_a(i)) R_k a_t(f_a(j)) |.

Although other spatial directions receive little power allocation, returns from those directions may still be strong, and they should be suppressed as well. It should be kept in mind that the interference of interest is one-way, i.e., only the interference into the spatial receive channels matters. In order to represent target returns from all other spatial directions, we define another group of angular frequencies f_m = [f_m(1), ..., f_m(N_m)] to represent N_m such attenuated spatial directions. To avoid duplicated sidelobes, we assume that f_m and f_a have no element in common. We define another PSL measure by

PSL_m = max_{i,j; k} | ρ_r(f_a(i), f_m(j)) a_t^H(f_a(i)) R_k a_t(f_m(j)) |,

where both sides of the range shift k are included.

Joint Waveform Optimization Criterion

We have now defined two PSL measures, PSL_a and PSL_m, as well as a transmit beampattern mismatch measure. They are combined into the following waveform optimization criterion:

min_{P, β} max{ α E_b, PSL_a, PSL_m }  subject to  β_i ≥ 1, Σ_i β_i = D,

where P = -j log(S) denotes the matrix of phases of S, log(·) denotes the logarithm function, and α is a trade-off parameter between transmit beampattern matching and the sidelobe level. In practice, the spacing of transmit antennas has a minimal limit, typically a half wavelength, hence the constraint β_i ≥ 1 on all spacings. It is unnecessary to constrain the phase matrix P, whose elements have period 2π; the optimized P is simply wrapped into the domain [0, 2π]. Our method can trade off between transmit beampattern matching performance and range sidelobe level. The trade-off parameter α is set offline, according to the interests of the designer: a high α emphasizes transmit beampattern performance and may thus sacrifice the sidelobe level. In practice, we do not advise spending too many degrees of freedom on transmit beampattern matching, because even a beampattern matched accurately in theory is difficult to reproduce with a real transmit array, owing to array errors and mutual coupling; a highly accurate transmit beampattern is not always necessary. We therefore advise relaxing the beampattern matching accuracy appropriately for a better PSL; the degree of relaxation is key, and the weight should be tuned properly. Different sidelobes may also be given different weights to obtain desirable properties; the extension is straightforward, so we do not show it explicitly here. The problem is constrained, nonlinear, and NP-hard, so the global minimum is difficult to reach; numerous optimization tools exist for such problems. Here we resort to a minimax algorithm based on SQP, which has been found to be efficient and robust [17].
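As a rough, hypothetical stand-in for the paper's minimax SQP routine (not the authors' implementation), the criterion can be prototyped as a single max-type objective over the phase matrix and the spacings, continuing the earlier sketches; the bounds and the aperture constraint are set up in the configuration sketch that follows the next section.

```python
def criterion(x, n_t, n_s, n_r, f_b, b_d, f_a, f_m, alpha, gamma):
    """max(alpha * E_b, PSL_a, PSL_m) with phases and spacings packed in x."""
    P = x[:n_t * n_s].reshape(n_s, n_t)       # waveform phase matrix
    beta = x[n_t * n_s:]                      # element spacings (beta_i)
    S = np.exp(1j * P)                        # constant-modulus waveforms
    # Beampattern mismatch over the matching grid f_b.
    e_b = np.max(np.abs(beampattern(S, beta, f_b) - gamma * b_d))
    # One-sided sidelobes among the receive channels f_a.
    psl_a = max(attenuation_factor(fi, fj, n_r) *
                abs(sidelobe(S, beta, fi, fj, k))
                for fi in f_a for fj in f_a for k in range(1, n_s))
    # Two-sided sidelobes from the attenuated directions f_m.
    psl_m = max(attenuation_factor(fi, fm, n_r) *
                abs(sidelobe(S, beta, fi, fm, k))
                for fi in f_a for fm in f_m
                for k in range(-n_s + 1, n_s))
    return max(alpha * e_b, psl_a, psl_m)
```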
Optimization Configurations

The optimization concerns the sparse MIMO transmit array described above. The transmit array has N_t = 12 transmit elements, for which waveforms of N_s = 128 codes are designed according to the joint criterion. The receive array, deployed along the same direction, has N_r = 12 receive elements, all spaced by a half wavelength. As the SQP-based optimization algorithm is sensitive to the initial value, we run the optimization 20 times and select the best result as the final one for all of the following numerical experiments. Two simulations are considered: the first aims at verifying the benefits induced by optimizing the array spacing, and the second at studying the impact of the array aperture on optimization performance.
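The 20-run restart strategy can be sketched as follows, building on the earlier code; the frequency grids, desired beampattern, and γ below are assumptions for illustration, not values taken from the paper, and scipy's SLSQP stands in for the authors' minimax SQP solver.

```python
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_t, n_s, n_r, D = 12, 128, 12, 38.5          # paper's configuration
f_a = [-0.2, 0.2]                             # receive-channel directions
f_m = [f for f in np.linspace(-0.5, 0.5, 21)  # other directions (assumed
       if min(abs(f - fa) for fa in f_a) > 1e-9]   # grid, excluding f_a)
f_b = np.linspace(-0.5, 0.5, 41)              # beampattern grid (assumed)
b_d = np.where(np.abs(np.abs(f_b) - 0.2) < 0.05, 1.0, 0.0)  # two peaks
alpha, gamma = 0.01, 1.0                      # trade-off / scale (assumed)

# beta_i >= 1 (half-wavelength minimum) and a fixed total aperture D.
bounds = [(None, None)] * (n_t * n_s) + [(1.0, None)] * (n_t - 1)
constraints = {'type': 'eq', 'fun': lambda x: x[n_t * n_s:].sum() - D}
args = (n_t, n_s, n_r, f_b, b_d, f_a, f_m, alpha, gamma)

best = None
for trial in range(20):                       # restart from random phases
    x0 = np.concatenate([rng.uniform(0.0, 2 * np.pi, n_t * n_s),
                         np.full(n_t - 1, D / (n_t - 1))])
    res = minimize(criterion, x0, args=args, method='SLSQP',
                   bounds=bounds, constraints=constraints)
    if best is None or res.fun < best.fun:
        best = res                            # keep the lowest objective
```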
Benefits of Spacing Optimization

In [25], a waveform optimization algorithm without array spacing optimization is presented, which also has a trade-off parameter like α. To make a fair comparison, we set the trade-off parameter α = 0.01 for both methods. Here the average element spacing is η = 3.5, corresponding to a total aperture D = 38.5 for the 12-element array. The transmit beampattern matching performance is shown in Figure 2a, where the desirable transmit beampattern and the one designed with η = 1, corresponding to the half-wavelength, no-spacing-optimization case, are shown together. Figure 2a indicates that the advantage of spacing optimization is obvious: it gives rise to better transmit beampattern matching than the method without spacing optimization. Meanwhile, the additional degrees of freedom result in a lower sidelobe level of the transmit beampattern. Figure 2b shows the element positions after optimization, indicating that both the minimal spacing and the total aperture meet the prescribed settings. Auto-correlation sidelobes are shown in Figure 3, where the upper two panels are the results with spacing optimization and the lower two are those of the method without it, both for the two spatial directions corresponding to normalized frequencies -0.2 (left) and 0.2 (right). From Figure 3, spacing optimization yields a -24.4 dB auto-correlation PSL_a, whereas no spacing optimization achieves only -20.2 dB; the additional degrees of freedom therefore provide approximately a 4.2 dB reduction in PSL_a. The two spatial receive channels also receive returns from directions other than f_a. To evaluate this effect, sidelobe outputs of returns from f_m in the spatial receive channels are shown in Figure 4a,b for our method and the method of [25], respectively. In both figures, all sidelobe outputs are below -30 dB, much lower than the auto-correlation sidelobes. Such a significant improvement is mainly caused by the receive beamforming operation, which efficiently suppresses the cross-correlation sidelobes of the angular waveforms; the difference between the two methods is insignificant in this measure. For the 12-element receive array, the attenuation factors for the correlation pairs are shown in Table 1. The attenuation factor depends on the angular interval, i.e., a larger distance between two directions tends to yield a lower attenuation factor; attenuation factors for other correlation pairs can be computed from the attenuation factor definition. It can also be seen that the cross-correlation sidelobes are not as flat as the auto-correlation sidelobes. That is because the waveform design criterion places equal weights on all sidelobes, but auto-correlation sidelobes are more difficult to suppress through optimization than cross-correlation sidelobes; one can adjust the weights to meet specific demands. As shown in [16], if the receive aperture is sufficiently large, one can even ignore cross-correlation sidelobes entirely and focus on suppressing auto-correlation sidelobes for better performance.

Impact of Array Aperture

The total transmit array aperture D determines the range of the β_i and thereby affects the final performance. To study this quantitatively, we run the optimization process nine times for η increasing from 1 to 5 in steps of 0.5 and plot the PSL versus η in Figure 5. Increasing the spacing can indeed lead to a lower PSL. However, there is a limit: the benefit ceases to grow after η reaches a certain point, approximately η = 3.5 for our parameter settings. The transmit beampattern matching performance also varies with the sparsity degree η, as shown in Figure 6, where the different values of η are grouped into four panels for a clearer view. Spacing optimization enhances the transmit beampattern matching performance, but, like the sidelobe level, the matching performance reaches a limit and fluctuates once η passes it. In practice, different numbers of array elements may yield different turning points of η; for the case at hand it is η = 3.5, i.e., 3.5 times the half wavelength. This means that the degrees of freedom that can be extracted from spacing optimization have a limit. The final PSL is proportional to the transmit beampattern mismatch, so the total aperture has a similar effect on the PSL, which is not discussed further here.

Conclusions

Waveform design for colocated MIMO radar involves optimization over many variables, with numerous sidelobe terms to suppress, and its success depends greatly on the degrees of freedom available. In this paper, we have studied how to jointly optimize the transmit waveforms and the element spacings of a sparse colocated MIMO radar transmit array for a desirable transmit beampattern and a low output sidelobe level. We use array spacing optimization to exploit more degrees of freedom and incorporate the receive beamforming effect to make better use of the existing ones, so that both the transmit beampattern matching performance and the sidelobe level are improved, without the grating lobes typically present in a sparse array. An attenuation factor is introduced to measure how much receive beamforming suppresses the mutual correlation sidelobes of angular waveforms. The factor is incorporated into our waveform design criterion and releases degrees of freedom originally allocated to suppressing cross-correlation sidelobes of angular waveforms; better use of the available degrees of freedom naturally brings better performance. The ways of measuring transmit beampattern matching performance and of quantizing the angular frequency are also of interest for future waveform optimization work. For simplicity, we considered a classical but simple receive beamforming algorithm; the method generalizes readily to more general quiescent receive beamforming algorithms, which may have different attenuation factors. It is also revealed that the degrees of freedom extractable from spacing optimization have a limit, i.e., an average sparsity beyond about twice the wavelength may not reduce the PSL any further.
Moreover, although spacing optimization enriches the degrees of freedom and yields better optimization results, reproducing those results in real applications depends heavily on the accuracy with which the antenna locations can be controlled. In practice, array spacing optimization is applicable only to radar systems operating at a few carrier frequencies: an array spacing optimized for one carrier frequency may not suit another, and if a radar system operates at several frequencies, the performance may degrade at frequencies outside the optimization. In that case, our method should be extended to suppress sidelobes over the different frequencies simultaneously. For better array manufacturing economy, some radar systems have a narrow operating bandwidth, and for these the algorithm can work well.
Early derivation of IgM memory cells and bone marrow plasmablasts

IgM memory cells are recognized as an important component of B cell memory in mice and humans. Our studies of B cells elicited in response to ehrlichial infection identified a population of CD11c-positive IgM memory cells and an IgM bone marrow antibody-secreting cell population. The origin of these cells was unknown, although an early T-independent splenic CD11c- and T-bet-positive IgM plasmablast population precedes both, suggesting a linear relationship. A majority of the IgM memory cells detected after day 30 post-infection, also T-bet-positive, had undergone somatic hypermutation, indicating that they expressed activation-induced cytidine deaminase (AID). Therefore, to identify early AID-expressing precursor B cells, we infected an AID-regulated tamoxifen-inducible Cre-recombinase-EYFP reporter strain. Tamoxifen administration led to the labeling of both IgM memory cells and bone marrow ASCs on day 30 and later post-infection. High frequencies of labeled cells were identified on day 30 post-infection following tamoxifen administration on day 10 post-infection, although IgM memory cells were marked when tamoxifen was administered as early as day 4 post-infection. Transcription of Aicda in the early plasmablasts was not detected in the absence of CD4 T cells, but occurred independently of TLR signaling. Unlike the IgM memory cells, the bone marrow IgM ASCs were elicited independently of T cell help. Moreover, Aicda was constitutively expressed in IgM memory cells, but not in bone marrow ASCs. These studies demonstrate that two distinct long-term IgM-positive B cell populations are generated early in response to infection, but are maintained via separate mechanisms.

Introduction

Memory B cells, in addition to long-lived plasma cells, provide a major component of immunological memory [1,2]. Although it has often been assumed that B cell memory is harbored in high-affinity class-switched immunoglobulin (swIg) B cells, it has become increasingly apparent that, as for T cells, the memory B cell compartment is diverse, and several different memory subsets exist [3][4][5]. There is considerable phenotypic heterogeneity, i.e., varying surface markers and Ig expression, within populations of hapten-elicited memory cells [6], differences which may reflect different kinds of memory cell functions [7]. Moreover, several studies have revealed that unswitched murine IgM B cells harbor a significant component of humoral memory [8][9][10][11]. IgM memory cells have been characterized in studies of murine memory responses following immunization, and similar cells are found in humans [12,13]. IgM memory cells constitute a novel and important subset of long-lived memory B cells that may provide immunity to variant pathogens not recognized by classical high-affinity swIg memory B cells [14,15]. In addition to memory B cells, bone marrow plasma cells constitutively produce class-switched antibodies that mediate long-term immunity [16][17][18]. Switched plasma cells have long been considered to be the major source of long-term antibodies, although several studies have described long-term bone marrow IgM antibody-secreting cells (ASCs; [19,20]). T cell-independent (TI) antigens can induce bone marrow IgM ASCs, although this response has often been considered short-lived [21,22].
Our previous studies have indicated, however, that unswitched B cells and IgM can play an important role in long-term immunity to pathogens [20,23]. Our studies of B cells during infection have utilized a mouse model of ehrlichiosis caused by the intracellular monocytotropic bacterial pathogen Ehrlichia muris. We first identified a TI CD11c-positive splenic plasmablast population, present on and about day 10 post-infection, that is responsible for the initial production of antigen-specific IgM during infection [24]. This day 10 CD11c-positive splenic plasmablast population produces pathogen-specific polyreactive IgM, is not found in germinal centers (GCs), and is generated in the absence of CD4 T cell help [24,25]. A second population of splenic CD19- and CD11c-positive B cells is elicited within 3-4 weeks post-infection and is detected at relatively high frequencies for at least as long as one year post-infection [23]. We have demonstrated that these CD19/CD11c-positive B cells are IgM memory B cells, based on a number of definitive criteria, including their expression of the integrins CD11c and CD11b, as well as many other markers previously identified on memory B cells, such as CD73, PD-L2, CD80, and CD38 [23,26]. Moreover, the cells are largely quiescent, do not reside in GCs, have undergone limited somatic mutation, and are responsible for anamnestic memory responses following antigen challenge [23]. The latter studies indicated that the CD11c-positive IgM memory population, which was only detected in infected mice, was composed, at least in part, of antigen-specific B cells. In addition to our work, studies of B cell responses in other experimental animal models and in humans have identified what are likely related memory cells [15,27,28]. The spleen is a major reservoir of memory B cells, including IgM memory cells (both IgD-negative and -positive) in humans [29][30][31]. Human IgM memory cells are elicited in response to Streptococcus pneumoniae infection [30], malaria infection [11], and following tetanus immunization [32]. The early CD11c-positive plasmablasts and IgM memory cells that we have described also express the transcription factor T-bet. B cells that express either CD11c, T-bet, or both molecules have been identified in both humans and animals in response to immunization and infection, and in autoimmunity [33][34][35][36][37][38]. The identification of CD11c-positive T-bet+ cells in aged autoimmune patients led to their description as Age-Related B cells (ABCs; [36,37,39]), although CD11c-positive T-bet+ B cells are now known to function in many different immunological contexts. Whether CD11c and T-bet expression defines a monolithic B cell population or a number of related but functionally distinct B cell subsets is currently unresolved. Our studies have indicated that CD11c- and T-bet-positive B cells include both early TI plasmablasts and IgM memory cells [23,24]. The derivation of, and relationship between, these two subsets remained unresolved in our previous studies, however. We have also described a third, non-canonical population of IgM T-bet-positive ASCs that arises in the bone marrow of infected mice after peak infection [20]. These B cells express CD138, CD93, and CD44, but are CD11c-negative, and are responsible for the production of protective long-term IgM [20]. Thus, ehrlichial infection generates two diverse populations of long-lived IgM-positive B cells, in the spleen and bone marrow, respectively.
The phenotypic similarity between these two populations, as well as the observation that the day 10 TI CD11c-positive plasmablasts precede both the IgM memory cells and the bone marrow ASCs, suggested that the day 10 CD11c-positive B cells, or a yet unidentified population, are the precursors of one or both long-term populations. Here we demonstrate that both long-term IgM populations are derived from B cells elicited early following infection, at the time of the peak CD11c-positive plasmablast response. Moreover, because the bone marrow ASCs and IgM memory cells differ in their requirement for CD4 T cell help, we suggest that B cell fate is determined by the availability of signals from Tfh cells, which are also present in abundance in the spleen during infection. Finally, we show that although the day 10 CD11c-positive plasmablasts are generated independently of CD4 T cells, they nevertheless require CD4 T cells, likely Tfh cells, for the induction of Aicda mRNA and for AID expression. These findings add to our understanding of the generation of non-canonical CD11c- and T-bet-positive B cell memory and effector cell subsets in the context of an intracellular bacterial infection.

Ethics statement

Our study was approved by the IACUC committee at SUNY Upstate Medical University (CHU 311) and is in accordance with the guidelines established in the Guide for the Care and Use of Laboratory Animals by the National Institutes of Health.

Mice

Sex-matched C57BL/6, B6.Cg-Gt(ROSA)26Sor^tm3(CAG-EYFP)Hze/J, and MHC class II (MHCII)-deficient (B6.129S2-H2^dlAb1-Ea/J) mice were obtained from The Jackson Laboratory (Bar Harbor, ME). The AID-Cre-ER^T2 mice were generously provided by Dr. Jean-Claude Weill, INSERM, Paris, France. The UNC93b-deficient mice were provided by Dr. Ann Marshak-Rothstein, University of Massachusetts Medical School. All mice were bred and maintained continuously under microisolator conditions, on a light/dark cycle, in cages with bedding (up to five mice per cage), at Upstate Medical University, in accordance with institutional guidelines for animal welfare. All experimental mice were between six and eight weeks of age, and within normal ranges of weight, at the time of infection. Each mouse was considered to be an experimental unit. All mice were euthanized using CO2. When anesthesia was necessary, the mice were treated with isoflurane and were monitored following recovery from anesthesia.

Infections and treatments

Mice were infected i.p. with 5x10^4 copies of E. muris, as previously described [41]. E. muris causes a non-fatal infection in immunocompetent mice. None of the mice in this study showed signs of severe illness following infection, nor did any of the mice die due to infection. Mice were monitored daily. CD40L blockade was performed by administration of 200 μg of the mAb MR-1 (anti-CD40L) on days 8, 10, and 12 post-infection; an irrelevant isotype-matched antibody (clone 2A3) was used as a control. Tamoxifen was dissolved in peanut oil at a concentration of 20 mg/ml, and 0.5 ml was administered via oral gavage. To validate the effectiveness of the CD40L blockade, NP-BSA-immunized C57BL/6 mice were administered either 200 μg of 2A3 or MR-1 on days 4, 8, and 12 post-immunization.

RT-PCR

RNA was extracted from sorted CD11c+ B220+ cells using TRIzol, following the manufacturer's protocol (Life Technologies). cDNA was generated using a Tetro cDNA Synthesis Kit (Bioline).
RT-qPCR was performed using a Bio-Rad T100 thermal cycler; transcripts were normalized to β-actin (Life Technologies probe Mm00607939_s1) expression. Aicda mRNA was detected using a primer-probe set specified by Life Technologies (Mm01184115_m1). To generate a positive control reagent, murine Aicda was amplified by PCR using Phusion High-Fidelity DNA Polymerase (New England BioLabs), using the following oligonucleotide primers: CTACCTCTGCTACGTGGTGAA (forward) and GCTGAGGTTAGGGTTCCATCT (reverse). The amplicon was cloned using a TOPO TA cloning kit (Life Technologies).

TLR9 stimulation

Spleen cells were obtained from C57BL/6 and UNC93b-deficient mice after T cell depletion and B cell enrichment. The cells were labeled with CFSE and cultured with or without CpG (ODN1826). Cell stimulation was measured by monitoring dilution of CFSE after three days in culture.

Statistical analyses

Statistical analyses were performed using Prism (GraphPad) software. Both parametric and non-parametric analyses were performed, depending on sample size. Each experiment was performed at least twice. Details are provided in the figure legends.

Early splenic B cells are precursors for both splenic IgM memory B cells and IgM bone marrow ASCs

Our previous studies showed that the majority of the CD11c-positive IgM memory cells detected on day 30 post-infection expressed somatically mutated receptors, indicative of AID activity [23]. This observation suggested that IgM memory cell precursors could be identified on the basis of AID expression. Therefore, we utilized AID-Cre-ER^T2 transgenic mice, described by Dogan et al. [14], to permanently mark IgM memory cells. The strain carries an AID promoter-regulated Cre recombinase whose activity is induced by tamoxifen; it was crossed to a reporter strain that carries a Gt(ROSA)26Sor-regulated EYFP allele containing flanking loxP sites (B6.Cg-Gt(ROSA)26Sor^tm3(CAG-EYFP)Hze/J). In B cells that express AID, tamoxifen administration induces Cre recombinase activity that facilitates genetic recombination and subsequent expression of EYFP, thereby permanently marking AID-expressing B cells. We first used the strain to address whether AID-expressing cells induced during acute ehrlichial infection contributed to either long-term IgM population. For these studies, mice were infected, and tamoxifen was administered 10 days later, the peak time of the early CD11c-positive splenic IgM plasmablast response described in our previous studies [24]. When the mice were analyzed by flow cytometry on day 30 post-infection, approximately 30% of the CD11c-positive splenic IgM memory cells expressed EYFP (Fig 1a). Although AID is also expressed in GC cells, our previous study demonstrated that the IgM memory cells do not express markers characteristic of GC B cells [23]. Similarly, approximately 50% of the CD138-positive IgM-producing bone marrow B cells that we described previously [24] were also labeled following tamoxifen administration on day 10 (Fig 1b). EYFP-positive IgM memory cells and bone marrow ASCs were both identified at least as late as 98 days post-tamoxifen administration. Both the spleen and bone marrow contained cells irreversibly marked following tamoxifen administration as early as day 4 post-infection, indicating that AID expression is induced very early after infection (Fig 1c and 1d). Few, if any, EYFP-labeled B cells were detected in uninfected mice, demonstrating that the EYFP-positive B cells were infection-specific and, likely, antigen-specific.
These studies indicate that early AID-expressing B cells, possibly the early CD11c-positive T-bet+ plasmablasts, give rise to both IgM memory cells and bone marrow IgM ASCs. Not all of the IgM memory cells or plasmablasts expressed EYFP, indicating that not all of the plasmablasts expressed Aicda during the window of tamoxifen treatment. Our previous study demonstrated that not all of the IgM memory cells expressed mutated BCRs, so it is also possible that not all of the precursor cells transcribed Aicda. To identify Aicda-positive cells early during infection, infected (AID-Cre-ER^T2 x EYFP) F1 mice were administered tamoxifen on days 4 and 7 post-infection, and splenic B cells were analyzed for EYFP expression three days later, on day 10 post-infection. A higher proportion of the CD11c-positive plasmablasts expressed EYFP (approximately 13%), relative to the CD11c-negative B220+ B cells (approximately 2%; Fig 1e). However, EYFP-positive cells were detected in similar numbers within each population. Although the CD11c-negative B220+ B cells consisted largely of naive follicular B cells, it is possible that some memory precursor B cells were also present. Thus, it is possible that either population contained precursors of the long-term IgM compartments.

CD138 IgM bone marrow cells were generated independently of CD4 T cells

It was unknown whether the antigen-specific IgM bone marrow plasmablasts we identified previously [20] were generated in a TI fashion, as has been shown in related studies of bone marrow IgM+ B cells [19,42]. CD4 T cell help was not required for the generation of IgM bone marrow ASCs in our experimental model, because the ASCs were detected in both wild-type and MHC class II-deficient mice on day 30 post-infection (Fig 2a). The frequency of IgM bone marrow ASCs among total bone marrow cells was modestly lower in MHC class II-deficient mice, relative to wild-type mice; moreover, the IgM+ plasmablasts were detected at slightly higher cell numbers in the CD4 T cell-deficient mice (Fig 2a, bottom panels). However, there was no difference in the frequency of IgM bone marrow ASCs when measured as a proportion of the total population of bone marrow CD138+ ASCs within each strain. As an additional test of the requirement for CD4 T cell help in the generation of bone marrow IgM plasmablasts, mice were administered a CD40L-blocking antibody (MR-1) on days 8, 10, and 12 post-infection. In control experiments, this treatment effectively inhibited T cell-dependent responses to adjuvanted NP-BSA (Fig 2b). In contrast, mice treated with the anti-CD40L mAb exhibited frequencies of IgM bone marrow ASCs very similar to those of control mice (Fig 2c). Our data therefore identify two distinct populations of long-lived cells that are derived early following infection, likely via different mechanisms: a splenic IgM memory population that requires both CD4 T cell help and IL-21 [23], and the CD4 T cell-independent BM ASCs. These data also suggest that B cell fate may be determined in part by access to CD4 T cell help during early infection.

Aicda expression in early splenic plasmablasts requires CD4 T cells, but not TLR9 signals

Although we have demonstrated that CD4 T cells are not required for the generation of either the day 10 CD11c-positive plasmablasts [24] or the BM plasmablasts, it was possible that T cells were nevertheless required for the induction of AID expression. It is well known that CD4 T cells, in some cases in the context of innate signals, are required for the induction of AID in GC B cells [43,44].
[Fig 1 legend, beginning truncated: ... statistically significant, as determined using a Mann-Whitney test (p < 0.0001). (C and D) Tamoxifen was administered to uninfected (U), or infected (AID-Cre ER T2 X eYFP) F 1 mice, 0, 4, 7, or 10 days post-infection, and spleen (C) and bone marrow (D) were analyzed for eYFP expression on day 30 post-infection. The frequencies of eYFP+ cells detected in each of the mice are shown in the plots to the right of each dot plot. A multiple comparison ANOVA was used to compare data from uninfected tamoxifen-treated mice, relative to mice that had been treated with tamoxifen on day 0 (p = 0.99), day 4 (p = 0.61), day 7 (p = 0.0054), and day 10 (p = 0.0002) post-infection. Similar analyses of the bone marrow cells were performed by comparing tamoxifen-treated uninfected mice to mice treated on day 0 (p = 0.72), day 4 (p = 0.4039), day 7 (p = 0.016), and day 10 (p = 0.0056) post-infection. (E) Infected (AID-Cre ER T2 X eYFP) F 1 mice were administered tamoxifen on days 4 and 7 post-infection and were analyzed on day 10 post-infection for eYFP expression in CD11c-negative B220+ (shaded histogram) and CD11c-positive B220+ cells (open histogram), as shown in panels one and two. Cumulative data from the analyses are shown in plots three and four; the frequencies of EYFP+ cells within the B220-positive and the B220/CD11c double-positive populations, indicated by the gate in the second panel, are shown in the third panel (the frequencies were significantly different, as determined using an unpaired Student's t-test, p = 0.0005). The number of EYFP+ cells within each of the two populations is shown in the fourth panel; the data were not significantly different, as determined using a Mann-Whitney statistical test (p = 0.10). https://doi.org/10.1371/journal.pone.0178853.g001] To identify how AID expression is induced in early IgM plasmablasts, we monitored Aicda transcription in vivo, using B cells from (AID-cre-ER T2 x EYFP) F 1 mice. In these studies, magnetic bead-enriched, T cell-depleted, naïve B cells from (AID-cre-ER T2 x EYFP) F 1 mice were transferred to wild-type and MHC II (IA b )-deficient mice. The recipient mice were infected with E. muris, and tamoxifen was administered on days 7 and 10 post-infection. To eliminate any residual donor T cells, the MHC II-deficient recipient mice were administered an anti-CD4 (GK1.5) antibody on days 0 and 3 post-transfer. When the mice were analyzed on day 11 post-infection, EYFP expression in CD11c-positive B220+ plasmablasts was detected in wild-type recipient mice, but not MHC II-deficient mice (Fig 3a), indicating that CD4 T cells were required for Aicda expression. Naïve IA b -positive donor B cells were detected in MHC II-deficient recipient mice post-infection, indicating that naïve donor B cells were successfully transferred (Fig 3b). The low frequency and number of ehrlichia-specific donor cells was not unexpected, because the frequency of antigen-specific B cells is very low in naïve mice. Thus, even though the day 10 CD11c-positive T-bet+ plasmablasts do not require CD4 T cells for their generation [20], the plasmablasts nevertheless require T cell help to induce Aicda mRNA expression. These data also demonstrate that the day 10 CD11c-positive plasmablasts are derived at least in part from naïve splenic precursor B cells. Aicda expression in IgM memory cells required CD4 T cell help; however, it was possible that the Aicda expression was also induced by TLR signaling. E.
muris does not express ligands for TLR4, although it was possible that the bacteria signal through TLR9, which recognizes bacterial DNA. To address whether TLR signaling was involved, both wild-type and UNC93b-deficient mice (deficient for signaling via TLR3, 6, 7, 8, and 9) were infected with E. muris. CD11c-positive B220+ plasmablasts were isolated by flow cytometric cell sorting, and qPCR was used to detect Aicda expression. The UNC93b-deficient CD11c-positive B220+ plasmablasts exhibited Aicda expression that was similar to that of wild-type B cells (Fig 3c). To demonstrate that UNC93b-deficient mice were incapable of responding to TLR9, naïve B cells from wild-type and UNC93b-deficient mice were stimulated with CpG ODN1826, and their proliferation was monitored. Wild-type B cells proliferated in response to CpG, but the UNC93b-deficient B cells did not (Fig 3d). Together, the data indicate that Aicda expression likely requires some form of T cell help, but not TLR signaling. To further investigate the requirement for T cell help, we next investigated whether CD4 T fh cells underwent expansion during early E. muris infection. We observed a major expansion of CXCR5+ PD-1+ T fh cells on day 10 post-infection, relative to uninfected mice (Fig 3e). Nearly all of the T fh cells exhibited high expression of ICOS, relative to PD-1/CXCR5 double-negative CD4 T cells. Thus, although the day 10 plasmablasts are generated in the absence of CD4 T cells, this occurs in the presence of a vigorous T fh cell response, which is in part required for induction of Aicda in the B cells. Persistent Aicda expression in IgM memory cells We also addressed whether Aicda was expressed in long-term IgM memory B cells and bone marrow plasmablasts, by administering tamoxifen to infected (AID-Cre-ER T2 x EYFP) F 1 mice on or after day 30 post-infection. Spleen IgM memory cells exhibited EYFP expression 12 days following tamoxifen administration (Fig 4a; approximately 30% of the IgM memory cells were EYFP-positive). In contrast, lower percentages of EYFP+ cells were detected among the bone marrow plasmablasts (R4 in the bottom panels of Fig 4a). We also detected EYFP expression in a previously unidentified population of splenic CD19 hi CD11c-negative B cells (R3 in Fig 4a); other studies have indicated that these B cells, which were not resolved in previous studies, are phenotypically similar to the CD11c-positive IgM memory cells we have characterized (unpublished data). Although we cannot completely exclude the possibility that some memory cells had recently emigrated from GCs, where they expressed AID, it is unlikely that 30% of the cells had emigrated during the time tamoxifen was administered. These data indicate that Aicda expression can be maintained in IgM memory cells during low-level chronic ehrlichial infection, in the absence of class switching. To rule out any possibility that the (AID-Cre-ER T2 x EYFP) F 1 mice can express the Cre recombinase spontaneously, naïve mice were administered tamoxifen and EYFP expression on B cells was examined 12 days later. Naïve B cells from (AID-Cre-ER T2 x EYFP) F 1 mice exhibited low expression of EYFP, relative to B cells from infected (AID-Cre-ER T2 x EYFP) F 1 mice (Fig 4b). EYFP expression among CD19+ B cells was significantly higher in infected mice that had established the IgM memory population, compared to naïve mice.
These data demonstrate that CD11c-positive IgM memory cells can maintain transcription of Aicda, apparently indefinitely; whether AID is produced and is functional, and the impact of possible AID function on the maintenance of long-term IgM memory, is not yet known. Discussion Our findings reveal that both long-term antigen-specific IgM memory cells and bone marrow plasmablasts are derived from precursor cells present early during ehrlichial infection, well before the generation of canonical class-switched memory B cells that develop in GCs [45,46]. Whether both long-lived populations arise via a single pathway is not yet known. Given their phenotypic similarities, it is possible that both long-lived B cell populations are derived from the early splenic CD11c-T-bet-positive plasmablasts that we have described [24]. Indeed, nearly all of the ehrlichia-specific IgM detected on day 10 post-infection was secreted by the early CD11c-positive plasmablasts, not CD11c-negative B cells [24,47]. However, plasmablasts are by definition short-lived cells, so an alternative explanation is that the long-term IgM-positive memory cells are derived independently, perhaps from CD11c-negative or CD138-negative follicular B cells. Although EYFP-positive cells were found at much higher frequencies among CD11c-positive plasmablasts, EYFP+ CD11c-negative B cells were found in similar numbers as the CD11c-positive plasmablasts. Ongoing studies will help to resolve the origin(s) of the long-term IgM B cells. These findings nevertheless highlight a novel pathway for the generation of long-term IgM memory and antibody-secreting plasmablasts in the context of a TI response to infection. Our work and that of others highlights how infections can induce B cell responses that differ from those elicited by canonical non-infectious antigens, often challenging established dogma. Although the day 10 CD11c-T-bet-positive plasmablasts do not require CD4 T cell help for their generation, the signals required for the subsequent differentiation of the plasmablasts to either IgM B cell memory cells or BM ASCs are not yet known. However, the IgM memory cells require CD4 T cells (and IL-21) for their generation [23], whereas the BM plasmablasts do not, suggesting that the availability of T cell help may be a key factor in determining the development of the B cell memory population versus long-lived BM plasmablasts. In this regard, we have observed a large population of splenic T fh cells that are present at the time of the early CD11c-positive plasmablast response. Given the magnitude of the T fh cell response, the data suggest that T cell help is not limited by cell number, and it is possible that the fate of the B cells may be determined instead by their physical location. In such a model, extrafollicular B cells that fail to interact with T fh cells may exit the spleen and migrate to the bone marrow, whereas B cells that enter follicles may elicit T cell signals that drive IgM memory cell development. Both populations likely differentiate independently of GCs, which are suppressed during early ehrlichial infection [48]. Although we have not yet formally resolved the requirements for GCs in our experimental model, other studies of non-canonical memory B cells suggest IgM memory cells follow a GC-independent pathway [4,5,49]. Alternatively, affinity may be a factor in the fate decision; higher affinity B cells may elicit more T cell help, thereby promoting IgM memory B cell development, whereas cells that fail to elicit sufficient T cell help may follow a default pathway to become bone marrow plasmablasts.
Finally, it is likely that the IgM memory cells are driven by T fh cells that produce IFNɣ, as has been observed during Salmonella typhi infection [50]. Indeed, IFNɣ is likely responsible for inducing T-bet expression in the CD11c+ plasmablasts we have described previously [24,51]. Our demonstration that the bone marrow IgM plasmablasts are generated independently of T cells is consistent with other studies that have shown that TI responses can elicit long-term antibody production and/or protection [21,22,42]. Those studies demonstrated that long-term IgM production could be maintained indefinitely, via two different mechanisms. One possibility is that IgM is maintained by the continual recruitment of short-lived plasma cells [21,22]. Alternatively, it has been proposed that TI pathogens can induce long-lived plasma cells [42]. E. muris establishes a low-level chronic infection, although it is unknown whether sufficient antigen can be derived from these intracellular pathogens to maintain IgM plasmablasts via antigen stimulation in the bone marrow. Alternatively, low-level inflammation in the bone marrow may drive the production of factors that support plasmablast/plasma cell maintenance [52]. Our studies also allowed us to address the source of the signals that drive Aicda expression in early splenic CD11c-T-bet-positive plasmablasts. Although these cells are generated in the absence of CD4 T cell help, we show that CD4 T cell signals are nevertheless required to induce Aicda transcription. We propose that Aicda transcription is driven by interactions with T fh cells that are abundant in the spleen at that time, perhaps via classical CD40:CD40L interactions. Unlike what has been described in other studies [44, 53-55], Aicda transcription did not require TLR9, and it is unlikely to involve other TLRs, because the ehrlichiae lack classical TLR ligands. Other innate signals may substitute during ehrlichial infections, although the nature of these signals and/or receptors is currently unknown. The observation that Aicda expression was maintained, apparently indefinitely, in the IgM memory B cell population suggests that the same or different factors that elicit Aicda expression on day 10 post-infection are maintained in the IgM memory cells, perhaps as a consequence of low-level inflammation. The consequences of long-term AID expression in IgM memory B cells are unknown, but it has been suggested that chronic low-level AID expression in memory B cells can promote polyreactivity, self-reactivity, and clonal elimination [56]. We also resolved a previously unidentified CD19 hi CD11c-negative B cell population, on the basis that these B cells also expressed AID (i.e., they were found to be EYFP-positive following tamoxifen administration). It is likely that this CD19 hi CD11c-negative population is closely related to the IgM memory cells we have described previously, and it will undergo further analysis. Our studies also shed light on the origin and function of CD11c-T-bet-positive B cells, which are now emerging as an important B cell subset involved in both host defense and autoimmunity. Although it is unclear whether CD11c and/or T-bet define a single or multiple functionally distinct B cell subsets, our previous and current findings support the hypothesis that such B cells include IgM memory B cells [23].
Although some CD11c-T-bet-positive B cells are generated in response to TLR signaling [36,38], our data indicate that these signals are not required; other innate signals likely substitute for TLRs. T-bet activity may also be responsible for maintaining persistent Aicda expression. Our studies highlight a novel pathway for the development of both IgM memory cells and long-term bone marrow plasmablasts. We have shown that IgM production is maintained indefinitely following E. muris infection, and it is likely that both non-switched populations are important for the maintenance of humoral memory. It will be important to address whether similar mechanisms contribute to long-term immunity in humans after either infection or vaccination.
2018-04-03T02:37:18.284Z
2017-06-02T00:00:00.000
{ "year": 2017, "sha1": "f5ac7371e2708557513c47322221d4b66ab770b0", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0178853&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f5ac7371e2708557513c47322221d4b66ab770b0", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
236664339
pes2o/s2orc
v3-fos-license
Gait analysis of patients with different designs of total knee arthroplasty SUMMARY Introduction/Objective The essence of the treatment of degenerative knee joint diseases is pain relief and the restoration of the motion range and stability of the knee joints. Methods In this study, 35 patients participated after having surgery of the knee joint. The patients had a posterior-stabilized (PS) endoprosthesis in one joint, and a posterior cruciate ligament retaining (CR) endoprosthesis in the other. Kinematic data was collected using a 3D optical system for tracking fluorescent markers in time. Based on these data, the following parameters were determined: degree of flexion, mediolateral (ML) translation, lateral gap, medial gap, and the angle of change between the transtibial and transfemoral axes. Results The results show a more pronounced flexion degree with the PS prosthesis compared to the CR prosthesis. Also, the results show negligible values of the ML translation, lateral gap, and medial gap in both types of prostheses. Using the non-parametric Wilcoxon test, a significant difference in the angle change between the transtibial and transfemoral axes was confirmed, that is, in the flexion angles of the CR and PS prostheses. Conclusion This study shows that there is no great difference in the use of the PS or CR endoprosthesis designs. Better behavior and range of motion in the knee joint were established with the implantation of the PS endoprosthesis. This conclusion is confirmed by the significant difference in the degree of flexion of the knee joint and in the position of the transversal axes of the tibia and femur. INTRODUCTION The basic role of knee arthroplasty is pain relief and the restoration of knee joint motion range and stability [1]. Therapy success of implanted endoprostheses is most commonly defined by clinical and radiographic methods and by tests based on the subjective feeling of patients about their pain and everyday functioning, such as the Knee Society Score (KSS), the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC), and the EQ-5D [2-5]. Functional comparisons of different implanted endoprostheses are difficult because of many subjective factors related to patients and their various postoperative expectations [6]. Therefore, researchers face the demanding task of applying innovative objective methods for determining the level of physical activity, that is, for evaluating the success of the operation. The aim of this study is a simultaneous analysis of motion in both knees of patients in whom, due to gonarthrosis, a posterior-stabilized (that is, cruciate-substituting) endoprosthesis was implanted in one knee and a cruciate-retaining endoprosthesis in the other. Patients The examination was conducted at the Clinical Center of Kragujevac (Serbia).
The selection of patients was done based on the following criteria:
- the patient suffers from gonarthrosis, diagnosed based on anamnesis, clinical examination, and the analysis of radiographic records with the application of the Kellgren-Lawrence (KL) classification [7,8];
- based on the KL classification, the gonarthrosis belongs to the third or fourth stage of the disease;
- all affected knees had a varus deformity with a deviation of the axis of 5-15°;
- all the knee joints that were analyzed preoperatively had a flexion deformity of less than 10°;
- the PS endoprosthesis was implanted in one knee joint, and the CR endoprosthesis was implanted in the other;
- the patient does not suffer from neurological, rheumatological, or similar diseases, that is, diseases that may disrupt the walking pattern.
In the examination, there were 35 patients who suffer from gonarthrosis (mean age 68.79 ± 5.98 years, mean weight 81.5 ± 16.18 kg, and mean height 167.86 ± 8.51 cm). The patients were familiarized with the procedure of the examination, to which they voluntarily agreed. Gait analysis was done six months after the second arthroplasty. All the patients signed an agreement for participating in the study. This study was done in accordance with the standards of the institutional Committee on Ethics. Implant system In the study, two endoprosthesis designs were used. The PS endoprosthesis was implanted in one knee (NexGen Complete Knee Solution Legacy Posterior Stabilized Knee, Zimmer, Warsaw, IN, USA), and the CR endoprosthesis was implanted in the other (DePuy Synthes, SIGMA, Primary Knee System, Cruciate Retaining design, DePuy Orthopaedics, Inc, Warsaw, IN, USA). Clinical evaluation All knee arthroplasties were done with a standard medial parapatellar incision. After the operation, verticalization and early rehabilitation were performed at the clinic. All the operations were done by the same group of surgeons. Instrumentation and protocol Kinematic data was collected using a 3D OptiTrack system (Natural Point, Inc., Oregon, www.naturalpoint.com). This system consists of 6 infrared cameras (V100:R2), all with a resolution of 640 × 480 pixels and a frame rate of 100 fps. Using the afore-mentioned system, the positions of 8 fluorescent markers (10 mm diameter) were tracked in space. The markers were placed on anatomic positions of the lower extremities to allow for repeatability of the examination: in the area of the greater trochanter, on the medial and lateral femoral epicondyles, the medial and lateral tibial epicondyles, on the center of the ankle joint, and on the diaphyses of the femur and tibia (Figure 1) [9,10]. During the tracking protocol, patients moved without shoes at their own speed along a straight line (length 3 m) towards the cameras. The tracking was repeated at least twice. Kinematic data The data obtained from the ARENA software was extracted to a standard VICON .c3d recording format. Furthermore, data processing was done using the MATLAB program (The MathWorks, Inc, USA, www.mathworks.com), that is, an evaluation of the following parameters was made: flexion angle, ML translation, medial gap, lateral gap, and the change between the transfemoral and transtibial axes (a sketch of one common flexion-angle computation is given below). Depending on the implant, the processed data was divided into the PS or CR group. The kinematic analysis was based on the principles of three-dimensional rigid-body motion.
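The paper derives the joint parameters from the marker trajectories but does not give the formulas. A minimal sketch, assuming one common approach: the flexion angle as the angle between the femoral axis (hip to knee center) and the tibial axis (knee center to ankle), with the knee center approximated as the midpoint of the epicondyle markers. All coordinates below are invented for illustration.

```python
import numpy as np

# Hypothetical sketch: knee flexion angle from 3D marker positions.
# One common approach is to take the angle between the femoral segment
# (hip-to-knee) and the tibial segment (knee-to-ankle).

def flexion_angle(hip, knee, ankle):
    """Angle (degrees) between femur and tibia segments; 0 = full extension."""
    femur = np.asarray(hip) - np.asarray(knee)
    tibia = np.asarray(ankle) - np.asarray(knee)
    cosang = np.dot(femur, tibia) / (np.linalg.norm(femur) * np.linalg.norm(tibia))
    # 180 deg between the segments corresponds to a fully extended knee
    return 180.0 - np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# knee center approximated as the midpoint of the medial/lateral epicondyle markers
med_epi, lat_epi = np.array([0.05, 0.45, 0.0]), np.array([0.14, 0.45, 0.0])
knee = (med_epi + lat_epi) / 2
hip = np.array([0.10, 0.90, 0.02])    # greater trochanter marker
ankle = np.array([0.09, 0.02, 0.06])  # ankle joint center marker
print(f"flexion: {flexion_angle(hip, knee, ankle):.1f} deg")
```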
The mean value and standard deviation were calculated for every observed parameter, while comparisons were made using the non-parametric Wilcoxon test. RESULTS The results for the examined prostheses (PS and CR) are shown using the mean values of change in the observed parameters for the stance phase and the swing phase (flexion angle, ML translation, medial gap, lateral gap, change between transfemoral and transtibial axes), listed in Table 1, and using the movement curves shown in Figure 2. The values of the observed parameters were approximately equal. The flexion angle (Figure 2a, b) shows that a patient's leg is slightly bent in the stance phase, and that it stays in that position until the swing phase starts. Also, Figure 3b shows that the flexion angle in the stance phase is more expressed in the PS endoprosthesis design (Table 1). There is relatively little ML translation (Figure 2c, d, Table 1) in both endoprosthesis designs. However, in the first 30% of the gait cycle for both types of prostheses, it can be noticed that there is a slight medial motion, that with the beginning of the swing phase the lateral motion starts, and that at the end of this phase the medial motion starts again. The lateral gap (Figure 2e, f, Table 1) is between -1 and 1 mm. These movements in the knee joint occur in the stance phase and are completely eliminated when the swing phase starts. With the CR endoprosthesis design, a gap increase is noticeable in the first 10% of the gait cycle, while in the PS design, the gap increase occurs in the first 30%. The medial gap (Figure 2g, h, Table 1) is almost constant. Its changes are very small, and their values represent hundredths of a millimeter. The angle that the transtibial and transfemoral axes form (Figure 2i, j, Table 1) remains constant during the entire gait cycle in both endoprosthesis designs. The non-parametric Wilcoxon test (Table 2) determined that there is no statistically significant difference in the ML translation, or in the lateral and medial gaps, between the PS and CR prostheses. However, a significant difference was determined in the angle change between the transtibial and transfemoral axes, that is, in the flexion angles of the CR and PS prostheses (a sketch of this paired comparison is given below). This change is not accidental: it occurs under the influence of systematic or experimental factors, with statistical significance at p = 0.01, i.e., a probability of error of p < 0.01 and a certainty greater than 99%. DISCUSSION The analysis of the success of knee arthroplasty using various endoprosthesis designs has been a topic of many examinations that are usually based on the use of subjective tests and the matching of patients with different types of endoprostheses [2, 11-16]. In our examination, an objective index of the gait pattern was used after knee arthroplasty in the same patient, with an endoprosthesis sacrificing the posterior cruciate ligament implanted in one knee joint and an endoprosthesis preserving the posterior cruciate ligament in the other. Over the past years, the design and technology of implanted endoprostheses have been significantly improved, and many producers have placed various implant designs on the market. The choice of implant, in the majority of cases, depends on a surgeon's personal experience. Apart from the surgeon's experience, the implantation of the CR or PS endoprosthesis design depends on the pathoanatomic change of the knee joint and on ligament stability [11, 16, 17].
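A minimal sketch of the paired non-parametric comparison reported above: each patient carries one PS and one CR prosthesis, so the per-patient parameter values form matched pairs and the Wilcoxon signed-rank test applies. The data below are simulated, and scipy's implementation stands in for the test as run in the authors' software.

```python
import numpy as np
from scipy.stats import wilcoxon

# Simulated paired data: stance-phase flexion (deg) for 35 patients,
# one PS knee and one CR knee each. Values are invented for illustration.
rng = np.random.default_rng(0)
flexion_ps = rng.normal(18.0, 3.0, size=35)
flexion_cr = flexion_ps - rng.normal(2.0, 1.0, 35)  # CR slightly lower

stat, p = wilcoxon(flexion_ps, flexion_cr)
print(f"Wilcoxon signed-rank: W = {stat:.1f}, p = {p:.4f}")
if p < 0.01:
    print("significant difference between PS and CR at p < 0.01")
```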
Currently, there are disagreements regarding the sacrifice or retention of the posterior cruciate ligament in total knee prosthesis implantation. Marczak et al. [12] suggest that the proprioception of patients shows better results with the PS endoprosthesis compared to the CR implant. Additionally, there are similar claims by Vanluawe et al. [13], who suggest that the implantation of the PS endoprosthesis design shows slight flexion instability, slight clinical and radiologic laxity, greater freedom of movement, as well as slight complications after the implantation, compared to the CR implant. A complete knee arthroplasty with retention of the posterior cruciate ligament (CR) has its own advantages compared to implantation of the total knee endoprosthesis with sacrifice of the posterior cruciate ligament (PS). The advantages are, firstly, that it is based on a natural rollback of the femoral condyle during extension in the knee, as well as a low osteotomy of the distal femur [11]. The disadvantage of implantation of such an endoprosthesis design is poor balance of soft tissue, which can lead to loosening of the implant. There are certain indications when it is necessary to place an implant with posterior stabilization that replaces the posterior cruciate ligament (PCL) function, such as a lack or insufficiency of the PCL, contraction of the posterior capsule that demands release, as well as noticeable flexion contractures of the knee. Considering the divided opinions among examiners and clinicians, and in order to obtain objective results about the behavior of the knee joint after implantation of the CR and PS endoprostheses, kinematic data were collected using the 3D OptiTrack system. A similar methodology of 3D gait analysis was used by Bytyqi et al. [14] and Prodanović et al. [15], who analyzed deformity of the gait pattern of the knee joint with degenerative change using the aforementioned methodology. The methodology has proved to be an excellent mode for diagnosing lesions of the anterior and posterior cruciate ligament, as shown by Matić et al. [18] and La Prade et al. [19] in their examinations. The method of establishing contact between implant elements has a direct influence on the functionality and durability of implants. As already mentioned, there are disputes about the placement of PS and CR implants. However, various studies have shown that there is no substantial difference in their use [20,21]. Koga [22] belongs to the group of examiners who think that better reduction in knee joint rotation is achieved through the use of the CR implant type because of the increased tension in the PCL. Global analyses of the range of knee joint flexion have shown that resection of the PCL increases the flexion angle by 2% [20]. Our results in Table 1 show that an increase in the degree of flexion occurs in the stance phase, while in the swing phase, the degree of flexion remains equal with the CR and PS prostheses. Similar results were obtained by Murakami et al. [23], who analyzed the walking pattern after bilateral arthroplasty of the knee joint using a treadmill with radiographic supervision and a flat panel detector. Victor et al. [24] and Broberg et al. [25] examined the comparison of motion range in both endoprosthesis designs and showed that there is a statistically greater range of motion after knee endoprosthesis implantation with sacrifice of the posterior cruciate ligament.
With the use of computational models and the simulation of real conditions, conclusions about the potential behavior of implants can be drawn. With this model, Smith et al. [26] showed the potential behavior of the PS endoprosthesis. Based on their results, it was shown that the use of this type of implant does not influence contact stresses; however, it does affect the ML distribution of the stress. An examination of CR prosthesis behavior in vivo was conducted by Li et al. [27]. Their results showed that, compared to the osteoarthritis (OA) joint, there is an increase in ML translation. With the increase of this translation, a change in the knee mechanics occurs, e.g., there is a change in the distribution of force and stress in the knee. Stress distribution is closely related to the realized contact [22], as our results also showed. The ML translation is more noticeable with the use of the PS prosthesis than with the CR implant. As the movement is more frequent in the gait cycle, contact is achieved over a greater surface, which influences the increased stress distribution in the ML direction. The results show that there is a greater range of motion with the use of the PS implant. Similar results were obtained by Jiang et al. [28]. They think that these results can be connected to the removal of the PCL and a better balancing of the soft tissue. In the gait analysis, we performed an examination of the medial and lateral gaps. An increased gap occurs on the lateral side with the use of both types of endoprostheses, while the medial gap is non-existent. These gaps are noticeable in the stance phase (extension and lateral gap). Another study, which analyzed the influence of PCL resection after knee arthroplasty, found that PCL resection does not influence the gap size [29]. In total arthroplasty of the knee joint, it is recommended that the transtibial and transfemoral axes be parallel [30], as shown by the results in Figure 4e. There is a slight deviation for both endoprosthesis designs in the first 40% of the gait cycle due to the lateral gap occurring at the same moment. This study has several shortcomings that must be considered. The main weakness of our research is that the surgeries were done in two separate procedures; therefore, the period from operation to movement recording is not identical for both knees. Secondly, the use of optical systems for gait analysis is not as precise a procedure as dynamic radiographic examination; however, compared to the afore-mentioned procedure, it is superior due to the possibility of 3D analysis and the ultimate safety of the patient (i.e., the lack of X-rays). CONCLUSION In this study, we examined the kinematic behavior of the knee joint with the application of objective methods after knee joint arthroplasty, where the same patient was implanted with a PS endoprosthesis design in one knee and a CR design in the other. The purpose of this surgical intervention is to restore the original knee joint kinematics and eliminate patients' discomfort. Even though there is no substantial difference between these two designs, better behavior and range of motion in the knee joint were achieved with the implantation of the PS endoprosthesis. This was confirmed by the significant difference (shown using the Wilcoxon test) in the degree of flexion in the knee joint, and in the position of the transversal axes of the tibia and femur.
2021-08-03T00:06:15.920Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "302a966abd47822b32609f458bd5afaf23e14cf0", "oa_license": "CCBYNC", "oa_url": "http://www.doiserbia.nb.rs/ft.aspx?id=0370-81792100046P", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "bce240d18517b5024b27e11b0d2aa7811cc2ee88", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
222260304
pes2o/s2orc
v3-fos-license
Tuberculosis Knowledge Levels Of Secondary School Students Objective: A reduction in tuberculosis (TB) cases is possible through knowledge and control of the risk factors. In this study, the aim was to evaluate the knowledge levels of secondary school students about TB. Method: The study is a descriptive study conducted in 2018-2019. The sample of the study consisted of 116 twelfth-grade students of the school who volunteered to participate. The data collection tool consists of 19 questions: the Personal Information Form and the TB Knowledge Level Questionnaire (TBKLQ). In the TBKLQ, every correct answer earns one point. The score that can be obtained varies between 0 and 11. Results: The mean age was 17.17 ± 0.37 years. 1.7% of the study group had had tuberculosis. The incidence of tuberculosis in their families was 2.6%. 86.2% of the participants knew the correct cause of the disease, 51.7% of them were aware of the symptoms, and 64.7% of them understood the correct transmission path. 37.1% of the study group knew the duration of treatment correctly, and 20.7% of them knew the infectious period correctly; 93.1% of the study group knew that TB was a notifiable disease, and 76.7% knew that registration of TB should be undertaken at the Tuberculosis Dispensary. 50.9% of the study group were aware that TB could be seen in the lungs and extrapulmonary organs, and 37.1% of them knew correctly when the TB vaccine should be administered. The mean TB knowledge score of the study group was 5.85 ± 2.11. 59.5% of the participants reported that they had received training on TB. A significant difference was found between the TB knowledge scores of those who had previously received TB education and those who had not. Conclusion: The knowledge level of individuals with high school education about TB is at a medium level. One out of every two participants had received training on TB. In order to increase the effectiveness of the fight against tuberculosis, to prevent the spread of the disease, and to reach the goals of the End TB Strategy, the lack of information should be overcome. INTRODUCTION In recent years, despite increasing efforts in the struggle to end tuberculosis (TB), major gaps, in resource-constrained environments in particular and in environments with a high disease burden, render these efforts largely inadequate. 1.6 million people lost their lives in 2017 to TB, which continues to be one of the top 10 causes of death in the world 1,2 . 95% of deaths occur in low- and middle-income countries. In Turkey, the total number of patients diagnosed with TB in 2017 was 12,046 3 . This study, in line with the World Health Organization Global Tuberculosis Report published in 2016, looks to contribute to the 'End TB' strategies and to raise awareness in society of the importance of early diagnosis of TB. Studies in the literature have reported a positive relationship between TB knowledge, care-seeking, and treatment adherence [1-3]. In two different studies conducted in sub-Saharan countries, 66.3% to 99.7% of respondents were reported to have misconceptions about the etiology of TB, 27.6% to 90% likewise misunderstood the symptoms of TB, 0.1% to 48.6% were mistaken with respect to its infectiousness, while 33.4% to 92.9% were incorrect as to methods of prevention 4,5 . In another study, it was noted that healthy lifestyle behaviors have an important place among the determinants of health. It can be said that adopting the right health behaviors at an early age will help prevent the transmission of infectious diseases such as tuberculosis.
The aim of this study is to evaluate the level of knowledge of secondary school students about TB. METHOD The study is a descriptive study conducted in 2018-2019. The sample of the study consisted of 116 twelfth-grade students of the Eskişehir Mustafa Kemal Atatürk Vocational and Technical Anatolian High School who volunteered to participate in the 2018-2019 academic year. Students were asked to complete a structured questionnaire. The first section of the questionnaire prompted demographic information (age, gender, family professions). The second section contained informative questions related to TB, its control, and prevention methods. The data collection tool was developed in parallel with the drafting of the study. The Personal Information Form (8 questions) and the Tuberculosis Knowledge Level Questionnaire (TBKLQ) (11 questions) consisted of 19 questions in total. The TBKLQ included the following questions: What is the causative agent of tuberculosis? What is one of the symptoms of tuberculosis? What is a definitive method of diagnosis? What is the route of transmission of tuberculosis? How long is the treatment period? For how long is the patient infectious following treatment? Which methods confer protection from TB? Is tuberculosis a compulsory notification disease? Which organ(s) does tuberculosis affect? When do you think the tuberculosis vaccine should first be given? Which institution is responsible for the records / follow-up of tuberculosis patients? In the questionnaire, a score of 1 point is awarded for each correct answer. A possible score of between 0 and 11 can be obtained from the questionnaire. Ethics: Ethical approval for the study was obtained from the Harran University, Medical Faculty, Department of Drug and Non-Medical Device Research Ethics Committee, number 18/12/35. The participants were informed about the study and written and verbal consent was obtained in accordance with the principles of the Helsinki Declaration. Data Analysis For data analysis, the SPSS 21.0 statistical package program (SPSS Inc., Chicago, Illinois, USA) was used. The Kolmogorov-Smirnov test was used to determine whether the distribution of continuous variables was appropriate for the normal distribution (a small sketch of the scoring and this check is given below). Variables were shown as mean ± standard deviation. p < 0.05 was accepted as statistically significant. RESULTS The mean age of the study group was 17.17 ± 0.37 years. 65.5% of our study group consisted of male students, and 34.5% were female. It was found that there was a difference between TB knowledge levels according to gender (p < 0.05). 67.2% of the mothers of the study group had either a primary or a secondary education, and 54.2% of their fathers had either a high school or a university-level education. 63.8% of the study group reported that their families had a moderate income. It was found that 1.7% of the students in the study group had had tuberculosis and 2.6% had a history of tuberculosis in their families. 59.5% of the study group had previously obtained information about TB. It was found that the TBKLQ scores of those who had information about TB before were significantly higher than those of students who did not (Table 1). Of the students in the study group, 86.2% were correct with respect to the causative agent, 51.7% in relation to the symptoms, and 64.7% with regard to the transmission path.
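A minimal sketch of the scoring rule and normality check described in the Methods above: one point per correct TBKLQ answer (11 items, total score 0 to 11), with the Kolmogorov-Smirnov test applied to the score distribution. The answer key, responses, and simulated scores below are invented for illustration.

```python
import numpy as np
from scipy.stats import kstest

# Hypothetical answer key for the 11 TBKLQ items; one point per correct answer.
answer_key = ["a", "c", "b", "a", "d", "b", "c", "a", "c", "b", "d"]

def tbklq_score(responses):
    return sum(given == correct for given, correct in zip(responses, answer_key))

print(tbklq_score(["a", "c", "b", "b", "d", "b", "c", "a", "c", "a", "d"]))  # -> 9

# Stand-in for the 116 students' total scores.
rng = np.random.default_rng(1)
scores = rng.binomial(n=11, p=0.53, size=116)

# KS test against a normal distribution with the sample mean and SD.
mean, sd = scores.mean(), scores.std(ddof=1)
stat, p = kstest(scores, "norm", args=(mean, sd))
print(f"mean score {mean:.2f} +/- {sd:.2f}, KS p = {p:.3f}")
```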
In the study group, 57.8% of the students knew that detecting bacilli in the sputum was a definitive diagnostic method, 37.1% knew the correct treatment period, and 20.7% knew the correct infectious period. 93.1% of the students in the study group knew that TB was a notifiable disease. 56.1% of the study group knew that TB could be seen in both the lungs and extrapulmonary organs, while 37.1% knew the time for administration of the TB vaccine correctly. The proportion of students who knew that the organization holding TB records was the Tuberculosis Dispensary was 76.7% (Table 2). Although not stated in the table, the mean TB knowledge score of the study group was 5.85 ± 2.11. DISCUSSION In 2017, 1.6 million people lost their lives due to TB, which continues to be one of the top 10 causes of death in the world. It is known that 95% of deaths are in low- and middle-income countries 3 . This information supports the fact that TB is one of the most important and preventable diseases of developed and developing countries. In society, the way to gain knowledge or awareness of diseases is to either witness someone who had the disease or to have the disease yourself. However, this is not a desirable situation because of the consequences and the burden of the disease. In our study, the mean TB knowledge score was 5.85 ± 2.11, and the TBKLQ scores of those who had prior information about TB were statistically significantly higher than those of participants who did not. This result emphasizes the importance of obtaining information about the disease beforehand. Knowing a disease's specific causative agent is one of the conditions for raising awareness about that disease 6 . In our study, 86.2% of the participants were found to know the causative agent of TB correctly. The frequency of those who knew the correct agent of the disease showed a significant difference (Table 2). In another study, this rate was reported to be 37.7% 7 . In a cross-sectional study conducted with 1200 medical students in China, the frequency of those who knew that the Mycobacterium avium complex was the most common factor in the etiology of TB was 6.4% 8 . The reason our results differ from studies in the literature may be that our study group consisted of students who were in high school and who were senior students. Although there are many studies investigating the level of knowledge in different sample groups in the literature, there are only two studies that question the causative agent of tuberculosis. The most common symptom of pulmonary TB is a persistent cough, usually but not always with mucus, which is sometimes bloody (hemoptysis) 9 . In our study, 51.7% of the participants, i.e., one in two, reported that cough was a symptom of the disease. It was determined that the frequency of those who knew the TB symptoms correctly showed a significant difference (Table 2). A definitive diagnosis of pulmonary tuberculosis is made by showing the presence of tuberculosis bacilli in the sputum and/or by growing the bacilli in cultures, if possible. Bronchial lavage is applied to those who cannot produce sputum, or an examination is conducted of the gastric juices in those who are fasting 13 . In our study, it was found that one in every two high school students knew the definitive method of diagnosis (57.8%). It was observed that the frequency of those who knew that the diagnosis is made by the detection of bacilli in sputum was significantly higher (Table 2).
In a study conducted of medical faculty students, the figure was 70.9% for those undertaking related studies, and 24.3% for those who were not. In the literature, this rate varies between 75.3% and 83.6% in different studies conducted with physicians 14,15 . The standard treatment period for tuberculosis is 6 (six) months 16 . In our study, 37.1% of participants stated that the duration of treatment was six months and 44% stated that it was 12 (twelve) months. It was observed that the frequency of those who knew that the treatment period was 12 months made a significant difference compared to the other groups (Table 2). In a study by Demir et al. (2016), the rate of those who knew the correct treatment period was 58% among those who were untrained, and 83.7% among those who had training 10 . In a study by Kara et al. (2015), the rate was 86.4% 14 . Our results were found to be lower than those of other studies in the literature. The reason why the result we have obtained differs from the literature may be the low number of TB patients among the participants' family members. One of the most commonly misunderstood aspects of tuberculosis is the period of infectiousness of the disease. In our study, 20.7% of the participants knew the correct infectious period. The majority of the study group reported that the infectiousness ranged from six months to twelve months. It was observed that those who knew that the infectious period in TB is fifteen days made a significant difference (Table 2). In the study of Ayık et al. (2013), 38.6% of TB cases and 23.3% of non-TB cases stated that TB was an infectious disease 12 . The results obtained in the literature and in our study may be an indicator of the problem of social isolation experienced by the people who have the disease. The notification of tuberculosis disease is important in terms of the control of infectious diseases, decreasing the number of contacts, and increasing treatment success. 93.1% of the study group stated that tuberculosis is a notifiable disease. It was observed that the frequency of those who knew that reporting TB was mandatory showed a significant difference (Table 2). In a study conducted by Enginyurt et al. (2016) with health workers in a research hospital, it was reported that 82.5% knew that tuberculosis was a notifiable disease 17 . This result supports the fact that people have sufficient information to provide the necessary control. Tuberculosis may involve many organs, especially the lungs 9,11 . In our study, one out of two participants (56.1%) stated that tuberculosis can be seen in both the lungs and extrapulmonary organs. The frequency of those who knew that TB showed involvement in the lungs and other organs was significantly higher (Table 2). In a study by Kara et al. (2015) of pediatric residents, the figure was 96.1% 14 . In the study of Ayık et al. (2013), 72.9% stated that TB showed pulmonary involvement 12 . Vaccination, which is the most important method of prevention, can prevent the occurrence of infectious diseases, as well as reduce the burden of disease. It is recommended that the BCG vaccine be given to infants 2 months after birth 3 . 37.1% of the respondents gave the correct answer to the question about when the first BCG vaccine for TB should be given; there was no difference between the groups. In the study of Demir et al. (2016), only 59.6% of the medical faculty students who had studied the subject knew the correct timing for the vaccination 10 . In the study of Ayık et al.
(2013), 13.6% of TB cases and 16.6% of non-TB cases correctly identified the timing of the BCG vaccination 12 . Although tuberculosis patients first learn of their disease in a hospital, tuberculosis dispensaries are the place where the patient is monitored and directly supervised treatment is carried out effectively 11 . In our study, 76.7% of participants correctly recognized the institution that kept / monitored the records of TB patients. There was no difference between the groups. The studies in the literature were mostly carried out on the symptoms and methods of prevention of tuberculosis, and no questions were posed in relation to tuberculosis dispensaries. This has limited the discussion of our study. Our study shows that seven out of ten people have knowledge about TB dispensaries. High public awareness of TB dispensaries is important in terms of continuity of treatment and the reduction of contamination. CONCLUSION The results of this study indicate that individuals in receipt of a high school education lack sufficient knowledge about the diagnosis, treatment, and follow-up of TB. In order to increase the effectiveness of the fight against tuberculosis, prevent the spread of the disease, and achieve the objectives of the End TB Strategy, the lack of information evident in these groups in all areas of society should be addressed. The study abstract, "Evaluation of Tuberculosis Knowledge Levels of Secondary School Students," was presented at the 29th National Tuberculosis and Chest Diseases Congress of Turkey's National Tuberculosis Association Federation, held 17-19 January 2019 at the Porto Bello Hotel, Antalya, Turkey. Ethics Committee Approval: Ethical approval for the study was obtained from the Harran University, Medical Faculty, Department of Drug and Non-Medical Device Research Ethics Committee, number 18/12/35. The participants were informed about the study and written and verbal consent was obtained in accordance with the principles of the Helsinki Declaration. Declaration of Conflicting Interests: The authors declare that they have no conflict of interest. Financial Disclosure: No financial support was received.
2020-10-11T12:39:58.195Z
2020-09-25T00:00:00.000
{ "year": 2020, "sha1": "caf096d75c90113aead4a5fc35b0f892d6d34c24", "oa_license": "CCBYNC", "oa_url": "https://dergipark.org.tr/en/download/article-file/1309379", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "caf096d75c90113aead4a5fc35b0f892d6d34c24", "s2fieldsofstudy": [ "Medicine", "Education" ], "extfieldsofstudy": [ "Medicine" ] }
16382496
pes2o/s2orc
v3-fos-license
Mucoepidermoid Carcinoma of Palatal Minor Salivary Glands with Intracranial Extension: A Case Report and Literature Review Mucoepidermoid carcinoma (MEC) is one of the most common malignant tumors of both major and minor salivary glands. Although there are reports of parotid MEC tumors extending intracranially via the facial nerve, intracranial extension from MEC originating from minor salivary glands in the palate has not previously been reported. This report presents a case of MEC arising from the minor salivary glands of the palate and extending into the middle fossa via the foramen rotundum with perineural invasion of the maxillary division of the trigeminal nerve. The patient received surgical intervention via a combined otolaryngology and neurosurgery approach to achieve gross total resection of the tumor. This was followed by adjuvant radiotherapy. The epidemiology, histopathology, and treatment of MEC originating from salivary glands are discussed. Introduction Mucoepidermoid carcinoma (MEC) is one of the most common malignant neoplasms of the salivary glands in both pediatric and adult populations. 1 Among the major salivary glands, the most common site of MEC is the parotid gland. 2 MEC tumors of parotid origin have been reported to extend intracranially to the temporal bone and cerebellopontine angle via the perineurium of the facial nerve. 3 Perineural invasion is a sign of high-grade MEC, worse patient outcomes, and a need for more aggressive surgical intervention. 4,5 MEC of the minor salivary glands most commonly occurs in the oral cavity at the junction of the hard and soft palate. To date, intracranial extension of palatal salivary gland MEC has not been reported. Here, we report a case of an adult patient with MEC originating from the minor salivary glands of the nasopharyngeal portion of the soft palate, extending into the right lateral wall of the nasopharynx, the pterygopalatine and infratemporal fossae, and reaching the anteromedial aspect of the middle fossa to involve the anterior temporal pole. Intracranial extension of the palatal MEC tumor likely occurred from the perineural invasion of the infraorbital nerve with the retrograde involvement of the maxillary branch of the trigeminal nerve at the foramen rotundum. Case Report The patient is a 53-year-old Caucasian male with a 2-month history of nasal obstruction and right-sided hearing loss. On physical examination, there was a visible serous effusion behind the right tympanic membrane. A computed tomography (CT) scan with contrast indicated the presence of a poorly defined soft tissue mass in the right posterior naso/oropharynx. Patient evaluation by an audiologist showed a moderate-to-severe
conductive hearing loss in the right ear with complete sparing of the hearing functions on the left side. Nasal endoscopy revealed an exophytic mass involving the right lateral nasopharyngeal wall and obstructing the eustachian tube. An endoscopic incisional biopsy of the accessible portion of the right nasopharyngeal mass was interpreted as a high-grade MEC with cystic and solid nests of squamous and mucinous glandular cells. Moderate nuclear atypia, frequent mitoses (up to 20 per 10 high-power fields), areas of necrosis, and perineural invasion were seen (Figs. 1 and 2). During this procedure, the mass was noted to block the right eustachian tube posteriorly and to overlie the soft palate inferiorly. Magnetic resonance imaging (MRI) of the orbit and face demonstrated a mass arising from the nasopharynx extending into the infratemporal fossa and pterygopalatine region (Fig. 3). MRI also showed the involvement of the maxillary branch of the trigeminal nerve, the cavernous sinus, and the anterior temporal pole in the middle cranial fossa. The patient underwent joint resection of the tumor with otolaryngology and neurosurgery involvement. An endoscopic transnasal approach was used to resect the portion of the lesion involving the lateral nasal wall, nasal floor, nasal septum, nasopharynx, sphenoid sinus, ethmoid sinus, pterygopalatine fossa, and infratemporal fossa. Within the pterygopalatine fossa, the tumor was noted to be pedicled superiorly to the foramen rotundum. Following endoscopic endonasal tumor resection, transoral resection of the involved areas of the palate was performed. The soft palate and uvula were split in half to facilitate tumor resection from the nasal aspect of the soft palate. No tumor was visualized proximal to the junction of the hard and soft palate in the oral cavity. The neurosurgery team performed a middle fossa craniotomy. Extradural dissection demonstrated the involvement of the V2 division of the trigeminal nerve. The V1 and V3 nerves were also visualized and appeared normal on inspection. The V2 branch that was infiltrated by tumor was sacrificed to the level of the foramen rotundum, through the opening into the pterygopalatine fossa, to unite with the endoscopic portion of the resection. A nasoseptal flap was used to repair the defect. At the 2-month follow-up, the patient was doing well without a cerebrospinal fluid leak or new neurological deficit, and endoscopic biopsy showed no evidence of gross disease. His main postoperative complaint has been a burning sensation in the right V2 distribution, which is being managed with trigeminal nerve blocks. The patient underwent adjuvant radiotherapy, with post-treatment imaging (positron emission tomography/CT and MRI) at 3 months after radiation completion showing complete remission. Discussion Intracranial extension of MEC is extremely rare. To date, only one case report in the literature has discussed cerebellopontine angle extension of parotid MEC arising from perineural invasion of the facial nerve. 6 Our case report is the first to demonstrate intracranial extension along the maxillary division of the trigeminal nerve via the foramen rotundum in an MEC arising from a minor salivary gland of the palate. Our patient underwent gross total resection of high-grade MEC followed by radiation therapy and has been free of recurrence at the most recent follow-up, 7 months after surgery.
MEC is one of the most common malignant neoplasms observed in the major and minor salivary glands. 2 MEC most commonly affects the parotid gland among the major salivary glands. Approximately 450 to 750 minor salivary glands are present in the head and neck. 5 These minor glands are most commonly situated at the junction of the hard and soft palate. Around 8 to 15% of salivary gland tumors arise in the palate, and these are reportedly malignant in 40 to 82% of patients. Minor salivary gland tumors are more likely to be malignant than tumors of major salivary glands. 5 In one study of intraoral minor salivary gland neoplasms, MEC comprised 21% of all tumors and 48% of all malignant tumors. 7 In the oral cavity, MEC presents as a fixed, rubbery, and painless mass. The most common symptom is swelling, followed by pain, ulceration, and discoloration. The World Health Organization defines MEC as "a malignant glandular epithelial carcinoma characterized by mucous, intermediate, and epithelial cells, with columnar, clear cell and oncocytoid features." 8 The "intermediate cells" are difficult to characterize, as their description varies in the literature. One review article describes intermediate cells as "nondescript" cells with a morphology that does not match a differentiated or recognized phenotype, such as mucous or squamoid. 8 Determination of low-grade versus high-grade MEC is based on morphological descriptions of the predominant cell types in the tumor. 9 The low-grade type is characterized by > 50% mucinous cells, while the high-grade type is characterized by a predominance of epidermoid cells with < 10% mucinous cells. In a study of 55 patients with long-term follow-up over a period of 30 years, the 5- and 10-year survival rates were 92.4% and 90.1%, respectively, regardless of histological classification. Accounting for histological classification, the study demonstrated a 10-year survival rate of 96.7% in highly differentiated MEC as compared with 81.6% in poorly differentiated tumors. A database study involving 2,400 patients showed no difference in 5-year disease-specific survival between intermediate and low-grade MEC tumors (97.4 vs. 98.8%, respectively). 10 The first-line treatment for MEC is surgical resection with the goal of disease-free margins while minimizing morbidity. 8 A retrospective review of MEC in major salivary glands demonstrated that, of the 234 patients, 208 received surgical therapy only, 22 received combined surgery and radiation therapy, and 2 underwent surgery followed by radiation and chemotherapy. 2 This review also demonstrated that patients with positive surgical margins receiving adjuvant radiation had survival times comparable to those with complete surgical resection. The role of chemotherapy is not well defined. The Radiation Therapy Oncology Group (RTOG) is performing a phase II randomized controlled trial to determine whether the administration of cisplatin with postoperative radiotherapy improves outcomes in patients with high-risk salivary tumors, including MEC. 8
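The grading rule quoted above is simple enough to state as code. A sketch follows, with the two thresholds taken directly from the text; treating the remaining range as intermediate-grade is an assumption added here for completeness.

```python
# Hypothetical sketch of the MEC grading rule described above: low-grade
# tumors show > 50% mucinous cells, high-grade tumors show a predominance of
# epidermoid cells with < 10% mucinous cells. The intermediate label for the
# remaining range is an assumption, not stated in the text.

def mec_grade(mucinous_fraction: float) -> str:
    if mucinous_fraction > 0.50:
        return "low-grade"
    if mucinous_fraction < 0.10:
        return "high-grade"
    return "intermediate-grade"

for f in (0.65, 0.30, 0.05):
    print(f"{f:.0%} mucinous cells -> {mec_grade(f)}")
```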
Close follow-up and adjuvant radiotherapy are advised to decrease the risk of tumor recurrence.
2017-08-15T06:01:29.483Z
2016-10-01T00:00:00.000
{ "year": 2016, "sha1": "35c007f131ef0d5c135b855d3279720c9509f467", "oa_license": "CCBYNCND", "oa_url": "http://www.thieme-connect.de/products/ejournals/pdf/10.1055/s-0036-1593396.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "35c007f131ef0d5c135b855d3279720c9509f467", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
117014176
pes2o/s2orc
v3-fos-license
Magnetic Field Effects in Fermion Pairings This paper considers various fermion pairings of interest for the QCD phases. The effects of an external magnetic field on the pairing mechanisms, on the realization of new condensates, and on the properties of the magnetized phases are all explored and discussed.

Introduction The structure of neutron stars is directly connected to the equation of state (EoS) of their matter content. The strongly interacting matter at the core of a neutron star is mainly formed by protons and neutrons and is usually described by effective nuclear models. Nevertheless, both for strange stars, which are formed by deconfined u, d and s quarks, and for hybrid stars with very dense cores, where the overlap of the nucleons could lead to quark deconfinement, a description in terms of quark matter makes more sense. For these cases, QCD-like quark models have to be used to investigate the different phases that can be realized depending on the values of the density, temperature, and external fields present. Finding observable signatures that allow one to distinguish among different internal phases of neutron stars is a main goal in the astrophysics of compact objects. The physics of neutron stars is hence intimately related to the investigation of the QCD phases, whose properties are explored in heavy-ion collisions at several experimental facilities all over the world. In this regard, an important problem to be investigated is the influence of a magnetic field on the structure of the QCD phase diagram and, particularly, on the location and the nature of deconfinement and chiral symmetry restoration. One reason for this interest is, on the astrophysical side, the existence of magnetic fields $\sim 10^{14}$-$10^{15}$ G at the surface of magnetars [1], with inner values estimated to be $10^{18}$ G or $10^{19}$ G for hybrid stars with nuclear [2] or quark-matter [3] cores, respectively, or even higher, $\sim 10^{20}$ G, for strange stars [3]. Closer to home, off-central heavy-ion collisions, as in the Au-Au collisions at RHIC, can generate magnetic fields as large as $10^{18}$ G, while even larger fields, $\sim 10^{19}$ G, can be generated with the energies reachable at the LHC for Pb-Pb collisions. Even though these magnetic fields decay quickly, they only decay to a tenth of the original value over a time scale of the order of the inverse of the saturation scale at RHIC [4,5]; hence they may influence the properties of the QCD phases probed by the experiment. Strong magnetic fields will likely also be generated in the future planned experiments at FAIR, NICA and J-PARC, which will make it possible to explore the region of higher densities under a magnetic field. Quite often, going from one QCD phase to another is connected to a change in the expectation value of a fermion condensate. For example, the order parameter for the chiral phase transition is a fermion-antifermion condensate $\langle\bar\Psi\Psi\rangle$, while the order parameter for the transition from color superconductivity to normal quark matter is a fermion-fermion condensate of generic form $\langle\bar\Psi_C\Gamma\Psi\rangle$, with $\Gamma$ being some matrix in color, flavor, and Dirac space that depends on the model under consideration. At intermediate densities, a condensate of quarks and holes can be formed, leading to an inhomogeneous phase in which chiral symmetry is broken in a different way.
Each of these condensates comes from pairing interactions that are present in the original QCD theory and can be modeled in effective theories through Nambu-Jona-Lasinio-like interaction terms. The influence of an external magnetic field on these different condensates is relevant to understand how it will affect the QCD phases and the phase transitions. In this paper, I shall discuss various fermion pairings that are of interest for the QCD phases, and explore the effects of an external magnetic field on these pairings, on the realization of new condensates, and on the properties of the magnetized phase.

Magnetic Catalysis of Chiral Symmetry Breaking The simplest pairing, represented in Fig. 1, takes place between particles and antiparticles at the Dirac sea. The particle and antiparticle have opposite spins, opposite chiralities, and opposite momenta. Hence, the chiral condensate is a neutral and homogeneous scalar that breaks chiral symmetry. This is the well-known chiral condensate responsible for the constituent quark mass in QCD. It only occurs when the strength of the coupling between the fermions is stronger than some critical value, which depends on the details of the interaction and the model considered. One may wonder how a magnetic field can influence a neutral condensate, but one should not forget that the fermions in the pair are charged, so each of them can minimally couple to the magnetic field. This coupling leads to the Landau quantization of the fermion's momentum. Recall that the energy of a fermion of mass m and charge q in a uniform magnetic field is given by $E_n^2 = p_z^2 + 2|q|Bn + m^2$, $n = 0, 1, 2, \ldots$, where B is the strength of the magnetic field and we have assumed, without loss of generality, that it points in the z direction. The Landau level n characterizes the quantization of the momentum in the direction transverse to the field. It is easy to see that the infrared dynamics of the particles in the lowest Landau level (LLL), n = 0, is (1+1)-dimensional. In the chiral limit there is no energy cost to excite the LLL particles about the Dirac sea, and a large number of degenerate excitations are produced. This situation makes the system unstable against the formation of particle-antiparticle pairs, which are now favored even at the weakest attractive interaction. This is the well-known phenomenon of magnetic catalysis of chiral symmetry breaking (shortened to MC from now on) [6]. It is very similar to the Bardeen-Cooper-Schrieffer (BCS) mechanism that takes place about the Fermi surface and favors the formation of Cooper pairs. In the original works on MC, this mechanism was only associated with the generation of a chiral condensate that in turn leads to a dynamical fermion mass. However, it is easy to understand that in the presence of a magnetic field, a second condensate is unavoidable. To see this, notice that the fermion-antifermion pair possesses a net magnetic moment, because the particles in the pair have opposite charges and spins. In the absence of a magnetic field, these magnetic moments point in all directions and have no effect on the ground state. But when a magnetic field is present, the pairs' magnetic moments orient themselves in the direction of the field, so the ground state can have a net component of the anomalous magnetic moment (AMM) in the field direction, which manifests as a spin-one condensate of Dirac structure $i\gamma^1\gamma^2$. This can also be seen as a consequence of the explicit breaking of the rotational group by the field.
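As a quick numerical illustration of the Landau-level structure discussed above, the minimal Python sketch below evaluates the spectrum $E_n^2 = p_z^2 + 2|q|Bn + m^2$ and shows that in the chiral limit the n = 0 level is gapless, so its energy depends only on $p_z$; the function name and parameter values are illustrative choices, not notation from the paper.

```python
import numpy as np

def landau_energy(pz, n, B, q=1.0, m=0.0):
    """Landau-level energy E_n = sqrt(pz^2 + 2|q|B n + m^2), natural units."""
    return np.sqrt(pz**2 + 2.0 * abs(q) * B * n + m**2)

# In the chiral limit (m = 0) the lowest Landau level (n = 0) is gapless:
# E depends only on pz, so the LLL infrared dynamics is (1+1)-dimensional.
B = 1.0
pz = np.linspace(-1.0, 1.0, 5)
for n in range(3):
    print(f"n = {n}: E = {landau_energy(pz, n, B)}")
```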
In the presence of the magnetic field only the subgroup O(2) of rotations about the field axis remains as a symmetry of the theory. Given that the two symmetries the new condensate would break, chiral and rotational, are already broken, either spontaneously or explicitly, these symmetries are not protected, and nothing prevents the emergence of this spin-one condensate along with the conventional chiral condensate. The situation resembles the realization of the symmetric gaps in the Color-Flavor-Locked (CFL) phase of color superconductivity [7-9]. Depending on the theory considered, the presence of an AMM condensate manifests in the free energy as a term that couples the field with the dynamical AMM (QED) [10], or, for QCD-inspired NJL theories, through a spin-spin interaction generated via the Fierz identities in a magnetic field [11]. No matter how the existence of the AMM is manifested in the particular theory, one can always show that in the presence of a magnetic field, the ground state of the system does not admit a solution that has a nonzero chiral condensate but zero AMM. Therefore, the existence of this spin-one condensate is unavoidable and universal in the MC phenomenon. An AMM condensate can produce effects like a nonperturbative Zeeman splitting in massless QED [10]. The MC is a very universal mechanism that has been corroborated in QED and in many different fermion model calculations in vacuum and at finite temperature. Nevertheless, some recent lattice QCD calculations in a magnetic field have produced contradictory results. While in Ref. [12] the validity of the MC behavior was corroborated, Ref. [13] claims that the MC scenario is found at low temperatures, but around the crossover temperature the chiral condensate shows a complex behavior with the magnetic field that results in a decrease of the transition temperature with the field. This interesting issue is still under scrutiny and more investigations will be needed to settle it.

Fermion Pairing at Finite Density in a Magnetic Field At finite density the chiral condensate is less favored, because hopping the antiparticles from the Dirac sea to the Fermi surface, where they can pair with the particles, costs twice the Fermi energy. In this case the magnetic field's influence on the chiral pair would only be important for fields much larger than the density. Two other pairings are, however, favored at finite density and can lead to very interesting new physics. One is the Cooper pairing responsible for BCS superconductivity, occurring between fermions at the Fermi surface whenever an attractive interaction, no matter how weak, is present (left panel in Fig. 2). The other is the density wave (DW) type of pairing between a particle and a hole of momentum P each (right panel in Fig. 2). The DW pairing is familiar from two-dimensional systems in condensed matter. In four dimensions, the DW pairing requires a strong coupling to be favored over the BCS type. While the BCS condensate is homogeneous, the DW one is inhomogeneous and hence breaks translational symmetry. These condensates are familiar in condensed matter, but they are also relevant in QCD at finite density. At very large densities the most favored pairing in QCD is of the BCS type, because the effects of the quarks on the gluon screening become large, making the coupling weak and decreasing the likelihood of a DW condensate.
On the other hand, the DW condensate may have an edge over color superconductivity in the region of intermediate densities, where not only is the coupling stronger, but also, since it pairs single-flavor quarks, it is immune to the pairing stress produced by different quark chemical potentials that leads to chromomagnetic instabilities in color superconductivity. The dimensional reduction of the LLL, which is so important for the chiral condensation in the mechanism of MC, is irrelevant at finite density, as the excitations about the Fermi surface are already (1+1)-dimensional ((1+1)-D) at zero field, because their energy only changes in the direction perpendicular to the Fermi surface. However, a magnetic field can affect the pairing mechanism in a different way. In the presence of a magnetic field, the geometry of the Fermi surface changes, turning into a discrete set of rings defined by the intersection of the surface of the Fermi sphere at zero field with the cylinders associated with the different Landau levels (Fig. 3) in momentum space (a numerical sketch of this ring structure is given at the end of this section). Pairing now can occur between particles excited in small cylinders about each Landau level in the Fermi surface. Therefore, the field influences the pairing mechanism through this modification of the Fermi surface, and also through the change in the degeneracy of the states, which now becomes proportional to the field. As I shall discuss in the next sections, in the context of QCD a magnetic field can affect the realization of Cooper and DW pairings in quite nontrivial ways.

Magnetic CFL Superconductivity Cooper pairing begins to be important for QCD in the high-density, low-temperature region, where it is responsible for the phenomenon of color superconductivity. Because a pair of two quarks is always colored, the ground state breaks the color symmetry, forming a color superconductor. The most favored phase at very high densities is the CFL phase, which is realized through the color-antitriplet, flavor-antitriplet interaction channel. The CFL ground state breaks chiral symmetry through a locking of color and flavor transformations and reduces the original symmetry to the diagonal subgroup $SU(3)_{C+L+R}$ of locked transformations [14]. This unbroken group contains an Abelian $U(1)_Q$ subgroup which consists of a simultaneous electromagnetic and color rotation and plays the role of a "rotated" electromagnetism. The group generator Q remains unbroken because all the diquarks in the condensate have zero Q-charge. Hence, the Q photon is massless, and consequently a rotated magnetic field will not be subject to the Meissner effect in the CFL superconductor. Since the mixing angle between the original electromagnetic and gluon generators is very small, the Q photon is mostly the original photon with a small admixture of gluon. Thus, a regular magnetic field will penetrate a CFL superconductor almost unabated. Although all the diquarks have zero net Q-charge, some are formed by neutral constituents (both quarks Q-neutral) and some by charged constituents (quarks in the pair have opposite Q-charges). The Q-charged quarks in the latter set of pairs can couple to an external magnetic field and lead to a splitting of the CFL gap $\Delta_{CFL}$ into a gap $\Delta$ that only gets contributions from diquarks with neutral quarks, and a second gap $\Delta_B$ that gets contributions from diquarks with Q-neutral and Q-charged constituents [9]. In addition, a third gap $\Delta_M$ is also formed, because the Cooper pairs with Q-charged constituents have a nonzero AMM [15].
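As promised above, here is a minimal sketch of the Landau-ring geometry of the magnetized Fermi surface: the zero-field Fermi sphere of radius $p_F = \sqrt{\mu^2 - m^2}$ intersects the Landau cylinders $p_\perp^2 = 2|q|Bn$ in rings at discrete longitudinal momenta. The function and parameter names are illustrative assumptions, not notation from the paper.

```python
import numpy as np

def fermi_surface_rings(mu, B, q=1.0, m=0.0):
    """Rings where the zero-field Fermi sphere (radius pF = sqrt(mu^2 - m^2))
    intersects the Landau cylinders p_perp^2 = 2|q|B n, for n = 0..n_max."""
    pF2 = mu**2 - m**2
    n_max = int(pF2 // (2.0 * abs(q) * B))
    return [(n, np.sqrt(pF2 - 2.0 * abs(q) * B * n)) for n in range(n_max + 1)]

for n, pz in fermi_surface_rings(mu=1.0, B=0.1):
    print(f"Landau level {n}: ring at |p_z| = {pz:.3f}")
```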
Similarly to what occurs in the MC scenario, the explicit breaking of the rotational symmetry by the uniform magnetic field opens new channels of interactions through the Fierz transformations and allows the formation of a spin-one diquark condensate characterized by the gap $\Delta_M$. This new order parameter is proportional to the component in the field direction of the average magnetic moment of the pairs of charged quarks. As can be seen in Fig. 4, in the region of large fields the magnitude of $\Delta_M$ is bigger than $\Delta$ and comparable to $\Delta_B$. Since there is no solution of the gap equations with nonzero scalar gaps and zero value of this magnetic-moment condensate, its presence in the MCFL phase is unavoidable. At lower fields, the MCFL phase exhibits the typical de Haas-van Alphen oscillations [16] of magnetized systems. Even though the separation between the $\Delta$ and $\Delta_B$ gaps is relevant only at very strong fields, the difference between the low-energy physics of the CFL and the MCFL phases becomes important at much smaller field strengths, of order $\Delta_{CFL}^2$ [17]. This can be understood from the following considerations: in the CFL phase the symmetry breaking is $SU(3)_C \times SU(3)_L \times SU(3)_R \times U(1)_B \to SU(3)_{C+L+R}$. This symmetry reduction leaves nine Goldstone bosons: a singlet associated with the breaking of the baryonic symmetry $U(1)_B$, and an octet associated with the broken axial group $SU(3)_A$. Once a magnetic field is switched on, the difference between the electric charge of the u quark and that of the d and s quarks reduces the original flavor symmetry of the theory and also the symmetry group that remains after the diquark condensation. The breaking pattern for the MCFL phase [9] then becomes $SU(3)_C \times SU(2)_L \times SU(2)_R \times U(1)_A^{(1)} \times U(1)_B \to SU(2)_{C+L+R}$. The group $U(1)_A^{(1)}$ (not to be confused with the usual anomalous $U(1)_A$) is related to the current which is an anomaly-free linear combination of the s, d, and u axial currents. In this case only five Goldstone bosons remain. Three of them correspond to the breaking of $SU(2)_A$, one to the breaking of $U(1)_A^{(1)}$, and one to the breaking of $U(1)_B$. Thus, an applied magnetic field reduces the number of Goldstone bosons in the superconducting phase from nine to five. Not only does the MCFL phase have a smaller number of Goldstone fields, but all these bosons are neutral with respect to the rotated electric charge. Hence, no charged low-energy excitation can be produced in the MCFL phase. This effect can be relevant for the low-energy physics of a color superconducting star's core and hence for the star's transport properties. In particular, the cooling of a compact star is determined by the particles with the lowest energy, so a star with a core of quark matter and a sufficiently large magnetic field could display a distinctive cooling process. Although the symmetries of the CFL and MCFL ground states are quite different, at weak fields the CFL phase still remains a good approximation for describing the low-energy physics, since the masses of the charged Goldstone bosons depend on the magnetic field and are very small. However, as shown in Ref. [17], when the field increases and becomes comparable to $\Delta_{CFL}^2$, the mass of the charged Goldstone bosons is large enough for them to decay into a particle-antiparticle pair, and only the neutral Goldstone bosons remain. This is the energy scale at which the MCFL phase becomes physically relevant. The MCFL matter is subject to magnetoelectricity, pressure anisotropies, and several other interesting effects. For a review see [18].
The possibility of self-bound MCFL matter in neutron stars was explored in [19], where the magnetic field was found to act as a destabilizing factor for the realization of strange matter, in such a way that a magnetized strange star could exist only if the bag constant decreases with the field.

Quarkyonic Matter in a Magnetic Field In this section, we will consider fermion pairing in QCD at low temperatures and intermediate densities in the large $N_c$ limit. In the region of intermediate densities, i.e., large enough for the system to be in the quark phase, but small enough to support nonperturbative interactions, color superconductivity and DW pairing compete with each other. In the large $N_c$ limit the diquark condensate is definitely not favored, because it is not a color singlet and decreases as $1/N_c$. These are the conditions where quarkyonic matter can be realized. Quarkyonic matter (QyM) is a large-$N_c$ phase of cold dense quark matter recently suggested in Ref. [20]. The main feature of QyM is the existence of asymptotically free quarks deep in the Fermi sea and confined excitations at the Fermi surface. The quarks lying deep in the Fermi sea are weakly interacting because they are hard to excite due to Pauli blocking. Their interactions are hence very energetic, and the confining part of the interaction does not play any role [21]. On the other hand, excitations of quarks within a shell of width $\Lambda_{QCD}$ from the Fermi surface interact through infrared-singular gluons at large $N_c$ and hence are confined. For the QyM to exist, the screening effects have to be under control, so that they cannot eliminate confinement at the Fermi surface. Such a region can be defined by the condition $m_D \ll \Lambda_{QCD} \ll \mu$, with $m_D$ the screening mass of the gluons and $\mu$ the quark chemical potential [22]. As shown in Refs. [22,23], chiral symmetry can be broken in QyM through the formation of a translationally non-invariant condensate that arises from the pairing between a quark with momentum P and the hole formed by removing a quark with opposite momentum -P from the Fermi surface. The DW condensate that forms in QyM is a linear combination of the chiral condensate $\langle\bar\psi\psi\rangle$ and a spin-one, isosinglet, odd-parity condensate $\langle\bar\psi\sigma^{0z}\psi\rangle$. Here z is the direction of motion of the wave. At each given patch of the Fermi surface, z is the direction perpendicular to that surface. This combination of two inhomogeneous condensates has been named the Quarkyonic Chiral Spiral (QyCS) [23]. The $\langle\bar\psi\sigma^{0z}\psi\rangle$ component corresponds to the condensation of an electric dipole moment. The influence of a magnetic field on the QyCS was investigated in Ref. [24] using a single-flavor (3+1)-D QCD theory with a Gribov-Zwanziger confining gluon propagator. Considering the polar patches shown in Fig. 5, and assuming a magnetic field that points in the z direction and has magnitude $B \leq \Lambda_{QCD}^2$, it was found that the (3+1)-D theory is mapped into a (1+1)-D QCD theory with 2L + 1 flavors and flavor symmetry SU(2L) × U(1). The spinor fields in this (1+1)-D theory are defined by $\Phi_0^T = (\varphi_\uparrow^{(0)}, 0)$ and $\Phi_l^T = (\varphi_\uparrow^{(l)}, \varphi_\downarrow^{(l)})$, with the flavor index l and the arrows ↑, ↓ corresponding, respectively, to the Landau level and to the spin-up and spin-down components of the spinors of the original 4D theory. The 2D Dirac Γ matrices are defined in terms of the Pauli matrices as $\Gamma^0 = \sigma_1$, $\Gamma^z = -i\sigma_2$, $\Gamma^5 = \sigma_3$. L is the maximum number of Landau levels that can fit into the polar patches.
After performing a transformation of the quark fields to eliminate the chemical potential (it actually remains in the theory through the anomaly of the baryon charge), one finds, as argued in [24], that the theory admits the formation of two independent condensates, $\langle\bar\Phi'\Phi'\rangle$ and $\langle\bar\Phi'\tau_3\Phi'\rangle$, where the primes denote the transformed fields (see Ref. [23] for the definition of the flavor matrix $\tau_3$ in the present context). A condensate of the form $\langle\bar\Phi\tau_3\Phi\rangle = \langle\bar\varphi_\uparrow\varphi_\uparrow\rangle - \langle\bar\varphi_\downarrow\varphi_\downarrow\rangle$ is not present in the QyM at zero magnetic field, because at zero field there is a spin degeneracy, so the spin-up and spin-down condensates have to be the same and thus cancel out in $\langle\bar\Phi\tau_3\Phi\rangle$, in agreement with the claims of Ref. [23]. When $B \neq 0$, the LLL contribution, which is the only level that has no spin degeneracy, makes it possible for this second condensate to be present. This is true even if the paired spin-flavor terms in the sum over the higher Landau levels cancel out. In terms of the unprimed fields we find $\langle\bar\Phi\Phi\rangle = \cos(2\mu z)\,\langle\bar\Phi'\Phi'\rangle$ and $\langle\bar\Phi\tau_3\Phi\rangle = \cos(2\mu z)\,\langle\bar\Phi'\tau_3\Phi'\rangle$. Therefore, in the presence of a magnetic field two sets of inhomogeneous condensates emerge. Here again an extra condensate is generated thanks to the explicit breaking of the rotational symmetry by the magnetic field. Notice that the matrix $\tau_3$ is a generator of the group of flavor symmetries in (1+1)-D, but in the 4D theory it is actually related to the spin component in the third direction, which is precisely the direction of the external magnetic field. Going back to the quark fields in the 4D theory, the two chiral spirals in the presence of a magnetic field are

$\langle\bar\psi\psi\rangle = \Delta_1\cos(2\mu z), \qquad \langle\bar\psi\gamma^0\gamma^3\psi\rangle = \Delta_1\sin(2\mu z),$
$\langle\bar\psi\gamma^1\gamma^2\psi\rangle = \Delta_2\cos(2\mu z), \qquad \langle\bar\psi\gamma^5\psi\rangle = \Delta_2\sin(2\mu z).$

The field-induced chiral spiral is a combination of a condensate of magnetic moment and a pion condensate, also varying in the direction parallel to the field. Both parity and time-reversal symmetries are broken in the system. The spontaneous generation of inhomogeneous condensates with electric and magnetic dipole moments may lead to interesting observational implications, with potential consequences in dense environments like the cores of neutron stars or the planned high-density heavy-ion collision experiments.
2013-07-29T05:59:34.000Z
2013-07-29T00:00:00.000
{ "year": 2013, "sha1": "a1b1a132f7c8a33d9d8f068fbfcdc2a1e53d4116", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "a1b1a132f7c8a33d9d8f068fbfcdc2a1e53d4116", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
119641581
pes2o/s2orc
v3-fos-license
A cosmological solution of Regge calculus We revisit the Regge calculus model of the Kasner cosmology first considered by S. Lewis. One of the most highly symmetric applications of lattice gravity in the literature, Lewis' discrete model closely matched the degrees of freedom of the Kasner cosmology. As such, it was surprising that Lewis was unable to obtain the full set of Kasner-Einstein equations in the continuum limit. Indeed, an averaging procedure was required to ensure that the lattice equations were even consistent with the exact solution in this limit. We correct Lewis' calculations and show that the resulting Regge model converges quickly to the full set of Kasner-Einstein equations in the limit of very fine discretization. Numerical solutions to the discrete and continuous-time lattice equations are also considered.

Introduction The discrete formulation of gravity proposed by T. Regge in 1961 [1] has been deployed in a wide variety of settings, from probing the foundations of gravity and the quantum realm [2,3] to numerical studies of classical gravitating systems [2,4]. Regge calculus continues to be used in new and diverse ways; recent examples include Ricci flow [5] and an explanation of dark energy [6]. In this paper we re-examine the Regge calculus model of the vacuum Kasner cosmology first considered by Lewis [7], with the goal of gaining insight into the continuum limit of this discrete approach to gravity. In a general setting the structural differences between a continuous manifold and a discrete simplicial lattice lead to difficulties in directly comparing the Regge and Einstein equations or their solutions, with a single Regge equation per edge in the lattice compared with ten Einstein equations per event in spacetime. We expect many more simplicial equations than Einstein equations in a general simulation, and some form of averaging must be expected before the equations (or their solutions) can be compared. Lewis [7] studied both the Kasner and spatially flat Friedmann-Lemaître-Robertson-Walker (FLRW) cosmologies using a regular hypercubic lattice. We only consider the Kasner solution in this paper, where the high degree of symmetry, without the added complication of matter, allows explicit examination of the Regge equations in the continuum limit. By aligning the degrees of freedom of the lattice with the continuum metric components, Lewis was able to avoid the issue of averaging and make direct comparisons between the Regge equations and the Kasner-Einstein equations in the continuum limit. Unfortunately, Lewis was only able to recover one of the four Einstein equations in this limit, and even this was only possible after the equations were carefully averaged [7]. Without this averaging it is not clear that the equations obtained by Lewis actually represent the Kasner cosmology. We show that Lewis neglected a vital portion of the simplicial curvature arising from the two-dimensional spacelike lattice faces that lie on constant-time hypersurfaces. The Kasner cosmology has zero intrinsic curvature on constant-time hypersurfaces, so the lattice curvature concentrated on spacelike faces (measured on a plane with signature "-+" orthogonal to the face) makes an important contribution to the total lattice curvature. We show that when these curvature terms are included, the discrete equations exactly reproduce the Kasner-Einstein equations in the limit of very fine triangulations without the need to average.
In addition to reconsidering Lewis' analytic work on the spatially flat, anisotropic Kasner cosmology [7], we construct numerical solutions to the discrete and continuous-time lattice equations. This builds on the previous work of Collins and Williams [8] and Brewin [9] on highly symmetric, continuous-time, closed FLRW cosmologies, and the (3+1)-dimensional numerical study of the Kasner cosmology by Gentle [10] with discrete time and coarse spatial resolution. We begin by briefly describing the continuum Kasner solution.

The Kasner cosmology The Kasner solution [11] is a vacuum, homogeneous, anisotropic cosmological solution of the Einstein equations with topology $R \times T^3$. Appropriate slicing creates flat spacelike hypersurfaces, while the global topology allows non-trivial vacuum solutions of the Einstein equations. The Kasner metric may be written in the form [12]

$ds^2 = -dt^2 + f^2(t)\,dx^2 + g^2(t)\,dy^2 + h^2(t)\,dz^2,$

where the functions f, g and h are determined by the vacuum Einstein equations. Note that the equation $G_{tt} = 0$ is a first integral of the remaining equations. The Kasner metric components are $f = t^{p_1}$, $g = t^{p_2}$ and $h = t^{p_3}$, where the Kasner exponents $p_i$ are unknown constants. With this choice of metric functions the vacuum Kasner-Einstein equations reduce to two algebraic constraints, $p_1^2 + p_2^2 + p_3^2 = p_1 + p_2 + p_3 = 1$, leaving a one-parameter family of Kasner cosmologies. The Kasner solutions are the basis of the Mixmaster cosmologies [13], which may be regarded as a series of Kasner-like epochs undergoing an infinite series of "bounces" from one set of Kasner exponents to the next. It is conjectured that these asymptotically velocity-term-dominated models embody the generic approach to the singularity in crunch cosmologies, and it has been shown that the bounces represent a chaotic map on the Kasner exponents [14,15].

A homogeneous, anisotropic spacetime lattice We follow Lewis and build a discrete approximation of the Kasner spacetime using a highly symmetric lattice of rectangular prisms. The regularity of the lattice implements homogeneity, while the rectangular prisms allow a degree of anisotropy. The complete four-geometry is constructed by extruding the initial three-geometry forward in time and filling the interior with four-dimensional prisms. Each flat $T^3$ hypersurface consists of a single rectangular prism with volume $x_i y_i z_i$, where opposing faces are identified to give the global topology. This prism is subdivided into $n^3$ regular prisms with edge lengths $u_i$, $v_i$ and $w_i$, with $u_i = x_i/n$, $v_i = y_i/n$ and $w_i = z_i/n$ (5), where the subscript i labels the Cauchy surface at time $t_i$. The three-geometry is joined to a similar structure at time $t = t_{i+1} = t_i + \Delta t_i$, where the prisms have edge lengths $u_{i+1}$, $v_{i+1}$ and $w_{i+1}$. This structure is shown in figure 1.

Figure 1. The section of a world-tube joining a rectangular prism to its future counterpart. Homogeneity implies that an observer will fall freely along the centre of the world-tube, providing a convenient coordinate system from which to view the lattice.

Time evolution of the initial surface maintains homogeneity, so within the world-tube of each prism there exists a local freely-falling inertial frame. In the coordinates of this frame the spatial coordinates of vertex A in figure 1 can be written as $(u_i/2,\ v_i/2,\ -w_i/2)$ at time $t_i$, and similarly the coordinates of vertex $A^+$ (the counterpart of A on the next hypersurface) are $(u_{i+1}/2,\ v_{i+1}/2,\ -w_{i+1}/2)$ at time $t_{i+1}$. The spacetime interval along the timelike edge joining A and $A^+$ is defined to be $m_i^2 < 0$, and thus

$m_i^2 = -\Delta t_i^2 + \tfrac{1}{4}\left(\Delta u_i^2 + \Delta v_i^2 + \Delta w_i^2\right)$ (6),

where the difference operator is defined as $\Delta l_i = l_{i+1} - l_i$.
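To make the one-parameter Kasner family concrete, the sketch below uses the standard Lifshitz-Khalatnikov parameterization of the exponents (a textbook result, not something introduced in this paper) and verifies both algebraic constraints numerically.

```python
import numpy as np

def kasner_exponents(u):
    """Lifshitz-Khalatnikov parameterization of the Kasner exponents."""
    d = 1.0 + u + u**2
    return np.array([-u, 1.0 + u, u * (1.0 + u)]) / d

for u in (0.5, 1.0, 2.0):
    p = kasner_exponents(u)
    # Both Kasner constraints should evaluate to 1.
    print(u, p, p.sum(), (p**2).sum())
```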
Note that the requirement $m_i^2 < 0$ implies a restriction on the choice of $\Delta t_i$ for a given value of the "resolution" parameter n. Homogeneity guarantees that identical expressions hold for all timelike edges joining the spacelike hypersurfaces labeled $t_i$ and $t_{i+1}$. The discrete spacetime curvature in Regge calculus is manifest on the two-dimensional faces on which three-dimensional blocks hinge [1], and is represented by the angle deficit (the difference from the flat-space value) measured in the plane orthogonal to the face. There are two distinct classes of two-dimensional faces, or hinges, in the lattice: timelike areas formed by evolving a spacelike edge forward to the next hypersurface, and the rectangular faces of the prisms on t = constant slices. The timelike trapezoid formed when the spatial edge $u_i$ is carried forward in time from the hypersurface labeled $t_i$ to $t_{i+1}$, face $ABA^+B^+$ in figure 1, has an area $A^{xt}_i$ fixed by the edge lengths $u_i$, $u_{i+1}$ and $m_i$. Likewise, the spacelike hinge ABCD shown in figure 1, consisting of the edges $u_i$ and $v_i$, has area $A^{xy}_i = u_i v_i$, with the other spacelike and timelike hinges defined similarly. Turning now to the curvature about these faces, we note that there are four distinct three-dimensional prisms which hinge on the timelike face $ABA^+B^+$. Each of these is formed by dragging one of the two-dimensional faces containing the edge AB forward in time. This includes the prisms $ABCDA^+B^+C^+D^+$ and $ABEFA^+B^+E^+F^+$. The remaining two prisms hinging on $ABA^+B^+$ are not shown in figure 1. The homogeneity of the lattice ensures that the four hyper-dihedral angles that surround the hinge $ABA^+B^+$ are the same. Denoting each of these angles as $\theta^{xt}_i$, the angle defect (or deficit angle) about the timelike face $ABA^+B^+$ is $\epsilon^{xt}_i = 2\pi - 4\theta^{xt}_i$, which measures the deviation of the total angle from the flat-space value of $2\pi$. The hyper-dihedral angle $\theta^{xt}_i$ is determined by the local prism geometry, with analogous definitions for $\theta^{yt}_i$ and $\theta^{zt}_i$, the hyper-dihedral angles about the remaining classes of timelike hinge. To measure the deficit angles about the spacelike faces, consider the curvature $\epsilon^{xy}_i$ about the hinge ABCD. Since this hinge is spacelike, the hyper-dihedral angles are boosts in the plane with signature "-+" orthogonal to the hinge. The deficit angle for a spacelike hinge is given by the sum over all boosts $\phi_k$ which surround the hinge [16]. The boost between the two three-dimensional prisms which hinge on ABCD, namely $ABCDA^+B^+C^+D^+$ and ABCDEFGH, follows from the local inertial frame, and the hinge ABCD is surrounded by four such boosts, two identical boosts above and two below. Thus the final deficit angle measured about ABCD is four times this boost (a small numerical sketch of both deficit conventions is given below). Further details on calculating the deficit angles about a spacelike hinge are contained in a recent paper by Brewin [16]. We note that this type of hinge was not included in the calculations of Lewis [7], leading to errors in the resulting Regge equations. We return to this issue below.

The Regge calculus model The vacuum Regge equations take the form [1]

$\sum_t \epsilon_t \, \frac{\partial A_t}{\partial L_j} = 0,$

where the sum is over all triangles t with area $A_t$ that contain the edge $L_j$, and $\epsilon_t$ is the deficit angle about triangle t. Using the geometric information collected in section 3 we obtain a single Regge equation for each class of edge on the ith hypersurface, namely the four equations (7)-(10), which correspond to the lattice edges $m_j$, $u_j$, $v_j$ and $w_j$, respectively. The structure of the Regge equations (7)-(10) is worth considering.
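The two deficit conventions just described can be captured in a few lines. The sketch below assumes the simplifications stated in the text (four equal hyper-dihedral angles around each timelike hinge, and a signed sum of boosts for spacelike hinges); the sign convention for the boost sum is a simplified reading of Brewin's definition [16], so treat it as illustrative rather than definitive.

```python
import math

def timelike_hinge_deficit(theta):
    """Deficit about a timelike hinge surrounded by four equal
    hyper-dihedral angles theta; flat space gives 2*pi in total."""
    return 2.0 * math.pi - 4.0 * theta

def spacelike_hinge_deficit(boosts):
    """Deficit about a spacelike hinge: the 'angles' are boosts in the
    Lorentzian (-+) plane orthogonal to the hinge, summed with sign."""
    return sum(boosts)

# Flat space: four right angles around a timelike hinge -> zero deficit,
# and the boosts around a spacelike hinge cancel in pairs.
print(timelike_hinge_deficit(math.pi / 2.0))               # 0.0
print(spacelike_hinge_deficit([0.01, 0.01, -0.01, -0.01]))  # 0.0
```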
Equation (7), associated with the timelike edges $m_i$, involves edges on, and between, two neighbouring hypersurfaces, whereas (8)-(10) involve information on and between three consecutive spatial hypersurfaces. Thus (7) is a first-order constraint, while (8)-(10) are second-order difference equations. This contrasts sharply with the equations derived by Lewis [7], who neglected the curvature associated with spacelike hinges and was thus unable to derive the spatial equations (8)-(10). After correctly obtaining (7), Lewis found that he could only make sense of the truncated spatial equations by considering a careful average. This averaging resulted, once again, in the timelike equation (7). Without the spatial curvature terms $\epsilon^{yz}_i$, $\epsilon^{xz}_i$ and $\epsilon^{xy}_i$ Lewis was unable to build the second-order Regge equations (8)-(10). Before examining the continuum limit of the Regge model we consider solutions of the discrete equations. Initial data is constructed at $t_0 = 1$ to match the continuum Kasner solution as far as possible. Taking the exact initial data to be $x(1) = y(1) = z(1) = 1$ and $\dot x(1) = p_1$, $\dot y(1) = p_2$, $\dot z(1) = p_3$, we mimic the properties of the exact solution in the lattice by setting $u_0 = v_0 = w_0 = 1$ and writing the initial edge differences in terms of a single unknown parameter $\alpha$. This form is chosen to maintain the continuum condition $\dot x(1) + \dot y(1) + \dot z(1) = 1$ to first order, which is physically equivalent to using the degrees of freedom in the initial data to generate linear expansion in the volume element. With these initial data the Regge constraint (7) is solved for the single parameter $\alpha$. A typical solution of the discrete Regge equations is shown in figure 2 for the case $p_1 = 0.75$ and $\Delta t = 0.01$. The solution to the initial value problem in this case is $\alpha = 0.0214379$, which represents a roughly 3% change in the initial rate of change of $u_i$ compared to the exact estimate. The evolution of the initial data is shown in figure 2a, while 2b shows the evolution of the fractional error in the Regge solutions compared with their exact counterparts. The fractional error in all edges remains in the 5%-10% range, and can be shown to shrink as the time step is reduced. The residual error $R_t$ in the first-order Regge constraint (7) is defined as the amount by which the numerically evolved edge lengths fail to satisfy (7), and is a measure of the consistency amongst the Regge equations. Figure 2c shows $R_t$ as a function of time with $\Delta t = 0.01$, and clearly the residual $R_t$ remains small throughout the evolution. We repeated this process with different $\Delta t$ to estimate the rate at which $R_t$ reduces as $\Delta t$ tends to zero. Figure 2d shows second-order convergence in the mean value of $R_t$ over $1 < t < 100$ as the timestep is reduced. We explore the issue of convergence in more detail in the following sections.

The continuous-time Regge model Many of the early applications of Regge calculus to highly symmetric spacetimes considered the differential equations that arise in the limit of continuous time and discrete space [7,8,9]. In this section we derive the continuous-time Regge equations and compare them with the results of Lewis [7] before considering numerical solutions. The continuous-time Regge model is developed from the discrete equations in section 4 in the limit of small $\Delta t$, with the assumption that spatial edges in the lattice approach continuous functions of time. For example, the spacelike edge $u_i$ is viewed as the value of a continuous function u(t) evaluated at $t = t_i$. A power series expansion then relates the edge lengths on neighbouring surfaces, $u_{i\pm1} = u \pm \dot u\,\Delta t + \tfrac12 \ddot u\,\Delta t^2 \pm O(\Delta t^3)$ (12), where all derivatives are evaluated at $t = t_i$.
Similar expressions hold for the edges $u_{i-1}$, $v_{i+1}$, etc. The series expansion for the timelike edges $m_i^2$ is obtained from (6). In the continuum limit the deficit angle about the spacelike area $A^{xy}_i$ is $\epsilon^{xy}(t) = \frac{4\ddot u}{4 - \dot u^2}\,\Delta t + O(\Delta t^3)$, and the deficit angle about the timelike face formed by the evolution of $u_i$ takes an analogous form, with similar expressions for the remaining faces. Using these expansions, the Regge equations (7)-(10) reduce to a first-order constraint (13) and three second-order evolution equations (14)-(16). The initial data are again chosen to ensure that the expansion of lattice volume elements is initially linear, to match the exact solution in section 2. Once again introducing a parameter $\alpha$, we set the initial edge velocities in terms of $\alpha$ and use the first-order Regge initial value equation (13) to solve for $\alpha$. This mimics the exact initial data, for which $\alpha = 0$. The second-order, non-linear differential equations (14)-(16) are used to evolve the initial data forward in time. Figure 3 shows solutions to the continuous-time Regge equations with $p_1 = 0.5$. The solution to the initial value problem is $\alpha = 0.0204161$, which represents a small deviation ($\approx 4\%$ change in the initial value of $\dot u$) from the exact initial data. As can be seen in the figure, the continuous-time Regge solutions are very similar to the exact Einstein solution, with the Regge edges deviating from the exact values by 5%-7%. In the next section we extend the limiting process to the spatial edges, and examine the difference between the exact and Regge equations more carefully.

The Regge equations in the limit of continuous space and time In this section we explore the discrepancies between the discrete Regge model and the Kasner spacetime by examining the truncation error incurred when the Regge equations are viewed as approximations of the Kasner-Einstein equations (1)-(4). We consider the continuum limit of the Regge equations (7)-(10) as the spatial lattice is refined in both space and time. The continuous space and time limit of the Regge equations is obtained from the temporal series expansions (12) together with the link between lattice edges and the global length scales given by (5). Substituting these into the discrete Regge equations (7)-(10) and simultaneously increasing the number of prisms (n → ∞) while reducing the timestep (∆t → 0), we obtain a series expansion for the discrete equations in the continuum limit. The refinement parameter n and timestep ∆t are chosen so that $m_j^2 < 0$. The series expansion of the temporal Regge equation (7) yields equation (17), which has a truncation error that is first-order in the timestep ∆t and second-order in the spatial discretization scale 1/n. The spatial Regge equations (8)-(10) similarly reduce to (18)-(20) to leading order in the continuum limit. The truncation error in these equations is second order in both the spatial and temporal discretization scales. The leading-order terms in the continuous time and space Regge equations (17)-(20) are identical to the Kasner-Einstein equations (1)-(4), so we expect solutions of the discrete lattice equations to approach the continuum solutions as length scales in the lattice are reduced. It is clear from the preceding equations that the truncation error for the Regge equations, when viewed as approximations to the Kasner-Einstein equations, is second order in the spatial discretization scale 1/n. The truncation error is also second order in ∆t for the spatial Regge equations (18)-(20). The truncation error in (17) implies that the Regge initial value equation (7) is only a first-order approximation to its continuum counterpart.
This conflicts with the calculations in section 4, where the truncation error in the Regge constraint $R_t$ was found to converge to zero as the second power of ∆t (see figure 2d). To understand this contradiction, we rewrite the coefficient of ∆t in the expansion (17) as

$3\dot x\dot y\dot z + \dot y\,(z\ddot x + x\ddot z) + \dot z\,(y\ddot x + x\ddot y) + \dot x\,(z\ddot y + y\ddot z) = \dot x\,(\dot y\dot z + z\ddot y + y\ddot z) + \dot y\,(\dot x\dot z + z\ddot x + x\ddot z) + \dot z\,(\dot x\dot y + y\ddot x + x\ddot y) = 0,$

where the final equality follows from substitution of (18)-(20). Thus the truncation error in the Regge constraint (17) is formally of order ∆t, but the coefficient of that term in the expansion is a linear combination of the spatial Kasner-Einstein equations. This should be zero to leading order for any solution of the spatial Regge equations. To clarify this argument, consider again the simulation in section 4. Once the initial data is set, the numerical evolution is achieved by the repeated solution of the spatial Regge equations (8)-(10). These are shown above to be second-order accurate approximations of the Einstein equations in both space and time. We expect from (17) that the leading-order error in the Regge constraint is first order in ∆t. However, the coefficient of that error term is a linear combination of the Regge equations we are solving (to leading order), and thus the coefficient of ∆t is itself zero to second order in ∆t. Thus the effective leading-order truncation in the Regge constraint equation (17) is of second order in both space and time. This is consistent with the numerical experiments in section 4.

Discussion In the preceding sections we re-examined one of the most highly symmetric applications of Regge calculus to be found in the literature. The primary goal of this study was to examine the convergence properties of Regge calculus, and we have shown that for the discrete Kasner spacetime the equations of Regge calculus reduce identically to the corresponding Einstein equations in the continuum limit. The discrete lattice used by Lewis, outlined above, was specifically designed to guarantee a one-to-one correspondence between the degrees of freedom in the Regge lattice and the metric components in the continuum solution [7]. Despite this, Lewis needed to average the Regge evolution equations in order to obtain consistency in the continuum limit. The averaging process was chosen to obtain the first-order Regge equation (7), but Lewis was still unable to derive the remaining spatial equations (8)-(10). We showed in section 4 that once all lattice curvature elements are included in the calculations, the Regge equations consist of one constraint and three evolution equations. In section 6 we showed that these lattice equations approach the full set of Kasner-Einstein equations in the continuum limit. It was also shown in section 6 that the discrete Regge equations are second-order accurate approximations to the Einstein equations for the Kasner cosmology in the limit of very fine discretization. This convergence rate is in agreement with many previous numerical simulations, in particular the (3+1)-dimensional Regge calculus models of the Kasner cosmology that utilized general simplicial lattices [10,19]. Unlike the current analysis, these simulations did not enforce the homogeneity and anisotropy of the Kasner model throughout the evolution, yet displayed second-order convergence to the continuum solution. These numerical simulations considered the convergence of solutions, rather than equations, and so are complementary to our analysis.
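The convergence claim above lends itself to a simple empirical check: given mean constraint residuals measured at several timesteps, a log-log fit recovers the observed order. The snippet below is a generic sketch of that procedure; the data in the self-test are synthetic, generated from an exactly second-order model, not results from the paper.

```python
import numpy as np

def observed_order(dts, residuals):
    """Estimate convergence order p from residuals R(dt) ~ C * dt**p
    via the slope of log R against log dt."""
    slope, _ = np.polyfit(np.log(dts), np.log(residuals), 1)
    return slope

# Self-test with synthetic residuals from an exactly second-order model.
dts = np.array([0.04, 0.02, 0.01, 0.005])
print(observed_order(dts, 3.0 * dts**2))  # -> 2.0
```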
In general applications of Regge calculus that utilize a simplicial lattice there will be many more Regge equations (one per lattice edge) than Einstein equations (10 per spacetime event). The direct comparison between individual Regge and continuum equations considered in this paper would not be possible, or even desirable. We expect that an appropriate average of the Regge equations would still correspond to the Einstein equations in the continuum limit [17,18,20], and several such averaging schemes have been suggested. Brewin considered a finite-element integration of the weak-field Einstein equations over a simplicial lattice (discussed in [18]), and suggested a vertex-based equivalent of the vacuum Einstein equations in which $\Delta x^\mu_j$ is a vector oriented along edge $L_j$ in a coordinate system based at vertex v; the outer summation is over all triangles which meet at v, and the inner sum is over the edges on each triangle. These are essentially linear combinations of Regge equations, together with boundary terms [18]. Regardless of how one compares the continuum and discrete equations, it is ultimately the solutions that are of interest. The application of Regge calculus to the Kasner cosmology discussed in this paper demonstrates yet again that Regge calculus is a consistent, second-order accurate discretization of general relativity, providing further support for the use of lattice gravity in numerical relativity and discrete quantum gravity.
2012-08-07T19:57:27.000Z
2012-08-07T00:00:00.000
{ "year": 2013, "sha1": "05ed23c055023e265821d95204ba032750dd76be", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1208.1502", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "05ed23c055023e265821d95204ba032750dd76be", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
119271889
pes2o/s2orc
v3-fos-license
Upper critical field of p-wave ferromagnetic superconductors with orthorhombic symmetry We extended the Scharnberg-Klemm theory of $H_{c2}(T)$ in p-wave superconductors with broken symmetry to cases of partially broken symmetry in an orthorhombic crystal, as is appropriate for the more exotic ferromagnetic superconductor UCoGe in strong magnetic fields. For some partially broken symmetry cases, $H_{c2}(T)$ can mimic upward curvature in two crystal-axis directions, and reasonably good fits to some of the UCoGe data are obtained.

Introduction There has long been an interest in the possibility of superconductivity in which the paired electrons have an order parameter consisting of a triplet spin configuration and the corresponding odd orbital symmetry [1-8]. The simplest odd orbital symmetry has the p-wave form [1]. In a crystal with non-cubic structure, there can be a variety of different p-wave states [1-5]. Depending upon the temperature T, magnetic field H, and pressure P, there can be phases corresponding to different triplet spin states [6-8]. One of the easiest ways to characterize the p-wave states is by measurements of the T dependence of the upper critical field $H_{c2}(T)$ [1,2]. However, when multiple phases are present in the same crystal, as in UPt₃, a proper analysis requires a variety of experimental results [6,7]. Recently, a new class of ferromagnetic superconductors has been of great interest. Presently this class consists of UGe₂ [9], UIr [10], URhGe [11], and UCoGe [12], which except for UIr have orthorhombic crystal structures. For URhGe, the superconductivity arises within the ferromagnetic phase. That is also true for UCoGe at ambient pressure, but when sufficient pressure is applied, the ferromagnetic phase appears to vanish, leaving the superconducting phase without any obvious additional ferromagnetism [13,14]. In the cases of UGe₂ and UIr, applying pressure within the ferromagnetic phase induces the superconductivity [9,10]. In addition, polarized neutron studies have been interpreted as providing evidence for a field-induced ferrimagnetic state in UCoGe, with local moments of different magnitudes in opposite directions on the U and Co sites [15]. For a ferromagnetic superconductor with orthorhombic symmetry, the possible order parameter symmetries were given by Mineev [16]. Hardy and Huxley measured $H_{c2}(T)$ of URhGe at ambient pressure in all three crystal-axis directions [17]. Using only one fitting parameter for each field direction, they found that the Scharnberg-Klemm theory fit their data quantitatively [17], assuming the polar state with completely broken symmetry (CBS) [2]. This remarkable fit for the low-field regime of the superconducting state in URhGe did not require any inclusion of the ferromagnetism into the theory, as the only apparent effect of the ferromagnetism was to give rise to a demagnetization-effect jump in $H_{c2}$ at the superconducting transition temperature $T_c$. In addition, $H_{c2}(0)$ exceeded the Pauli limit for all field directions measured, providing strong evidence of a parallel-spin pair state. Upon the discovery of magnetic-field-induced reentrant superconductivity in URhGe [18], much interest turned to the possible source of the high-field superconducting phase. Then, superconductivity was discovered in UCoGe [12], and $H_{c2}(T)$ was measured for all three crystal-axis directions [19]; all of the curves exhibited upward curvature unrelated to dimensional-crossover effects [20].
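Since the Pauli-limit comparison recurs below, here is a trivial sketch of that check using the weak-coupling Clogston-Chandrasekhar estimate $H_P \approx 1.84\,T_c$ (in tesla, with $T_c$ in kelvin); this is a standard rule of thumb, not a formula from this paper, and the numbers in the demo are illustrative only.

```python
def pauli_limit_tesla(Tc_kelvin):
    """Weak-coupling Clogston-Chandrasekhar (Pauli) limit, H_P ~ 1.84*Tc."""
    return 1.84 * Tc_kelvin

def exceeds_pauli_limit(Hc2_zero_tesla, Tc_kelvin):
    """True if Hc2(0) exceeds the Pauli limit, a standard indicator of
    parallel-spin (equal-spin) triplet pairing."""
    return Hc2_zero_tesla > pauli_limit_tesla(Tc_kelvin)

# Illustrative numbers only (not data from this paper):
print(pauli_limit_tesla(0.6))          # ~1.10 T for Tc = 0.6 K
print(exceeds_pauli_limit(5.0, 0.6))   # True
```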
Subsequently, a highly anomalous S-shaped $H_{c2}(T)$ curve was observed for $T < 0.65\,T_c$ with H||b [21]. Since M||ĉ at low fields, this change in the M direction only occurred in very pure, well-aligned samples. This behavior may also have something to do with a reentrant phase, one that is close in field strength to the low-field phase [22]. The first attempts to describe upward $H_{c2}(T)$ curvature in all crystal-axis directions were based either upon ferromagnetic fluctuations [23] or upon a crossover from one parallel-spin state to another [24]. Meanwhile, a mean-field theory of the complementary effects of itinerant ferromagnetism and parallel-spin superconductivity was developed [25,26]. To date, the field dependence of this mutual enhancement has not been investigated. Here, we study the case in which the p-wave pairing interaction strength is anisotropic but finite in all crystal directions. Since $H_{c2}$ is essentially isotropic in the ab plane for samples of UCoGe of medium purity [19], we studied the partially broken symmetry (PBS) state as a function of the pairing interaction anisotropy. This can give a kink in $H_{c2}(T)$ in at least one field direction [27].

Upper critical field anisotropy of the PBS state We assume a p-wave pairing interaction as in Eq. (1) of Ref. [2], where we take $V_3 > V_2 \ge V_1$. Then, for H||ê₃, the polar and two axial PBS states are obtained from the gap equations for the amplitudes $\langle n|\Delta_{1,0}\rangle$ and $\langle n|\Delta_{1,\pm1}\rangle$, where n labels the Landau levels, N(0) is the single-spin density of states, and we set $\hbar = c = k_B = 1$. For the field along ê₁ or ê₂, one rotates the axes by π/2 about ê₂ or ê₁, respectively, and lets $m_{12}$ be replaced by $m_{23}$ or $m_{13}$, respectively. Since the low-field $H_{c2}(T)$ data of Huy et al. for UCoGe suggest that it has uniaxial symmetry, with $H_{c2}\|\hat a \approx H_{c2}\|\hat b$, in the following we will restrict our consideration to the $V_1 = V_2$ case [19]. In order to fit the Aoki et al. data with the S-shaped $H_{c2,\|b}(T)$ curve, it is necessary to use the full orthorhombic anisotropy in Eqs. (1)-(4) and to include the spontaneous and field-dependent magnetization. To do so for the two axial states, one may obtain a recursion relation for either one of the amplitudes, $\langle n|\Delta_{1,\pm1}\rangle$, by eliminating the other in Eq. (2), and then solving the recursion relation in terms of a continued fraction. In Fig. 1(a), we plotted $h_{c2,\|c} = 2eH_{c2}(m/m_{12})v_F^2/(2\pi T_c^c)^2$ versus $t = T/T_c^c$ for the polar state and for a variety of PBS states with $-0.25 \le \delta < 0$, where $\delta = \ln(T_c^{ab}/T_c^c)$. Note that these PBS states all have slight upward curvature, but since $T_c^c > T_c^{ab}$, the polar state dominates for all $T \le T_c^c$. In Fig. 1(b), we plotted $h_{c2,\perp c} = 2eH_{c2}(m/\sqrt{m_{12}m_3})v_F^2/(2\pi T_c^c)^2$ versus $t = T/T_c^c$ for the CBS state and for various PBS states with $-0.25 \le \delta < 0$. In this case, the CBS state dominates near $T_c^c$, but there is a crossover to a PBS state for $-0.179 \le \delta < 0$, resulting in a single kink in $H_{c2,\perp c}(T)$.

Fits to the Huy et al. UCoGe $H_{c2}(T)$ data As a starting point, to see if there is any possibility of fitting the least anomalous region of the $H_{c2}(T)$ curves obtained for UCoGe, we assume uniaxial anisotropy and fit the data of Huy et al. [19]. In Fig. 2(a), the best fit to the H||ĉ data is for the polar state, as shown. In Fig. 2(b), the best fits to the H||â and H||b data are both for δ = -0.07, which show a distinct crossover from the CBS to the PBS state.
This δ value is also consistent with the polar-state best fit to the $H_{c2,\|c}$ data in Fig. 2(a). We remark that if the spontaneous magnetization along the c-axis direction were included, the fit to the data in Fig. 2(a) would be altered.

Conclusions We found that it is possible to fit the upward curvature of the $H_{c2}(T)$ data for H||â and H||b from medium-purity UCoGe using a crossover from the completely-broken-symmetry polar state to a PBS state. However, in the model studied, it is not possible to fit the observed upward curvature of $H_{c2}(T)$ for H||ĉ, as the polar state alone provides the best fit to the data. At the very least, the spontaneous and field-dependent magnetization should be included in future fits, using an anisotropic itinerant ferromagnetic superconductor model similar to that previously studied [25,26].

Acknowledgments The authors are grateful to Prof. A. de Visser for providing the data of Huy et al. [19]. QG acknowledges the Specialized Research Fund for the Doctoral Program of Higher Education of China (no. 20100006110021).
2019-04-13T00:00:01.813Z
2011-07-20T00:00:00.000
{ "year": 2011, "sha1": "72ea5b214f46bba37a310b01918b619e962023e5", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/400/2/022055", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "56ea1ac3e92fbd71eacae4fbbadc7b97cef04445", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
257704478
pes2o/s2orc
v3-fos-license
Amidoamine Oxide Surfactants as Low-Molecular-Weight Hydrogelators: Effect of Methylene Chain Length on Aggregate Structure and Rheological Behavior Rheology control is an important issue in many industrial products such as cosmetics and paints. Recently, low-molecular-weight compounds have attracted considerable attention as thickeners/gelators for various solvents; however, there is still a significant need for molecular design guidelines for industrial applications. Amidoamine oxides (AAOs), which are long-chain alkylamine oxides with three amide groups, are surfactants that act as hydrogelators. Here, we show the relationship between the lengths of the methylene chains at four different locations in AAOs, the aggregate structure, the gelation temperature Tgel, and the viscoelasticity of the formed hydrogels. As seen from the results of electron microscopic observations, the aggregate structure (ribbon-like or rod-like) can be controlled by changing the length of the methylene chain in the hydrophobic part, the length of the methylene chain between the amide and amine oxide groups, and the lengths of the methylene chains between amide groups. Furthermore, hydrogels consisting of rod-like aggregates showed significantly higher viscoelasticity than those consisting of ribbon-like aggregates. In other words, it was shown that the gel viscoelasticity can be controlled by changing the methylene chain lengths at four different locations in the AAO.

Introduction The control of rheological behavior is critical for a variety of industrial products, such as cosmetics, toiletries, and paints, because rheological properties are closely related to product characteristics, dispersion and emulsion stability, and the feel of the product. Although polymer materials are commonly used as thickeners/gelators, they have significant disadvantages: their molecular weight is difficult to control, and once the polymer is dissolved, the viscosity of the gel does not decrease even at high temperatures, resulting in poor operability. In contrast to conventional polymer gels, supramolecular gels have attracted attention in recent years. Supramolecular gels are formed by the self-assembly of low-molecular-weight gelators (LMWGs), which form fibrous aggregates and, through them, 3D network structures. Compared to polymer gelators, LMWGs are easier to synthesize, and above the gelation temperature they do not significantly increase the viscosity, making them easier to handle. Another advantage of LMWGs is that their gelation temperature can be controlled by their chemical structure. At the same time, since self-assembly is based on intermolecular interactions, such as solvophobic interactions, van der Waals interactions, hydrogen bonds, π-π interactions, metal coordination, and host-guest interactions, supramolecular gels (LMWG gels) have lower strength than polymer gels, in which monomers are covalently bonded. Furthermore, because the thickening and gelation performance of LMWGs is extremely sensitive to their chemical structure, few guidelines for the design of gelators for industrial applications have been reported; thus, the development of LMWGs has required time and resources. We have previously reported that amidoamine oxide surfactants (AAOs), which are long-chain alkylamine oxides with multiple amide groups, gelate water and aqueous salt solutions [29-31]. Long-chain alkylamine oxides are general-purpose surfactants used in kitchen detergents.
Multiple amide groups are introduced for the formation of hydrogen bonds between neighboring AAO molecules. Although AAOs are achiral molecules, they have sufficient thickening and gelation ability in polar solvents. AAOs also possess notable industrial advantages: they are easy to synthesize and purify, and their production cost is low. In addition, the gelation temperatures and rheological properties of AAOs may be adjusted through slight changes in molecular structure, in particular, in the length of the alkyl chain in the hydrophobic part, the number and arrangement of amide groups, and the length of the methylene chain between the amide and amine oxide groups. The objective of this study was to clarify the relationship between the chemical structure of AAOs, their aggregate structure, and rheological behavior, and to formulate guidelines for the design of LMWGs. In this paper, we report the results of our investigation into the effects of the lengths of the methylene chains in AAOs on the aggregate structure and rheological properties of their hydrogels.

Results and Discussion

As shown in Figure 1, AAOs have methylene chains at four locations: in the hydrophobic part, between amide groups (between nitrogen atoms and between carbonyl groups), and between amide and amine oxide groups. The effects of the lengths of these methylene chains (k, l, m, and n, respectively) on the aggregate structure, Tgel, and rheological behavior were investigated.
Figure 1. Chemical structure of alkyl amidoamine oxide. k is the length of the methylene chain of the hydrophobic part, l is the length of the methylene chain between nitrogen atoms of the amide groups, m is the length of the methylene chain between carbonyl groups of the amide groups, and n is the length of the methylene chain between the amide and amine oxide groups. AAO is denoted as k-l-m-n using the lengths of the methylene chains in these four places.

Figure 2 shows some examples of the appearance of AAO hydrogels. Depending on the chemical structure, the gel can be transparent or cloudy.
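The k-l-m-n shorthand above recurs throughout the paper (e.g., 13-2-2-6), so a small helper for parsing and validating it can be handy when tabulating homologue series. The sketch below is a minimal illustration with invented names; it is not code from the study.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AAO:
    """An amidoamine oxide identified by its four methylene chain lengths."""
    k: int  # hydrophobic part
    l: int  # between nitrogen atoms of the amide groups
    m: int  # between carbonyl groups of the amide groups
    n: int  # between the amide and amine oxide groups

    @classmethod
    def parse(cls, code: str) -> "AAO":
        """Parse the paper's k-l-m-n shorthand, e.g. '13-2-2-6'."""
        parts = [int(p) for p in code.split("-")]
        if len(parts) != 4 or any(p < 1 for p in parts):
            raise ValueError(f"not a valid k-l-m-n code: {code!r}")
        return cls(*parts)

    def __str__(self) -> str:
        return f"{self.k}-{self.l}-{self.m}-{self.n}"

print(AAO.parse("13-2-2-6"))  # prints: 13-2-2-6
```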
Gelation Temperature (Tgel)

The typical cryo-SEM images of aqueous AAO solutions sampled above and below Tgel are shown in Figure 3. At room temperature, the aqueous AAO solution is highly viscous and does not flow, and the obtained cryo-SEM images (Figure 3a,b) clearly show the presence of aggregates. An aqueous solution of 9-2-2-6 contains thin and straight rod-like structures (Figure 3a), whereas the solution of 11-2-2-6 contains twisted ribbon-like structures (Figure 3b) that are wider than those in 9-2-2-6 [30,31]. Figure 3c shows a quick-frozen aqueous solution of 9-2-2-6 heated to about 60 °C (higher than Tgel), and Figure 3d shows a quick-frozen aqueous solution of 11-2-2-6 heated to about 80 °C (higher than Tgel). We assumed that these are snapshots of aggregates formed at the respective temperatures before freezing. Above Tgel, the aqueous solutions of AAOs show low viscosity, almost the same as that of water without AAOs. Despite the low viscosity, aggregates are observed in these solutions at temperatures above Tgel, although in considerably smaller quantities than below Tgel. The AAO concentration C_D of 50 mM is clearly above the critical micelle concentration; thus, the formation of aggregates above Tgel is reasonable. However, as the viscosity of the aqueous solution is almost the same as that of the solvent at temperatures above Tgel, the concentration of aggregates with large aggregation numbers is low. Thus, Tgel is considered to be the temperature at which "aggregates with an aggregation number sufficient to induce an increase in viscosity begin to form" through hydrogen bonding between adjacent AAO amide groups.
Effects of the Length of the Methylene Chain between the Amide and Amine Oxide Groups (n)

We previously reported the relationship between Tgel and the length of the methylene chain between the amide and amine oxide groups (n) in aqueous AAO solutions [30,31]. Briefly, Tgel increased with n for all methylene chain lengths in the hydrophobic part (k = 9, 11, and 13), and cryo-TEM observations showed the formation of thin and linear rod-like aggregates in the region where Tgel linearly increased with n [30,31]. The slope of the plot of Tgel against n (dTgel/dn) was 30, 20, and 15 for k = 9, 11, and 13, respectively, indicating that the effect of n on Tgel diminishes with increasing k, i.e., with the lengthening of the hydrophobic chain. This was attributed to the decrease in the curvature of the aggregates with increasing k. At the same time, the Tgel vs. n curves of both 11-2-2-6 and 13-2-2-6 significantly deviated upward from the straight line; notably, a flat ribbon-like structure was observed in these two samples by cryo-TEM [30]. The present cryo-SEM observations (Figure 3) also show an aggregate structure in good agreement with that observed using cryo-TEM. Quick-freeze replica TEM images also reveal thin linear aggregates in 9-2-2-6 (Figure 4a) and wide twisted structures in 11-2-2-6 (Figure 4b). Negative-staining TEM also shows rod-like aggregates in 9-2-2-6 (Figure 5a) and 13-2-2-4 (Figure 5c), and ribbon-like structures in 11-2-2-6 (Figure 5b) and 13-2-2-6 (Figure 5d). In summary, the shapes of the aggregates observed using cryo-TEM, cryo-SEM, quick-freeze replica TEM, and negative-staining TEM are in good agreement with each other. Generally, in negative-staining methods, there are concerns about structural changes in the aggregates caused by staining agents and the drying out of water. However, in this study, the aggregate structures obtained using negative-staining methods agree well with those observed by non-staining cryo-methods, indicating no effect of the staining agent and drying.
These results suggest that the relationship between Tgel and n accurately reflects the structure of the aggregates formed in water. The aggregate structure is closely related to n for the following reasons. When n is small, the amide groups, which are hydrogen bonding sites, are close to the aggregate surface, i.e., to water. In this case, water as a solvent prevents the formation of hydrogen bonds between neighboring molecules. The decrease in the curvature of aggregates due to the increase in n also affects the formation of hydrogen bonds. We previously reported the dependence of the cmc on n for N-lauroylaminoalkyl-N,N-dimethylamine oxide, a molecule consisting of dodecyldimethylamine oxide with one amide group and a methylene chain (chain length n) between the amide and amine oxide groups [32]. The cmc hardly changed at 2 ≤ n ≤ 4; however, at n ≥ 5, the cmc rapidly decreased with increasing n. This suggests that when n is small, the amide group and the methylene chain between the amide and amine oxide groups together form a polar group; at the same time, when n is sufficiently large, the methylene chain between the amide and amine oxide groups acts as a hydrophobic chain. This result also supports the relationship between n and the aggregate structure described above.

The differences in the aggregate structure at different n were expected to affect the rheology of the hydrogels. The storage modulus, G′, and loss modulus, G″, of 13-2-2-n (n = 3-6) hydrogels at 25 °C are shown as functions of angular frequency in Figure 6. Electron micrographs confirm that 13-2-2-4 and 13-2-2-5 form rod-like aggregates and 13-2-2-6 forms ribbon-like aggregates. All gels have G′ > G″ in the examined angular frequency region, indicating that they are in the gel state.
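The gel-state criterion used here (G′ > G″ over the whole examined frequency window) is easy to automate. Below is a minimal sketch with hypothetical sweep data; the actual analysis of the rheometer output may differ.

```python
import numpy as np

def is_gel(g_prime, g_double_prime):
    """Classify a sample as a gel if the storage modulus exceeds the loss
    modulus at every examined angular frequency (the criterion used in
    the text: G' > G'' over the whole sweep)."""
    return bool(np.all(np.asarray(g_prime) > np.asarray(g_double_prime)))

# Hypothetical frequency sweep for a rod-like-aggregate gel (moduli in Pa):
omega = np.logspace(-1, 2, 10)          # rad/s, for reference only
g_p  = 200.0 * np.ones_like(omega)      # nearly frequency-independent G'
g_pp = 20.0 * np.ones_like(omega)       # G'' well below G'
print(is_gel(g_p, g_pp))                # True -> gel state
```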
The 13-2-2-4 and 13-2-2-5 hydrogels containing rod-like aggregates show similar viscoelastic behavior. In addition, both G′ and G″ are almost two orders of magnitude greater for rod-like aggregates than for ribbon-like ones. In other words, hydrogels containing ribbon-like aggregates (more regular, or closer to crystalline) are less viscoelastic than those containing rod-like aggregates. The viscoelastic behavior of gels containing ribbon-like aggregates deserves further discussion. In gels containing ribbon-like aggregates, G′ is almost constant regardless of frequency, while G″ shows upturns in the curve. An idea presented for organogels may explain this result [15]. That is, given the crystal-like structure of ribbon-like aggregates, gels containing ribbon-like aggregates may form solid networks rather than physical cross-links.

Effects of the Length of the Methylene Chain in the Hydrophobic Part (k)

The effect of the length of the methylene chain in the hydrophobic part (k) on Tgel, aggregate structures, and viscoelasticity is discussed next. The value of k affects the hydrophobic interaction between AAO molecules in water. The relationship between Tgel and k is shown in Figure 7: Tgel increases with k in all cases [31]. The slopes (dTgel/dk) for k-2-2-3 and k-3-2-6, which formed rod-like aggregates, are 7.0 and 6.0, respectively. At the same time, for k-2-2-6 (11 ≤ k ≤ 13), which formed ribbon-like aggregates, the slope is 17, which is considerably higher than that for the rod-like aggregates. Ribbon-like aggregates have a lower curvature and thus a shorter distance between neighboring molecules than rod-like aggregates, suggesting a greater contribution of intermolecular hydrogen bonds to the stabilization of the former aggregates. At k = 13, the Tgel values for 13-2-2-6, 13-3-2-6, and 13-2-2-3 are 77, 45, and 17 °C, respectively. Although 13-3-2-6 and 13-2-2-3 both form similar rod-like aggregates, their Tgel values differ by 28 °C. In other words, even aggregates with similar rod-like structures have significantly different temperature stabilities owing to differences in the lengths of methylene chains other than the hydrophobic chain.

Next, the relationships of G′ and G″ with angular frequency for k-2-2-6 (k = 9, 11, and 13) were investigated, and the results are shown in Figure 8. In all the examined regions, G′ is larger than G″ in the gelled state. Both G′ and G″ are larger when k is small.
As mentioned above, an aqueous solution of 9-2-2-6 was confirmed to have rod-like aggregates, whereas the aqueous solutions of 11-2-2-6 and 13-2-2-6 have ribbon-like aggregates. Ribbon-like aggregates have a more regular arrangement of molecules, such as in solid or crystalline structures, whereas rod-like aggregates have a larger curvature and greater intermolecular distances than ribbon-like aggregates. Hydrogels with ribbon-like aggregates are less viscous and less elastic than gels with rod-like aggregates. In the present study, similar results were obtained for samples with different n, indicating that differences in the aggregate structure affect the viscoelasticity of the hydrogels.

Effects of the Lengths of Methylene Chains between Amide Groups (l and m)

The aggregate structure and rheological behavior were examined for different lengths of the methylene chains between the amide groups (l and m). The values of Tgel are listed in Table 1. Interestingly, when either l or m is odd, Tgel is much lower than when both are even [31]. This odd-even effect of methylene chain length was not observed for the hydrophobic part or for the length of the methylene chain between the amide and amine oxide groups, suggesting that the role of the methylene chains between the amide groups differs from that of the other methylene chains. The negative-staining TEM images of the formed aggregates are shown in Figure 9. There are significant differences in the aggregate structures, with rod-like or bundle-like structures (Figure 9a,c) observed for odd lengths of the methylene chain between amide groups and wide, flat ribbon-like structures (Figure 9b) observed for methylene chains of even length. The investigation of the rheological properties of these samples (Figure 10) shows that both G′ and G″ are larger when either l or m is odd than when both l and m are even. In other words, hydrogels with rod-like or bundle-like aggregates show higher viscoelasticity than those with ribbon-like aggregates.
The odd-even effect of the lengths of methylene chains between amide groups on Tgel was attributed to the number of hydrogen bonding points formed between amide groups of the neighboring molecules. As reported by Sumiyoshi et al. [33,34], the numbers of hydrogen bonds formed between adjacent molecules are different for odd and even lengths of the methylene chains between amide groups, as shown in Figure 11. This effect is similar to the parallel-antiparallel model for the β-sheet of polypeptides such as proteins; it also explains why Tgel increases, more lamellar and flat aggregate structures are formed, and the curvature of aggregates decreases (going from rod-like to ribbon-like aggregates) with an increasing number of hydrogen bonds. At the same time, the bundle structure, which is an aggregation of thin rod-like aggregates, shows better viscoelasticity than the ribbon-like structure. Electron micrographs show that the widths of ribbon-like aggregates are tens of nm, whereas the diameters of the rod-like aggregates are a few nm; furthermore, the numbers of aggregates are significantly different. In other words, at the same AAO concentration, the number of rod-like aggregates is much greater than that of ribbon-like aggregates. Therefore, the rod-like aggregates have a larger number of mutual entanglement points and, as a result, exhibit higher viscoelasticity.

Figure 11. Schematic of the odd-even effect of hydrogen bonding formation (k-2-2-6: ribbon-like aggregates; k-3-2-6: rod-like aggregates; k-2-3-6: rod-like aggregates). Blue circles indicate hydrogen bonds between neighboring AAO molecules. Red circles indicate amide groups without hydrogen bonds.
Conclusions

In this paper, we reported the relationship between the lengths of methylene chains at four different locations in AAO and the aggregate structure, Tgel, and viscoelasticity of the hydrogel. A significant odd-even effect on Tgel was observed for the lengths of the methylene chains between the amide groups (l and m). At the same time, Tgel monotonically increased with the lengths of the methylene chains in the hydrophobic part (k) and between the amide and amine oxide groups (n), and no odd-even effect was observed for k and n. This difference suggests that each methylene chain plays a different role in aggregate formation. Specifically, k and n determine the curvature of the aggregates, whereas l and m directly affect the number of intermolecular hydrogen bonds. The aggregate structures were observed using various microscopic techniques, including cryo-TEM, cryo-SEM, quick-freeze replica TEM, and negative-staining TEM. Our results confirmed that the aggregate structure was almost the same in this system and that the staining agent hardly changed the aggregate structure. Furthermore, we found that ribbon-like aggregates are formed only when all the following conditions are met: k ≥ 11, n ≥ 6, and l and m are both even. If even one of these conditions is not met, the aggregates have a rod-like structure. At the same time, hydrogels with rod-like aggregates showed higher viscoelasticity than those with ribbon-like aggregates, and a strong correlation was observed between the aggregate structure and viscoelasticity. In other words, we succeeded in determining the parameters of the chemical structure of the AAO that control the Tgel and rheological behavior. There is a high demand for gelators that can be synthesized cost-effectively for industrial applications. The molecular design guidelines obtained in this study can be applied not only to hydrogelators but also to organogelators.
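The structure-property rule summarized above lends itself to a one-line predictor. The sketch below encodes that rule as stated (ribbon-like only if k ≥ 11, n ≥ 6, and l and m both even); the function is our own illustration, not part of the study.

```python
def aggregate_shape(k: int, l: int, m: int, n: int) -> str:
    """Predict the aggregate shape of an AAO hydrogel from the four
    methylene chain lengths, per the design rule in the conclusions:
    ribbon-like only if k >= 11, n >= 6, and l and m are both even."""
    if k >= 11 and n >= 6 and l % 2 == 0 and m % 2 == 0:
        return "ribbon-like"
    return "rod-like"

# Examples consistent with the paper's observations:
print(aggregate_shape(9, 2, 2, 6))    # rod-like    (9-2-2-6)
print(aggregate_shape(11, 2, 2, 6))   # ribbon-like (11-2-2-6)
print(aggregate_shape(13, 2, 2, 4))   # rod-like    (13-2-2-4, n < 6)
print(aggregate_shape(13, 3, 2, 6))   # rod-like    (13-3-2-6, l odd)
```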
Synthesis of [(N-acylaminoalkyl)succinamoylaminohexyl]dimethylamine

The AAO concentrations of all aqueous solutions were 50 mM. In addition, no pH adjustments were performed.

Characterization

1H NMR spectra were recorded with a JEOL ECZ400 (400 MHz) spectrometer (Tokyo, Japan); these spectra are shown in Figures S1-S10. Accurate mass spectra were acquired using an X500R mass spectrometer (SCIEX, Framingham, MA, USA) running in positive-ion electrospray mode. The ionspray voltage was 5500 V. The nebulizer and heater gas were set at 60 psi and 40 psi, respectively. The resolving power of the mass spectrometer was over 20,000. Each 0.1 µg/mL methanolic solution was infused at a flow rate of 10 µL/min using a syringe pump, YSP-201 (YMC, Kyoto, Japan). Mass spectra in the range of m/z 100 to 1100 were acquired for 1 min and averaged. ESI mass spectra are shown in Figure S11.

Viscosity Measurement

Using the tuning fork vibro viscometer SV10A (A&D, Tokyo, Japan), the aqueous AAO solution was first heated to the temperature at which the solution viscosity was comparable to that of the solvent; then, the temperature and viscosity were measured simultaneously during slow cooling. The temperature at which the viscosity rapidly increased with decreasing temperature was defined as the gelation temperature Tgel.

Rheological Measurement

The viscoelasticity of AAO hydrogels was measured using an MCR702 rheometer (Anton Paar Japan, Tokyo, Japan) with a 50 mm diameter parallel-plate geometry. A Peltier system was employed for temperature control, and a solvent trap was used to reduce evaporation. After the temperature of the aqueous AAO sample was brought sufficiently higher than Tgel, the sample was introduced into the apparatus. For every sample, frequency and amplitude sweeps were performed to determine the linear viscoelastic regime. Amplitude sweeps were performed at a fixed frequency of 1 Hz. All rheological measurements were conducted at 25 °C.

Quick-Freezing of Surfactant Aqueous Solutions

Surfactant aqueous solutions were quick-frozen via the metal-contact method using a quick-freeze unit (Polaron E7200, Watford, UK). The surface of a pure copper block was mirror-finished beforehand for optimal thermal conductivity. A drop of each sample with a volume of several microliters was quick-frozen via contact with the surface of the block pre-cooled with liquid helium. The frozen samples were stored in liquid nitrogen for subsequent cryo-SEM and freeze-fracture replication.

Cryo-Scanning Electron Microscopy (Cryo-SEM)

Cryo-SEM was performed using a Zeiss LEO 1530 (Oberkochen, Germany) equipped with a cryo-preparation system (Gatan Alto 2500, Pleasanton, CA, USA). The quick-frozen sample was mounted on the sample stage in the cryo-preparation system under vacuum at about 10⁻⁴ Pa, fractured at −100 °C, sputter-coated with platinum, and transferred to the sample stage for observation at 20 kV.

Freeze-Fracture Replica Transmission Electron Microscopy (TEM)

Freeze-replica films of surfactant aqueous solutions were prepared using a freeze-replica apparatus (BAF 400D, Balzers, Liechtenstein). The quick-frozen sample was mounted on the sample stage in the apparatus at −100 °C under vacuum at about 10⁻⁵ Pa, fractured, and coated with a layer of platinum (thickness of about 6.5 nm) by evaporation at an angle of 25° while the sample stage was rotating horizontally (low-angle rotary shadowing). The sample was further coated with a carbon protective film (thickness of about 25 nm) by evaporation from an angle of 90°.
The replica films were sequentially washed with a kitchen bleach solution, pure water, dilute sulfuric acid, and pure water, and were mounted on a TEM copper grid (Veco, 150 mesh, Eerbeek, The Netherlands) in air at room temperature. The specimens were observed using a Philips CM200UT (Eindhoven, The Netherlands) operating at 200 kV.

Negative-Staining TEM

Carbon-coated copper grids (GLF-C10, Okenshoji Co., Ltd., Tokyo, Japan) were used for TEM. A drop of hydrogel sample was placed on a grid and stained with an aqueous solution containing uranyl acetate (2.0 wt%). Sample-loaded grids were vacuum-dried and observed using a JEM-2100 transmission electron microscope (JEOL Ltd., Tokyo, Japan) at an operating voltage of 100 kV.
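The viscometric definition of Tgel given in the methods (the temperature at which viscosity rises sharply during slow cooling) can be automated along the following lines. This is a minimal sketch with a hypothetical threshold criterion and synthetic data; the actual determination may use different smoothing or thresholds.

```python
import numpy as np

def gelation_temperature(temps_c, viscosities, solvent_viscosity, factor=5.0):
    """Return the first temperature (scanning from high to low T) at which
    the viscosity exceeds `factor` times the solvent viscosity, taken here
    as a proxy for the rapid viscosity rise that defines Tgel."""
    order = np.argsort(temps_c)[::-1]  # scan from high to low temperature
    for t, eta in zip(np.asarray(temps_c)[order], np.asarray(viscosities)[order]):
        if eta > factor * solvent_viscosity:
            return float(t)
    return None  # no gelation detected within the scanned range

# Synthetic cooling curve (viscosity in mPa*s), jumping near 45 C:
T = np.arange(80, 20, -1.0)
eta = np.where(T > 45, 1.0, 500.0)
print(gelation_temperature(T, eta, solvent_viscosity=0.9))  # -> 45.0
```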
2023-03-24T15:21:35.020Z
2023-03-01T00:00:00.000
{ "year": 2023, "sha1": "ac67b56f6fe99789b0fdc027c4004235eb0996d1", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/gels9030261", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "72551260cc333bd3b0854a155d63c7ceb0d8e10b", "s2fieldsofstudy": [ "Materials Science", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
78093263
pes2o/s2orc
v3-fos-license
Model to improve specificity for identification of clinically-relevant expanded T cells in peripheral blood

Current methods to quantify T-cell clonal expansion only account for variance due to random sampling from a highly diverse repertoire space. We propose a beta-binomial model to incorporate time-dependent variance into the assessment of differentially abundant T-cell clones, identified by unique T-cell receptor (TCR) β-chain rearrangements, and show that this model improves specificity for detecting clinically relevant clonal expansion. Using blood samples from ten healthy donors, we modeled the variance of T-cell clones within each subject over time and calibrated the dispersion parameters of the beta distribution to fit this variance. As a validation, we compared pre- versus post-treatment blood samples from urothelial cancer patients treated with atezolizumab, where clonal expansion (quantified by the earlier binomial model) was previously reported to correlate with benefit. The beta-binomial model significantly reduced the false-positive rate for detecting differentially abundant clones over time compared to the earlier binomial method. In the urothelial cancer cohort, the beta-binomial model enriched for tumor infiltrating lymphocytes among the clones detected as expanding in the peripheral blood in response to therapy compared to the binomial model and improved the overall correlation with clinical benefit. Incorporating time-dependent variance into the statistical framework for measuring differentially abundant T-cell clones improves the model's specificity for T cells that correlate more strongly with the disease and treatment setting of interest. Reducing background-level clonal expansion, therefore, improves the quality of clonal expansion as a biomarker for assessing the T-cell immune response and correlations with clinical measures.

Introduction

High-throughput next-generation sequencing of the T-cell receptor (TCR) repertoire, i.e., immunosequencing, enables precise molecular identification and tracking of tens to hundreds of thousands of T-cell clones in a single subject [1]. A hallmark of the adaptive immune system is the clonal expansion of activated T cells. With immunosequencing, clonally expanded T cells can be identified by comparing the frequency of each T-cell clone at one time point versus another. One challenge with immunosequencing data is developing a systematic framework to determine whether an increase in T-cell clone frequency meets the criteria for clonal expansion. Here, we describe a statistical framework that accounts for sampling and time-dependent repertoire variability to detect T-cell clones that are differentially abundant in an unbiased and quantitative manner. In previous work by DeWitt et al, detection of clonally expanded T-cell clones was shown to correlate with an immune response to the yellow fever vaccine [2]. This earlier method was based on a Fisher's exact test, which can also be implemented as a binomial test comparing two proportions. Although the binomial model only accounts for random sampling variance around clone frequency, clonal expansion detected using this method was still found to correlate with pharmacodynamic activity and clinical response across a wide range of studies in the immuno-oncology setting [3][4][5][6][7].
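For concreteness, the earlier approach amounts to a per-clone 2×2 test of template counts against sample totals. A minimal sketch, with hypothetical counts and helper names (not the published pipeline), could read:

```python
from scipy.stats import fisher_exact

def clone_pvalue(k_a, n_a, k_b, n_b):
    """Fisher's exact test against the null hypothesis that a clone's
    frequency is the same in Sample A and Sample B. The 2x2 table pits
    the clone's template counts against all remaining templates."""
    table = [[k_a, n_a - k_a],
             [k_b, n_b - k_b]]
    _, p = fisher_exact(table, alternative="two-sided")
    return p

# A clone at 10/200,000 templates pre-treatment vs 60/200,000 post-treatment:
print(clone_pvalue(10, 200_000, 60, 200_000))  # small p -> candidate expansion
```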
Given that the T-cell repertoire is a highly dynamic system that evolves over time, we hypothesized that incorporating time-dependent variability into the differential abundance assessment would improve specificity for clinically relevant clonal expansion by reducing the identification of T-cell clones whose frequencies are changing within the range of normal physiology. The approach presented here uses a beta-binomial model to incorporate time-dependent variance in addition to the previously captured sampling variance. We measured the variance in T-cell repertoires between technical replicates as well as blood samples drawn 2 and 4 weeks apart from ten healthy subjects. Variance between technical replicates did not contribute to the identification of differentially abundant clones, but time-dependent variance indeed affected this measure, resulting in tens of clones identified. We used the measured repertoire variance over a two-week interval from these healthy donors to calibrate the allowable range of time-dependent dispersion for the beta-binomial model. We then applied the calibrated beta-binomial model to a urothelial cancer cohort to measure T-cell clonal expansion in the blood after administration of an anti-PD-L1 immunotherapy. In these patients, we found that T-cell clones identified as expanded in blood by the beta-binomial model were more likely to also reside in the tumor microenvironment than clones identified with the binomial model. This enrichment for tumor infiltrating lymphocytes (TILs) among expanded T-cell clones in the peripheral repertoire also improved the overall correlation with clinical benefit.

Comparison of the binomial model in technical replicates and time-course blood samples

To characterize the performance of the binomial model (Eq 1) and illustrate the importance of accounting for time-dependent biological variance, we analyzed T-cell clone frequencies in technical replicates as well as samples collected at 2-week and 4-week intervals. Fig 1A shows clone frequencies in a pair of technical replicates, which were sequenced from two aliquots of the same gDNA pool. Application of the binomial model returned an average of 1.2 differentially abundant clones across six comparisons performed between four technical replicates, resulting in a false-positive rate of 2.6E-4 (Fig 1A). In contrast, an average of 16.4 differentially abundant clones at a false-positive rate of 0.0028 were identified over a 2-week interval (Fig 1B) and an average of 19 differentially abundant clones at a false-positive rate of 0.0031 were identified over a 4-week interval (Fig 1C). Furthermore, the false discovery rate was higher for clones above 0.1% using the binomial model due to increased power for detecting small changes in clone frequency.

Fig 1. Differential abundance analysis in technical replicates and samples collected over 2-week and 4-week intervals.
Scatter plots of T-cell clone frequencies (from the whole blood of a single donor) in technical replicates as well as samples collected every two weeks over a 4-week period, with differentially abundant clones annotated in orange or cyan as determined by the binomial model (A-C) or the beta-binomial model (D-F). For each time interval, T-cell clones with k_{i(A+B)} ≥ 5 were tested for differential abundance (dark grey, orange, and cyan) and the remaining clones (light grey) were excluded. Summary of the differential abundance analysis results comparing the performance of the binomial and beta-binomial models in detecting differentially abundant clones (G) and the overall false-positive rate (H).

In order to account for this time-dependent variability in T-cell clone frequency, we first characterized the observed variance in the T-cell counts of a given clone over a two-week interval across all 10 subjects, defined as |k_{iA} − k_{iB}|. A linear mixed-effects model on log10-transformed data found that the variance in T-cell counts across all subjects significantly increased with the total T-cell count, k_{i(A+B)} (p < 0.001; Fig 2A). The residual variance, calculated by subtracting the expected binomial variance from the observed variance, accordingly increased for larger values of k_{i(A+B)} and fit the modeled form v = (b · log(k_{i(A+B)}) + a)^2 (R^2 = 0.60, p << 0.001; Fig 2B). This approach permitted clone frequency and total T-cell count to define the beta probability density and ultimately the additional variance incorporated by the beta-binomial model in Eq 2. Consequently, normal biological time-dependent variance in T-cell clone frequencies could be accounted for when assessing statistical changes in clone frequency between two samples (Fig 1D-1F). Compared to the previous results with the binomial model, we found that the beta-binomial model significantly reduced the number of clones detected as differentially abundant as well as the subsequent false-positive rate (Wilcoxon rank-sum test, p < 0.05) across all time intervals (Fig 1G and 1H).

Characterization of statistical power

Statistical power to identify differentially abundant clones was estimated as a function of initial clone frequency and frequency fold-change using a dilution experiment in which repertoires from two different individuals were mixed at a range of specified ratios. These fixed ratios allow us to control the expected fold-change in clone frequency when comparing two different mixtures. Fig 3 shows the estimate of statistical power versus clone frequency and frequency fold-change, ranging from 2-fold up to 20-fold. As expected, we have greater statistical power to identify a clone as differentially abundant if its initial frequency is higher and/or the fold-change in its frequency between the two samples is greater.

Evidence for enrichment of TIL clones

We re-analyzed a previously published urothelial cancer cohort from Snyder et al (Complete/Partial Responders: n = 14; Stable/Progressive Disease: n = 22) to compare the detection of TILs expanded in the peripheral repertoire with the beta-binomial and binomial models [3]. T-cell clones identified as expanded in the blood by each model were annotated as TILs based on their presence in pre-treatment tumor samples.
As shown in Fig 4A, the beta-binomial model not only increases the statistical resolution between responders (CR/PR) and non-responders (SD/PD) over the binomial model (p = 0.01 vs p = 0.13) but also increases the proportion of expanded clones identified as TILs for patients responding to therapy (p = 0.08). The enhanced specificity for the peripheral expansion of TILs with the beta-binomial model is largely due to fewer clones being identified as expanded rather than an increase in TIL clones, with mean differences of approximately -40% and 0% relative to the binomial model, respectively (Fig 4B).

Discussion

In healthy individuals, we found that the number of T-cell clones detected as expanded increases as the time interval between Sample A and Sample B increases. This time-dependence highlights the importance of establishing a prior on typical biological variance within the T-cell repertoire, especially over the weekly and monthly time intervals commonly used for collecting clinical trial patient samples. We expect that the biological variance may further increase if much larger time intervals were used, e.g., months to years. By training a new beta-binomial model with data from healthy individuals, identification of differentially abundant T-cell clones now accounts for both random sampling from a diverse repertoire and normal time-dependent biological variability. In effect, the clonal expansion measured by the beta-binomial differential abundance model should display reduced biological noise and subsequently enrich for clones responding to the pharmacologic or pathologic setting-of-interest. In a cohort of urothelial cancer patients treated with atezolizumab, expansion of TILs in the peripheral repertoire was previously reported to correlate with RECIST assessment [3]. Other studies have similarly analyzed the overlap between TILs and the peripheral repertoire and found correlations between patient outcomes or drug activity and expansion of T-cell clones [6,[8][9][10]. The association of peripherally expanded TILs with treatment outcomes has identified them as putative therapeutic effectors and as valuable biomarkers of clinical efficacy. The re-analysis presented here of the urothelial cancer cohort demonstrated that the beta-binomial model, by accounting for biological variability over time, reduces the total number of clones identified as expanding without affecting the number of TIL clones detected. Consequently, this model enriches for TIL clones within the expanded T-cell population in the blood compared to the binomial model and strengthens the association between expanded clones and clinical benefit. This strengthened association with clinical outcomes suggests that the statistical framework presented here a) represents an improvement over existing methods to quantify clonal expansion and b) may be similarly applicable as a biomarker for assessing pharmacodynamic activity and patient response to therapy. However, tumor samples are not always available for sequencing. We, and others, have also found that assessment of clonal expansion from longitudinal blood samples alone can be leveraged for optimizing therapeutic dosing and timing regimens as well as the assessment of novel drug combinations [5,7,11]. In addition, companion diagnostics are now increasingly common to identify patients for which there is an a priori expectation of benefit.
Hence, early measures or predictors of therapeutic response, such as clonal expansion, have broad clinical value.

Training cohort

Whole blood samples collected from 10 healthy donors at three time points with 2-week intervals between collections were purchased from AllCells, LLC (Emeryville, CA). Among the 10 donors, there were 3 females and 7 males with ages ranging from 29 to 64 and an average age of 43. No immune events such as infections were documented for these donors over the course of collection. Genomic DNA was extracted at Adaptive Biotechnologies (Seattle, WA) for subsequent immunosequencing.

Immunosequencing

Immunosequencing of the CDR3 regions of human TCRβ chains was performed using the immunoSEQ Assay (Adaptive Biotechnologies, Seattle, WA). Extracted genomic DNA was amplified in a bias-controlled multiplex PCR, followed by high-throughput sequencing. Sequences were collapsed and filtered in order to identify and quantitate the absolute abundance of each unique TCRβ CDR3 region for further analysis as previously described [1].

Statistical method for identification of expanded T-cell clones in blood

DeWitt et al previously reported a method for identifying differentially abundant T-cell clones between two samples [2]. The method employed Fisher's exact test to compute a p-value for each T-cell clone, identified by a unique TCRβ rearrangement, against the null hypothesis that the T-cell clone frequency was the same in both Sample A and Sample B. In practice, this procedure can also be implemented in terms of the binomial distribution to estimate the probability of counting k_i templates of T-cell clone i, with clone frequency θ_i, among n total T cells in a sample according to Eq 1:

P(k_i | n, θ_i) = C(n, k_i) θ_i^{k_i} (1 − θ_i)^{n − k_i},   (Eq 1)

where C(n, k_i) denotes the binomial coefficient. Eq 1 permits testing against the null hypothesis that θ_A = θ_B = θ, where θ can be estimated from (k_{iA} + k_{iB}) / (n_A + n_B), with subscript A denoting Sample A and B denoting Sample B; in other words, this binomial implementation tests the null hypothesis that the frequency in Sample B, compared to the average frequency between the two samples, is within the binomial variance expected from sampling n T cells in Sample B. In Eq 1, variance in observed clone frequencies is expected to arise from sampling a diverse pool of T cells and is driven solely by the actual clone frequency and the number of T cells analyzed. To incorporate additional variance into the model, with the goal of modeling natural, time-dependent variation of T-cell clone frequencies, we incorporated the beta distribution as a prior for the binomial parameter θ in a beta-binomial model. The beta distribution yields a probability density function for each clone frequency, parameterized by two shape parameters β_1 and β_2. Thus, incorporating the beta probability density into Eq 1 permits additional variance around the clone frequency θ and estimates the probability of observing a given count in Eq 2 [12,13]:

P(k_i | n, β_1, β_2) = C(n, k_i) B(k_i + β_1, n − k_i + β_2) / B(β_1, β_2),   (Eq 2)

where B(·,·) denotes the beta function. In implementing the posterior distribution of Eq 2, we reparametrized the shape coefficients with mean frequency, μ, and variance, v, by β_1 = μv and β_2 = (1−μ)v [12]. Using this reparametrized implementation, we trained and modeled coefficients β_1 and β_2 as functions of the total template counts observed for a T-cell clone in Sample A and Sample B. The training method involves determining the mean frequency of all observations at a given total template count, k_{i(A+B)} = k_{iA} + k_{iB}, and modeling the variance as v = (b · log(k_{i(A+B)}) + a)^2, where k_{iA} is the template count in Sample A and k_{iB} is the template count in Sample B.
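Because v = (b · log(k) + a)^2 is linear in log(k) after taking a square root, the calibration of a and b reduces to an ordinary least-squares fit of sqrt(v) against log(k). The sketch below illustrates this training step on synthetic data; it is a simplification, not the released script (linked in the next paragraph).

```python
import numpy as np

def fit_dispersion(total_counts, residual_variance):
    """Fit v = (b*log(k) + a)^2 by regressing sqrt(v) on log(k).
    `total_counts` are k_i(A+B) values; `residual_variance` is the
    observed variance minus the expected binomial variance."""
    x = np.log(np.asarray(total_counts, dtype=float))
    y = np.sqrt(np.clip(residual_variance, 0.0, None))  # guard tiny negatives
    b, a = np.polyfit(x, y, deg=1)                      # slope, intercept
    return a, b

def modeled_variance(k_total, a, b):
    return (b * np.log(k_total) + a) ** 2

# Synthetic calibration data generated from a = 0.5, b = 0.8 plus noise:
k = np.array([10, 30, 100, 300, 1000, 3000])
v_resid = (0.8 * np.log(k) + 0.5) ** 2 + np.random.default_rng(0).normal(0, 0.5, k.size)
a, b = fit_dispersion(k, v_resid)
print(round(a, 2), round(b, 2))  # roughly recovers a ~ 0.5, b ~ 0.8
```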
To apply the implementation of Eq 2 after training, the mean frequency is estimated from k_{i(A+B)} / n_{(A+B)}, and the variance is determined from the modeled variance with the total template counts in Sample A and B, k_{iA} + k_{iB}, as input parameters to determine β_1 and β_2 for that clone frequency. The python script implementing both the binomial and beta-binomial models and associated data is available at: https://github.com/jrytlewski/beta_binomial_paper. To determine p-values, we calculate and sum the exponent of the log-likelihood of the beta-binomial model (Eq 1 with the Eq 2 posterior) for each template count ranging from 0 to k_{iA} for a one-sided test. For a two-sided test, we repeat the summation for all template counts, both greater than and less than the count observed, that yield a point-estimate probability value no greater than that of the observed count. To control for multiple testing, we excluded T-cell clones where k_{i(A+B)} < 5 and employed the Benjamini-Hochberg (BH) correction. We considered T-cell clones with a BH-adjusted false discovery rate (FDR) less than 0.01 significant and differentially abundant and identified these rearrangements as expanded or contracted clones based on the direction of their frequency change.
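As a compact illustration of the two-sided test and BH filtering just described, the sketch below uses scipy's betabinom distribution with the μ, v reparametrization of Eq 2, assuming for simplicity equal sequencing depth in both samples. It is a simplified stand-in for the released script above, with hypothetical coefficients.

```python
import numpy as np
from scipy.stats import betabinom

def betabinom_pvalue(k_a, k_b, n_b, a_coef, b_coef):
    """Two-sided beta-binomial test for one clone: is k_b consistent with
    the pooled frequency, allowing the trained extra-binomial variance?
    Counts whose point probability <= that of the observed count are summed.
    Assumes equal template depth n_b in both samples (a simplification)."""
    k_tot = k_a + k_b
    mu = k_tot / (2.0 * n_b)                 # pooled frequency estimate
    v = (b_coef * np.log(k_tot) + a_coef) ** 2
    b1, b2 = mu * v, (1.0 - mu) * v          # shape parameters, per Eq 2
    pmf = betabinom.pmf(np.arange(n_b + 1), n_b, b1, b2)
    return float(pmf[pmf <= pmf[k_b]].sum())

def benjamini_hochberg(pvals, fdr=0.01):
    """Boolean mask of discoveries at the given false discovery rate."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    passed = p[order] <= fdr * np.arange(1, p.size + 1) / p.size
    n_keep = passed.nonzero()[0].max() + 1 if passed.any() else 0
    mask = np.zeros(p.size, dtype=bool)
    mask[order[:n_keep]] = True
    return mask

# Hypothetical clone: 10 vs 60 templates at 20,000 templates per sample.
p = betabinom_pvalue(k_a=10, k_b=60, n_b=20_000, a_coef=0.5, b_coef=0.8)
print(p, benjamini_hochberg([p, 0.5, 0.04]))
```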
2019-03-16T13:02:33.166Z
2019-03-14T00:00:00.000
{ "year": 2019, "sha1": "e5dfa3c73ad15501bce9e17c79d6ea8d9f4c3b4e", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0213684&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e5dfa3c73ad15501bce9e17c79d6ea8d9f4c3b4e", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
218595696
pes2o/s2orc
v3-fos-license
Generating directed networks with prescribed Laplacian spectra

Complex real-world phenomena are often modeled as dynamical systems on networks. In many cases of interest, the spectrum of the underlying graph Laplacian sets the system stability and ultimately shapes the matter or information flow. This motivates devising suitable strategies, with rigorous mathematical foundation, to generate Laplacians that possess prescribed spectra. In this paper, we show that weighted Laplacians can be constructed so as to exactly realize a desired complex spectrum. The method constitutes a nontrivial generalization of existing recipes, which assume the spectra to be real. Applications of the proposed technique to (i) a network of Stuart-Landau oscillators and (ii) the Kuramoto model are discussed. Synchronization can be enforced by assuming a properly engineered, signed and weighted adjacency matrix to rule the pattern of pairwise interactions.

I. INTRODUCTION

Complex networks play a role of paramount importance for a wide range of problems of cross-disciplinary breadth. In several cases of interest, networks define the skeleton of pairwise interactions between coupled populations, i.e., families of homogeneous constituents anchored to the nodes of the collection. The nature of the paired relationships between mutually entangled populations is encoded in the weights of the links that bridge adjacent nodes. Cooperative and competitive interactions can, in principle, be accommodated by allowing the weights to take positive or negative values. Local and remote nonlinear couplings shape the ensuing dynamics, possibly steering the system towards a stationary stable equilibrium which is compatible with the assigned initial condition. The stability of the fixed points can be analyzed by studying the dynamical system in its linearized version. For reaction-diffusion systems defined on networks, the stability of the inspected equilibrium is ultimately dictated by the spectrum of the discrete Laplacian matrix [1][2][3]. The eigenvalues of the Laplacian define, in fact, the support of the dispersion relation, the curve that sets the rate of exponential growth of the imposed perturbation. More specifically, external disturbances can, in general, be decomposed on the basis formed by the eigenvectors of the Laplacian operator. Each eigenvector defines an independent mode, which senses the web of intricate paths made accessible across the network: the perturbation can eventually develop, or, alternatively, fade away, along the selected direction, depending on the corresponding entry of the dispersion relation, as fixed by the associated eigenvalue. Stability is an attribute of paramount importance as it relates to resilience, the ability of a given system to oppose external perturbations that would take it away from the existing equilibrium. Similarly, synchronization, a widespread phenomenon in distributed systems, can be enforced by properly adjusting the spectrum of the matrix which encodes for the intertwined pairings. Based on the above, it is therefore essential to devise suitably tailored recipes for generating networks which display a prescribed Laplacian spectrum, compatible with the stability constraint [13,14]. The problem of recovering a network from a set of assigned eigenvalues has been tackled in the literature from both algorithmic [4][5][6] and formal [7,8] standpoints. In [8], a procedure is discussed to generate an undirected and weighted graph from its spectrum.
The result extends beyond the well-known theorem of Botti and Merris [9], which states that the reconstruction of non-weighted graphs is, in general, impossible, since almost all (non-weighted) trees share their spectrum with another non-isomorphic tree. In [11], a method is proposed to obtain a directed or undirected graph whose eigenvalues are constrained to match specific bounds, which ultimately reflect the node degrees as well as the associated weights. In [12], a mathematically rigorous strategy is instead developed to yield weighted graphs which exactly realize any desired spectrum. As discussed in [12], the method translates into an efficient approach to control the dynamics of various archetypal physical systems via suitably designed Laplacian spectra. The results are, however, limited to undirected Laplacians, characterized by a real spectrum. The purpose of this paper is to expand beyond these lines, by proposing and testing a procedure which enables one to recover a signed Laplacian operator that displays a prescribed complex spectrum. Signed Laplacians are often used in the literature for applications which relate to social contagion, cluster synchronization or repulsive-attractive interactions [19,20]. In engineering, they are often employed in modeling microgrid dynamics [21]. The paper is organized as follows. The first section is devoted to illustrating the devised method, focusing on the mathematical aspects. We then turn to discussing the implementation of the scheme and introducing the sparsification algorithms that are run to cut unessential links. In the subsequent section, we elaborate on the conditions that are to be met to generate a positively weighted network. This discussion is carried out with reference to a specific setting. Then, we apply the newly introduced technique to the study of an ensemble made of coupled Stuart-Landau oscillators [22-24] and to (a simplified version of) the Kuramoto model [27,28]. Finally, in the last section, we sum up the contributions and provide concluding remarks.

II. GENERATING A LAPLACIAN WITH PRESCRIBED EIGENVALUES.

Consider a network made of Ω nodes and denote by A the (weighted) adjacency matrix, where the structural information is encoded. More precisely, the element A_ij is different from zero when a directed link exists from j to i. The entries of the matrix A are real numbers and their signs reflect the specificity of the interaction at play: negative signs stand for inhibitory (or antagonistic) couplings, while positive entries point to excitatory (or cooperative) interactions. From the adjacency matrix, one can define the associated Laplacian operator. This is the matrix L, whose elements are L_ij = A_ij − k_i δ_ij, where k_i = Σ_j A_ij represents the natural extension of the concept of (incoming) connectivity to the case of a weighted network and δ_ij denotes the Kronecker delta. We shall here discuss a procedure to generate a Laplacian matrix which displays a prescribed set of eigenvalues. As anticipated above, we will focus in particular on directed Laplacians, which yield, in general, complex spectra. Concretely, we begin by introducing a collection of Ω = 2N + 1 complex quantities {Λ_1, ..., Λ_{2N+1}} (1). The first 2N elements come in complex conjugate pairs and we set in particular Λ_i = Λ*_{i+N}, ∀ i = 1, ..., N, where (·)* stands for complex conjugation. The aim of this section is to develop a rigorous procedure to construct a directed (and weighted) graph G with 2N + 1 vertices, whose associated Laplacian has {Λ_i} for eigenvalues.
Recall that {Λ_i} contains the null element, since this latter is, by definition, a Laplacian eigenvalue. The procedure that we are going to detail in what follows exploits the eigenvalue decomposition of the Laplacian matrix. To this end, we will seek to introduce a proper eigenvector basis such that L = V D V⁻¹ (2) is a Laplacian. (We shall assume all the eigenvalues but 0 to be complex numbers. Let us remark that the method developed readily adapts to the case where real eigenvalues are also present: the eigenvectors associated to real eigenvalues are then generated according to the prescriptions of [12], while eigenvectors linked to complex eigenvalues are assigned following the procedure outlined here.) In Eq. (2), D is a diagonal matrix where the sought Laplacian eigenvalues are stored. More specifically, D_ii = Λ_{i−1} for i = 2, ..., 2N + 1 and D_11 = Λ_{2N+1} = 0. The problem is hence traced back to constructing V, whose columns are the right eigenvectors of L. We also recall that the rows of the inverse matrix V⁻¹ are the left eigenvectors of L. As outlined in [15], Laplacian (right and left) eigenvectors should satisfy a set of conditions: 1. The columns of V which refer to complex conjugate eigenvalues must be complex conjugate too. 2. The same condition holds for the rows of V⁻¹. 3. Moreover, the columns of V (resp. the rows of V⁻¹) corresponding to eigenvalues different from 0 should sum up to zero. 4. Finally, the right eigenvector relative to the null eigenvalue should be uniform (i.e. display identical components). In light of the above, we put forward for V a block structure (3) in which, denoting by i the imaginary unit, U is an invertible N × N matrix having real entries, the vector u = (1 . . . 1)^T has dimension N × 1, and the companion vector v is defined in terms of U and u. The first column of V is hence a uniform vector, corresponding to the eigenvector associated to the null eigenvalue. By construction, every other column sums up to zero, that is, condition (4) holds. In the following, we will write D_jj = α_j + iβ_j, which, in turn, implies D_{j+N,j+N} = α_j − iβ_j, for j = 2, ..., N + 1. Here, α_j and β_j are real quantities and respectively denote the real and imaginary parts of the j-th eigenvalue. To proceed further, one needs to determine the inverse of V. To achieve this goal, we begin by considering a generic matrix W which satisfies the general constraints that are in place for V⁻¹, written in terms of blocks S and S*, a vector w and a scalar d (5). Note that the j-th and (N + j)-th rows of W, for j = 1, ..., N + 1, are complex conjugate, as required. Moreover, summing all the elements of each row (but the first) yields zero, a condition that the inverse of V should meet, as anticipated above. Building on these premises, we shall here determine the unknowns S, w and d so as to match the identity W V = I, where I stands for the (2N + 1) × (2N + 1) identity matrix. This implies, in turn, that W ≡ V⁻¹, due to the uniqueness of the inverse matrix. A straightforward manipulation yields conditions (7) for, respectively, d, w and S, where use has been made of the identity u^T u = N. The quantity d is completely specified by the first of Eqs. (7) and solely depends on c and N, the size of the system. By making use of the identities (4) and (6), one can progress in the analysis of the second and third conditions in (7), from which w follows at once. The analysis can be pushed further to relate S to the matrices U and E.
The calculation, detailed in Appendix A, yields an explicit expression for S; the expression for S* can be immediately obtained by taking the complex conjugate of that equation. In conclusion, the matrix W defined in (5) is the inverse matrix of V, provided that d and S are assigned as specified above. Clearly, the matrix L defined in (2) has the desired spectrum (1). We should, however, prove that L is a Laplacian. This amounts to showing that L is a real matrix whose columns sum up to zero. The proof is given hereafter.

Proposition. Matrix L is real. From Eqs. (3) and (5), one can readily compute the elements of L via matrix products and, taking advantage of the block structures of V and W, prove that L is real. Here ℜ(·) is introduced to represent the real part of (·). The generic element L_st can be written as a sum over the non-trivial modes, due to the diagonal structure of the matrix D and recalling that D_11 = 0. By making use of the specific form of V and W, the conjugate contributions combine into real terms, since αβ + α*β* = 2ℜ(αβ) for any complex numbers α and β. One can thus conclude that L, as generated by the above procedure, is real.

Proposition. Each column of L sums up to zero. Because of the diagonal structure of D, the column sums Σ_s L_st can be expanded mode by mode; each term vanishes since (i) D_11 = 0 and (ii) the components of all eigenvectors corresponding to non-null eigenvalues sum up to zero. Notice that this result can also be proven by observing that the uniform vector d·1 is the left eigenvector corresponding to the null eigenvalue, that is, (d·1)^T L = 0 (23). From (23), it follows that Σ_i L_ij = 0 for every j.

Proposition. L is balanced. We can also show that Σ_j L_ij = 0, i.e. that the sum of all the elements of any given row i returns zero. According to (2), the first column of V is the right eigenvector corresponding to eigenvalue 0, namely L1 = 0. This implies, in turn, Σ_j L_ij = 0 ∀i, which ends the proof. The Laplacian is hence balanced, as the sums on the rows and on the (corresponding) columns return the same result.

From the generated Laplacian operator, one can readily calculate the adjacency matrix of the underlying network. In general, for any assigned spectrum, the recovered adjacency matrix is fully connected, meaning that there exist links connecting each pair of nodes. Notice that the links are weighted and signed. The weights can be small and, as such, have a modest impact on the spectrum of the associated Laplacian. This motivates the implementation of a dedicated sparsification procedure, which seeks to remove links that are unessential in terms of their reflection on the ensuing Laplacian spectrum. The next section is devoted to elaborating along these lines.

III. EXAMPLES AND SPARSIFICATION.

In this section we discuss a sparsification procedure, which aims at simplifying a posteriori the structure of the recovered network. To this end, we begin by generating a network following the strategy outlined in the preceding section, which yields an assigned spectrum for the associated Laplacian. The Laplacian spectrum that we seek to recover consists of Ω = 2N + 1 complex entries, the eigenvalues, which are here confined to the left portion of the complex plane by setting ℜ(Λ_j) = α_j < 0 for j ≥ 2, see the blue crosses in Fig. 1(a). This choice is somewhat arbitrary, and ultimately amounts to enforcing stability into a linear system of the form

ẋ_i = Σ_j L_ij x_j,   (25)

where x_i is the i-th entry of the Ω-dimensional state vector x. In the final part of the paper, we will turn to considering more complex scenarios where the stability of the examined dynamics is also influenced by local reaction terms. A compact numerical sketch of the generation pipeline described above is given below.
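As a concrete illustration, the following minimal Python sketch enforces conditions 1-4 directly with randomly drawn zero-sum eigenvectors, rather than through the explicit (U, u, v) parametrization of Eq. (3); the helper name laplacian_from_spectrum and all numerical values are ours, not the paper's. The resulting L = V D V⁻¹ is real and balanced, with the prescribed spectrum.

import numpy as np

rng = np.random.default_rng(42)

def laplacian_from_spectrum(lam):
    # lam: N complex eigenvalues (one representative per conjugate pair).
    # The full spectrum is {0} U {lam} U {conj(lam)}, as in Eq. (1).
    N = len(lam)
    Om = 2 * N + 1
    Z = rng.standard_normal((Om, N)) + 1j * rng.standard_normal((Om, N))
    Z -= Z.mean(axis=0)                    # columns sum to zero (condition 3)
    V = np.hstack([np.ones((Om, 1)), Z, Z.conj()])   # conditions 1 and 4
    D = np.diag(np.concatenate(([0j], lam, lam.conj())))
    L = V @ D @ np.linalg.inv(V)           # Eq. (2)
    assert np.abs(L.imag).max() < 1e-8     # L is real, as proven above
    return L.real

lam = -(1 + rng.random(5)) + 1j * (1 + rng.random(5))  # stable half-plane
L = laplacian_from_spectrum(lam)
print(np.allclose(L.sum(axis=0), 0), np.allclose(L.sum(axis=1), 0))  # balanced
print(np.sort_complex(np.linalg.eigvals(L)))           # prescribed spectrum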
The network that we obtain, following the scheme outlined in the preceding sections and yielding a Laplacian with the prescribed spectrum, is in general fully connected. In other words, a weighted link exists between any pair of nodes. The weights of the links can be, in principle, very small and, as such, bear a modest imprint on the ensuing Laplacian spectrum. Motivated by this observation, we perform an a posteriori sparsification of the obtained network: this aims at identifying and then removing the finite subset of links that have a modest impact on the eigenvalues of the associated Laplacian. The first sparsification procedure that we have considered aims at removing unessential links while confining the spectrum of the Laplacian operator within a bounded region of the complex plane. More precisely, we focus on the links which display a weight in the range (−σ, σ), where σ is a small, arbitrarily chosen cutoff. All links whose weight is smaller than σ in absolute value are selected, in a random order. The selected link is removed and the modified Laplacian spectrum computed. Denote by Λ̂_j, for j = 2, ..., 2N + 1, the Laplacian eigenvalues obtained upon removal of the link. The change to the network arrangement is accepted provided the displacement of the imaginary parts stays within the allowed threshold, |ℑ(Λ_j) − ℑ(Λ̂_j)| < δ for j = 2, ..., N. Here ℑ(·) stands for the imaginary part of (·) and δ is an arbitrary threshold which quantifies the amount of perturbation that is deemed acceptable for the problem at hand. As a further condition, we check that ℜ(Λ̂_j) < 0 for j = 2, ..., 2N + 1, which, in turn, corresponds to preserving the stability of the linear system (25). Clearly, the order of extraction of the links which are candidates to be trimmed matters. Different realizations of the procedure of progressive sparsification illustrated above might hence result in distinct final outcomes. In Fig. 1(a), the eigenvalues obtained after the sparsification algorithm are plotted (red circles) for one choice of the cutoff δ. The sparsity pattern of the adjacency matrix obtained at the end of the above procedure is displayed in panel (b) of Fig. 1. In panels (c) and (d) of Fig. 1 we plot, with an appropriate color code, the entries of the adjacency matrices before and after the sparsification. Only weights which are significantly different from zero (see the annexed colorbars) are displayed. As appreciated by visual inspection, the skeleton of the network is not altered by the applied sparsification. To monitor how the eigenvalues get redistributed within the bounded domain to which they belong, we introduce two indicators: the quantity I_x measures the dispersion along the imaginary axis, by weighting the squared distance of each eigenvalue from the horizontal axis; conversely, I_y reflects the scattering of the eigenvalues about their mean, in the direction of the real axis. In Fig. 2, I_x and I_y, normalized to their respective values before application of the sparsification algorithm, are shown against N, an indicator of the size of the generated networks. The sparsification procedure shrinks the eigenvalues in the x-direction, while the opposite tendency is observed for the distribution along the y-direction. A minimal sketch of this thresholded link-removal loop is given below.
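The following sketch implements the greedy trimming loop; since the precise acceptance rule is garbled in the source, the test used here (bounded displacement of the imaginary parts plus preserved stability) is an assumption consistent with the description above, and pairing eigenvalues by sorted order is a crude stand-in for a proper matching.

import numpy as np

def sparsify(L, sigma=1e-2, delta=1e-2, seed=1):
    rng = np.random.default_rng(seed)
    L = L.copy()
    lam0 = np.sort_complex(np.linalg.eigvals(L))      # prescribed spectrum
    n = L.shape[0]
    weak = [(i, j) for i in range(n) for j in range(n)
            if i != j and 0 < abs(L[i, j]) < sigma]   # candidate links
    for idx in rng.permutation(len(weak)):            # random extraction order
        i, j = weak[idx]
        trial = L.copy()
        trial[i, i] += trial[i, j]   # compensate the diagonal: zero row sums
        trial[i, j] = 0.0            # remove the directed link j -> i
        lam = np.sort_complex(np.linalg.eigvals(trial))
        if (np.abs(lam.imag - lam0.imag).max() < delta   # bounded Im drift
                and (lam.real < 1e-12).all()):           # stability preserved
            L = trial
    return L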
The second sparsification method implements a more stringent constraint. Just like before, we select the links with weights in the range (−σ, σ), where σ acts as a small threshold. Unlike in the former case, we now eliminate the selected link only if the change produced on each of the eigenvalues is smaller than δ in modulus, namely if |Λ_j − Λ̂_j| < δ for j = 2, ..., 2N + 1. In Fig. 3, the eigenvalues obtained after the sparsification algorithm are plotted (red circles) for two choices of the cutoff δ. The number of links that can be effectively removed grows with δ, the size of the allowed perturbation, as clearly demonstrated in Fig. 4. Summing up, we have developed and tested a procedure to generate a network which returns an associated Laplacian matrix with a prescribed complex spectrum. The weighted network obtained following the above procedure is, in general, fully connected. Dedicated sparsification strategies can be applied to remove the links which carry a small weight and bear a modest imprint on the ensuing Laplacian spectrum. In the following, we will consider a specific setting of the aforementioned generation scheme, which makes it possible to compute the Laplacian elements analytically.

IV. FOCUSING ON THE SPECIAL CASE U = qI

In the previous sections we described a general method to generate a Laplacian matrix which displays a designated spectrum. The method assumes a generic matrix U, which can be randomly assigned. In the following, we will focus on the specific case where U is proportional to the identity matrix and progress with the analytic characterization of the obtained Laplacian. As we shall argue in the following, working in this framework allows us to derive a set of closed conditions for constraining the weights of the underlying network to strictly positive values. To proceed in this direction we set U = qI, where I stands for the identity matrix and q is a scalar. A straightforward calculation returns an explicit expression for the matrix S, while w is a uniform vector with identical entries equal to Na + ((N − 1)a − b)i; the quantities d, a and b are specified by closed expressions in terms of the spectrum. The diagonal elements satisfy a relation in which use has been made of the identity ℜ(D_kk) = α_k; the first row and column, as well as the remaining elements, likewise admit explicit expressions. The Laplacian matrix obtained with the procedure illustrated above has both positive and negative entries. Signed Laplacians are often used in consensus problems, where negative weights model antagonistic interactions. In other contexts, e.g. when the Laplacian stems from diffusive interactions, non-diagonal entries are constrained to positive values. In the following, we will provide a set of necessary conditions for the assigned spectrum to eventually yield a Laplacian with positive extra-diagonal elements, L_ij > 0 for i ≠ j. Clearly, L_ii < 0, as summing over the rows should return zero. The underlying network therefore displays positive weights, as its adjacency matrix is basically obtained from the Laplacian matrix by replacing the diagonal elements with zeros. Further, we will set ℜ(Λ_j) = α_j < 0 for j ≥ 2, an assumption which corresponds to dealing with a stable linear system of the type given in Eq. (25). This requirement immediately yields L_11 < 0, as follows from relation (33). We will also operate in the setting analyzed above, i.e. assuming U = qI. The obtained expressions for the Laplacian elements allow one to recast the sought sign conditions as a set of inequalities holding for t = 2, ..., N + 1, which can be simplified, after some algebraic manipulations, to the compact conditions (47). The interested reader can access the detailed steps in Appendix C.
Assume that the assigned Laplacian spectrum matches the above conditions, while having α_k < 0 for k ≥ 2. Then the Laplacian matrix obtained with the above procedure, with U = qI, displays positive non-diagonal entries. Let us explore the consequences of conditions (47). In the following section, we will apply the method of Laplacian generation discussed here to the study of two prototypical examples of dynamical systems on networks.

V. SELECTED APPLICATIONS

In this section we consider two different models of interacting oscillators defined on a network. In both cases, the coupling between individual oscillators is implemented via a discrete Laplacian operator, which reflects the specific network arrangement. It will be shown that a suitable network arrangement can be established a priori, building on the procedure illustrated above, so as to make the inspected systems stable against external perturbations.

A. Coupled Stuart-Landau oscillators

Consider an ensemble made of 2N + 1 nonlinear oscillators and denote by W_j their associated complex amplitude. We assume the oscillators to be mutually coupled via a diffusive-like interaction which is mathematically modeled by a discrete Laplacian operator. Each oscillator obeys a complex Stuart-Landau equation. The dynamics of the system can be cast in the form

Ẇ_j = W_j − (1 + ic₂)|W_j|² W_j + K(1 + ic₁) Σ_k (A_jk − k_j δ_jk) W_k,   (49)

where c₁ and c₂ are real parameters. The index j runs from 1 to 2N + 1, the total number of oscillators. Here, K is a suitable parameter setting the coupling strength; without loss of generality, in what follows it is assumed that K = 1. A_ij is the generic entry of the directed and weighted adjacency matrix A and k_i = Σ_j A_ij. The system admits a homogeneous limit-cycle solution of the form W_LC(t) = e^{−ic₂t}. To characterize the stability of the cycle, one can introduce a non-homogeneous perturbation in polar coordinates, W_j = (1 + ρ_j) e^{iθ_j} e^{−ic₂t}. By linearizing around the limit-cycle solution (ρ_i(t) = 0, θ_i(t) = 0), one gets a linear system (51) for the perturbations. To proceed further, expand the perturbations ρ_j and θ_j on the basis of the Laplacian eigenvectors φ^(α). By inserting this expansion in (51) and using the eigenvalue relation Σ_j L_ij φ_j^(α) = Λ^(α) φ_i^(α) for α = 1, ..., 2N + 1, we obtain a condition formally equivalent to the expression of the continuous dispersion relation. If λ_Re = ℜ(λ_max) is positive for some Λ^(α), the perturbation grows exponentially in time, and the initial homogeneous state proves unstable. Conversely, if λ_Re = ℜ(λ_max) < 0 for every Λ^(α), the perturbation gets reabsorbed and the system converges back to the fully synchronized state. The condition λ_Re < 0 can be further processed analytically, as discussed in [15]. In particular, it can be shown that the latter condition is fulfilled if the Laplacian eigenvalues fall in a specific portion of the complex plane, which reflects the choice made for the reaction parameters c₁, c₂ and K. The region of interest is the one enclosed between the two solid lines displayed in Fig. 6(a) for the specific selection of the parameters. The blue symbols depicted in Fig. 6(a) are randomly generated so as to fall in the region of the complex plane where stability holds. They represent the spectrum of the Laplacian that we seek to recover following the method illustrated above. In Fig. 6(b), λ_Re is plotted against −Λ_Re = −ℜ(Λ), confirming the stability of the homogeneous solution. We now proceed by generating a Laplacian matrix constructed so as to yield the spectrum depicted in Fig. 6(b); a direct numerical check of this construction is sketched below.
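A minimal check can be sketched as follows, under the assumption that the Stuart-Landau system takes the standard network form of Eq. (49) reconstructed above; the parameter values and the forward-Euler integrator are illustrative choices of ours. The printed residual should shrink towards zero when the prescribed spectrum lies inside the stability domain of Fig. 6(a), and grow otherwise.

import numpy as np

rng = np.random.default_rng(3)

def laplacian_from_spectrum(lam):
    # compact restatement of the generation sketch of Sec. II
    N = len(lam); Om = 2 * N + 1
    Z = rng.standard_normal((Om, N)) + 1j * rng.standard_normal((Om, N))
    Z -= Z.mean(axis=0)
    V = np.hstack([np.ones((Om, 1)), Z, Z.conj()])
    return (V @ np.diag(np.concatenate(([0j], lam, lam.conj())))
            @ np.linalg.inv(V)).real

c1, c2, K = 0.5, 1.0, 1.0                        # illustrative parameters
lam = -(1 + rng.random(5)) + 1j * (1 + rng.random(5))
L = laplacian_from_spectrum(lam)
W = 1 + 0.01 * rng.standard_normal(L.shape[0])   # perturbed limit cycle W_LC

dt = 1e-3
for _ in range(50000):                           # forward Euler up to t = 50
    W = W + dt * (W - (1 + 1j * c2) * np.abs(W) ** 2 * W
                  + K * (1 + 1j * c1) * (L @ W))

print(np.abs(np.abs(W) - 1).max())               # distance from |W_j| = 1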
From this Laplacian, we compute the corresponding adjacency matrix A and use it to define the interactions between coupled oscillators, as follows from Eq. (49). We then integrate the governing equations numerically, assuming the initial state to be a perturbation of the homogeneous synchronized equilibrium. As expected, the perturbation fades away and the system regains its unperturbed, fully synchronized equilibrium.

B. The Kuramoto model

As a second example we set out to study the Kuramoto model. Consider a system made of 2N + 1 oscillators, denote by θ_i the phase of the i-th oscillator and by ω_i its natural frequency. The oscillators evolve as dictated by the following system of 2N + 1 coupled differential equations:

θ̇_i = ω_i + Σ_j A_ij sin(θ_j − θ_i).   (55)

Here, A_ij stands for the entries of the adjacency matrix A which sets the interactions between pairs of oscillators. The matrix is, in principle, weighted, and may display positive and negative entries, reflecting the specific interaction (excitatory or inhibitory) at play. As an additional assumption, we will here focus on the simplified setting where ω_i = ω ∀i. We can then introduce the new variable ψ_i = θ_i − ωt and write the governing equation in the equivalent form

ψ̇_i = Σ_j A_ij sin(ψ_j − ψ_i).   (56)

A homogeneous solution always exists with ψ_i = Ψ ∀i, for any constant Ψ ∈ [0, 2π), as can be immediately checked by substitution. To assess the stability of the solution, one sets ψ_i = Ψ + δ_i and expands (56) at leading order in the δ_i. In this way, one gets

δ̇_i = Σ_j L_ij δ_j,

where L_ij = A_ij − k_i δ_ij is the Laplacian operator which stems from A_ij. The stability of the simplified Kuramoto model considered here is thus controlled by a linear system of the type introduced in (25), with the obvious replacement of x_i by δ_i. The system hence proves stable if the (non-trivial) eigenvalues of the Laplacian operator display negative real parts. Our aim here is to generate a Laplacian (and therefore a matrix of weighted connections among oscillators) which warrants the stability of the system. To this end we assign the eigenvalues (which appear in conjugate pairs) to the negative portion of the complex plane, see Fig. 7(a). The null eigenvalue is clearly included in the spectrum. Running the procedure discussed in the first part of the paper, we obtain the corresponding Laplacian and compute the associated adjacency matrix. The Kuramoto model (56) is then integrated numerically by assuming the recovered expression for A. As predicted, the system is stable to external perturbations, as one can clearly appreciate by inspection of Fig. 7(b); a minimal reproduction of this check is sketched below.
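The Kuramoto check can be reproduced with the brief sketch below, again reusing a compact version of the generation routine; the initial condition and time step are illustrative. For small perturbations, the phase spread should decay to zero whenever the non-trivial Laplacian eigenvalues have negative real parts.

import numpy as np

rng = np.random.default_rng(7)

def laplacian_from_spectrum(lam):
    # compact restatement of the generation sketch of Sec. II
    N = len(lam); Om = 2 * N + 1
    Z = rng.standard_normal((Om, N)) + 1j * rng.standard_normal((Om, N))
    Z -= Z.mean(axis=0)
    V = np.hstack([np.ones((Om, 1)), Z, Z.conj()])
    return (V @ np.diag(np.concatenate(([0j], lam, lam.conj())))
            @ np.linalg.inv(V)).real

lam = -(1 + rng.random(5)) + 1j * (1 + rng.random(5))
L = laplacian_from_spectrum(lam)
A = L.copy()
np.fill_diagonal(A, 0.0)                 # adjacency: off-diagonal part of L

psi = 0.3 + 0.05 * rng.standard_normal(L.shape[0])   # perturbed uniform phase
dt = 1e-3
for _ in range(50000):                   # integrate Eq. (56)
    psi = psi + dt * (A * np.sin(psi[None, :] - psi[:, None])).sum(axis=1)

print(psi.std())                         # -> ~0: the phases relock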
Appendix B: About the computation of L_ij

The aim of this section is to detail the computations needed to obtain explicitly the entries of the Laplacian matrix L under the assumption U = qI, starting from Eqs. (16) and (18). Let us begin by computing the diagonal elements of L, namely L_ii for i = 1, ..., N. For i = 1, one obtains an explicit expression; from (4) and (6) we then obtain, for k = 2, ..., N + 1, the remaining diagonal entries, where use has been made of the identity ℜ(D_kk) = α_k.

Appendix C: Sign conditions

Focus now on the conditions for β_t. We assume that condition (C4) holds and set out to explore its consequences. Eq. (C4) yields a bound which, for N > 1, maps conditions (C3) into an equivalent system (C6). Remark that, due to an inequality which holds for each N > 1, system (C6) takes the form (C9). Then, (C9) has no solutions under the working hypothesis put forward to derive it. In fact, summing over every t we obtain a relation which, in turn, implies 4N²/(4N² − 1) < 1, a condition that is obviously never met. We now go back to revise ansatz (C4) and consider the alternative scenario in which the opposite inequality holds. Following a path analogous to the one discussed above, we obtain the conditions quoted in the main text.
Histological transition from minimal change disease to THSD7A-associated membranous nephropathy in a patient receiving long-term steroid treatment: A case report

Rationale: A predominant Th2 immune response is suggested in the pathogenesis of both minimal change disease (MCD) and membranous nephropathy (MN); however, consecutive development of the 2 diseases in a patient is extremely rare. Patient concerns: A Japanese man, who developed nephrotic syndrome in his 50s and was diagnosed with MCD by renal biopsy, experienced a relapse of proteinuria approximately 3 years later during long-term steroid treatment. Since the proteinuria was resistant to an increase in steroid dosage, repeat renal biopsy was performed, which revealed a small amount of glomerular subepithelial immune deposits containing immunoglobulin (Ig)G (dominantly IgG4). Immunostaining for thrombospondin-type-1-domain-containing-7A (THSD7A) was positive on the glomerular capillary walls, whereas that for other causative antigens of MN, such as phospholipase A2 receptor or neural epidermal growth factor-like 1 protein, was negative. Detailed examination found no associated condition, including malignancies and allergic diseases. Diagnosis: The diagnosis of THSD7A-associated idiopathic MN was made. Interventions and outcomes: He received a further increased dose of steroids. Thereafter he maintained clinical improvement, as his urinary protein level decreased. Lessons: The present case suggests that histological transition from MCD to MN is possible and that repeat biopsy is crucial for accurate diagnosis.

Introduction

Minimal change disease (MCD) and membranous nephropathy (MN) are the leading causes of nephrotic syndrome in adults. MN is usually classified into the idiopathic type, which occurs in the absence of an underlying disease, and the secondary type, which is associated with a causative systemic disease. Among a number of potential causative antigens for MN that have been identified, phospholipase A2 receptor (PLA2R) and thrombospondin-type-1-domain-containing-7A (THSD7A), expressed on podocytes, are representative for patients with idiopathic MN, and autoantibodies of the immunoglobulin (Ig)G4 subclass are dominant in the glomerular immune deposits of such patients [1,2]. Although the precise disease pathogenesis remains to be clarified, predominant Th2 immune responses are supposed to play important roles in both MCD and MN [3-6]. However, whether some overlap exists between the 2 diseases is still unclear, and to the best of our knowledge, there is only 1 reported case that was initially diagnosed as MCD but subsequently developed into MN [7]. Here, we report a case in which the first renal biopsy diagnosed MCD, but a repeat renal biopsy, performed approximately 4 years later owing to the relapse of proteinuria, demonstrated the development of THSD7A-associated MN during long-term steroid treatment.

Case presentation

A previously healthy Japanese man in his 50s developed anasarca and was referred to our hospital 4 years ago. Massive proteinuria along with hypoproteinemia, which led to the diagnosis of nephrotic syndrome, was noted, as shown in Table 1. Despite a low IgG level and a high IgE level, dysproteinemia was not detected. Renal biopsy was performed, and light microscopy revealed glomeruli with apparently normal appearance (Fig. 1A). Immunofluorescence staining for Igs and complements was all negative (Fig. 1B), and no electron-dense deposit was observed by electron microscopy (Fig. 1C).
He was diagnosed with MCD and was treated with steroids, as recommended by the guideline in Japan [8]. He achieved complete remission of nephrotic syndrome and maintained a stable condition under long-term steroid treatment. However, proteinuria (approximately 1 g/g Cr) recurred approximately 3 years after the onset of nephrotic syndrome. Although the dose of steroids was increased from prednisone at 7.5 mg daily to 25 mg daily, proteinuria aggravated further. Laboratory results at this point are summarized in Table 1. Although hypoproteinemia was noted, his serum creatinine level was normal. Dysmorphic red blood cells were observed in the urinary sediment, although nephritic casts were absent. He complained of peripheral edema, although the rest of the physical findings were unremarkable, and his vital signs were normal.

Since the patient presented with atypical clinical findings for MCD, such as steroid-resistant proteinuria accompanied by glomerular hematuria, repeat renal biopsy was performed approximately 4 years after the first. Light microscopy sections contained 16 glomeruli, of which one was obsolescent, but neither proliferative changes nor thickening of the glomerular capillary walls was seen in the remaining glomeruli (Fig. 2A). Immunofluorescence staining showed deposition of IgG along the glomerular capillary walls, although that of IgA, IgM, and complements C3 and C1q was negative (Fig. 2B). Immunofluorescence staining for IgG subclasses showed dominant deposition of IgG4 (Fig. 2C). Although electron microscopy showed small amounts of subepithelial electron-dense deposits, spike formation of the glomerular basement membrane was unclear (Fig. 2D). Immunostaining for THSD7A was positive on the capillary walls (Fig. 2E), whereas that for PLA2R or neural epidermal growth factor-like 1 protein was negative (data not shown). Serum anti-PLA2R, measured by enzyme-linked immunosorbent assay, was negative (2.04 RU/mL; positive cutoff value, 20 RU/mL). Detailed examination, including imaging tests, was repeatedly performed, but underlying diseases, such as malignancies, were not found, and a diagnosis of THSD7A-associated idiopathic MN (stage I) was made based on the second renal biopsy. He received a further increased dose of steroids [intravenous half-dose methylprednisolone pulse therapy (500 mg daily for 3 days) followed by oral prednisone at 30 mg per day]. His urinary protein level gradually decreased thereafter, and he maintained clinical improvement during the 1-year follow-up period (proteinuria level of approximately 0.5 g/g Cr with a normal serum creatinine level).

Discussion

We report a case in which histological transition from MCD to MN occurred during long-term steroid treatment. To the best of our knowledge, only one such case had been reported to date, in which the first and second renal biopsies yielded diagnoses of MCD and idiopathic MN, respectively, but a third renal biopsy resulted in a final diagnosis of lupus nephritis [7]. In that case, the causative antigen for idiopathic MN was not identified, and complement C1q deposition, which suggests the possibility of secondary MN, was observed. On the other hand, in our patient, C1q deposition was negative, and dominant IgG4 deposition was observed by immunofluorescence IgG subclass staining. Furthermore, enhanced granular expression of THSD7A on the glomerular capillary walls prompted the final diagnosis of THSD7A-associated idiopathic MN.
THSD7A is the second antigenic target identified for idiopathic MN; it was described in 2014, following the earlier discovery of PLA2R [2]. The prevalence of THSD7A-associated MN among idiopathic MN is reportedly 3 to 9% in Japan and is supposed to be higher than that in western countries [9,10]. Moreover, THSD7A-associated MN is reported to often be associated with malignancies [11]. Some studies have reported THSD7A-associated MN to be related to allergic diseases, especially asthma, or to eosinophilia [12,13]. In our patient, detailed and repeated examination did not detect malignancies or allergic diseases. The patient's eosinophil count was normal when THSD7A-associated MN was diagnosed, although assessment was difficult due to the administration of steroids. Nonetheless, careful and prolonged follow-up to monitor the development of malignancy is warranted, since the onset of MN is known to sometimes precede the development of malignancy by years. In addition, it has been reported that the etiology of MN can change from idiopathic to malignancy-associated MN during the clinical course [14]. Moreover, a previous diagnosis of MCD and MN does not necessarily preclude the possibility of subsequent development of systemic lupus erythematosus, as reported earlier [7]. Predominant Th2 immune responses are supposed to play important roles in both MCD and MN [3]. Our patient presented an elevated serum IgE level when he first developed MCD, although we did not evaluate the patient's other immune conditions, such as Th1/Th2 cytokines or thymus and activation-regulated chemokine. Repeat renal biopsy is now seldom performed in patients with either MCD or MN; however, the present case suggests that susceptible individuals could develop both MCD and MN, depending on as-yet-unknown pathogenetic mechanisms, and that histological transition from one to the other can occur. Further accumulation and analysis of similar cases would be required to clarify this point. Moreover, the importance of repeat renal biopsy, which is crucial for definitive diagnosis, should be recognized, especially when unexpected clinical courses occur.

The positive rate of serum anti-THSD7A antibodies has been reported to be comparable to that observed by glomerular THSD7A immunostaining [15]. Although immunostaining for THSD7A using the first renal biopsy tissue was negative (Fig. 1D), we could not evaluate the patient's serum anti-THSD7A antibodies throughout the clinical course due to the lack of preserved serum samples, which is a limitation of this report. Nevertheless, the present case showed histological transition from MCD to THSD7A-associated MN during long-term steroid treatment, suggesting that when unexpected clinical courses occur, renal biopsy should be performed repeatedly. Further, whether there exists some commonality between MCD and MN should be investigated in a future study.

Figure 1. Histological features of the first renal biopsy. (A) Light microscopy image showing a glomerulus without proliferative changes and thickening of the glomerular capillary walls (periodic acid-Schiff stain). (B) Negative immunofluorescence staining for immunoglobulin G and complement C3. (C) Electron microscopy image showing effacement of podocyte foot processes. No electron-dense deposit was observed. (D) Immunoperoxidase staining for thrombospondin-type-1-domain-containing-7A was negative at this time.
Figure 2. Histological features of the repeat renal biopsy. (A) Light microscopy image showing a glomerulus with almost normal appearance (periodic acid-methenamine-silver stain). (B) Immunofluorescence staining showing the deposition of immunoglobulin (Ig)G along the glomerular capillary walls. The deposition of IgA, IgM, and complements C3 and C1q was negative. (C) Dominant deposition of IgG4 is shown by immunofluorescence staining of the IgG subclasses. (D) Electron microscopy demonstrated small amounts of subepithelial electron-dense deposits (yellow arrows) and podocyte foot process effacement. Spike formation of the glomerular basement membrane was unclear. (E) Immunoperoxidase staining for thrombospondin-type-1-domain-containing-7A was strongly positive on the glomerular capillary walls.

Table 1. Laboratory data of the patient.
The silica-carbon biogeochemical cycle in the Bohai Sea and its responses to the changing terrestrial loadings

Silicon (Si) and carbon (C) play key roles in river and marine biogeochemistry. The Si and C budgets for the Bohai Sea were established on the basis of measurements at a range of stations and additional data from the literature. The results show that the spatial distributions of reactive Si and organic C (OC) in the water column are largely affected by riverine input, primary production and export to the Yellow Sea. Biogenic silica (BSi) and total OC in sediments are mainly derived from marine primary production. The major supply of dissolved silicate (DSi) comes from benthic diffusion; riverine input alone accounts for 17% of reactive Si inputs to the Bohai Sea. The dominant DSi removal from the water column is diatom uptake, followed by sedimentation. Rivers contribute 47% of exogenous OC inputs to the Bohai Sea; the dominant outputs of OC are sedimentation and export to the Yellow Sea. The net burial of BSi and OC represents 3.3% and 1.0% of total primary production, respectively. Primary production has increased by 10% since 2002 as a result of increased river loads of DSi and BSi. Our findings underline the critical role of riverine Si supply in primary production in coastal marine ecosystems.

Introduction

Diatoms control a large part of primary production in marine ecosystems, accounting for more than 50% in the global ocean and more than 75% in coastal waters (Nelson et al., 1995; Rousseaux and Gregg, 2014). The consumption of dissolved silicate (DSi) and production of biogenic silica (BSi) is primarily controlled by primary production by diatoms (Ragueneau et al., 2000; Tréguer and De La Rocha, 2013). Although ocean margins cover only 8% of the global ocean area (Berner, 1982), the production and accumulation rates of BSi and organic carbon (OC) in these areas are significantly higher than in the open ocean (Hedges and Keil, 1995; Tréguer and De La Rocha, 2013). Rivers are the dominant Si and OC source in coastal marine ecosystems, accounting for up to 80% of total exogenous input (Bauer et al., 2013; Regnier et al., 2013; Tréguer and De La Rocha, 2013). However, large parts of the world's coastal marine ecosystems have been changing due to decreasing riverine Si discharge as a result of Si trapping in reservoirs (Conley, 1997; Humborg et al., 1997). The decreasing input of Si may lead to a shift from a system dominated by diatoms to one dominated by non-siliceous phytoplankton (Humborg et al., 1997, 2000; Tréguer and De La Rocha, 2013; Rousseaux and Gregg, 2015), which may influence the functioning of coastal marine ecosystems, especially with respect to the carbon (C) cycle. The Bohai Sea is a semi-enclosed, shallow shelf water body of the Northwestern Pacific Ocean. A large number of rivers drain into the Bohai Sea, typically with densely populated and industrialized coastal areas. Ongoing human activities (dam construction, agriculture and industry) have induced important changes in the river nutrient concentration and composition (Gong et al., 2015; Liu, 2015).
Dam construction has caused decreased Si transport by the Yellow River (Liu, 2015; Ran et al., 2015) and distorted nutrient stoichiometry (Tang et al., 2003; Ning et al., 2010; Liu et al., 2011), which changed primary production and phytoplankton composition (Tang et al., 2003; Lin et al., 2005; Ning et al., 2010). Phytoplankton abundance decreased over the period 1959-1999 (Tang et al., 2003), and a succession of the dominant species from diatoms to non-diatoms was also found in the 1980s and 1990s (Lin et al., 2005). The water and sediment regulation of the Yellow River since 2002 may enhance primary production by increasing the export of water and sediment to the Bohai Sea. Changes of nutrient inputs from rivers have a larger and more long-lasting influence on the ecosystem in the semi-enclosed Bohai Sea than in open seas, because the water residence time in the Bohai Sea is about 3 years (Liu et al., 2012). There may be a close connection between the changes in nutrient loading and primary production in coastal areas (Bernard et al., 2011). Recent studies also pointed out the sensitivity of shelf seas to changing riverine loading due to anthropogenic perturbations, but unfortunately these studies did not cover riverine Si input to coastal marine ecosystems and the consequences for the C cycle (Li et al., 2014; Woodland et al., 2015). Our understanding of the regional coupled Si-C cycle and the ecological effects of changing river loadings on the continental shelves of eastern China is poor (Ragueneau et al., 2010). In this paper we establish a Si and C budget for the Bohai Sea to analyze the coupled Si-C biogeochemistry; the aim is to quantify the influence of changing terrestrial loadings on the Si cycle and primary production in the Bohai Sea.

Sampling and analytical methods

Two campaigns were carried out in spring (May 3 to 24) and autumn (November 2 to 20) of 2012 at several sampling stations in the Bohai Sea and the adjacent area of the Northern Yellow Sea (Fig. 1). Water samples in surface (0.5 m) and bottom water (<2 m from the sea floor) were collected using an oceanographic water sampler (Seabird 911 CTD Plus, Sea-Bird Electronics, Bellevue, WA, USA). Ancillary parameters such as temperature and salinity were recorded on board simultaneously. Also, surface (0-1 cm) sediment and core sediment samples (between 20 and 50 cm long) were collected at part of the stations (Fig. 1b and 1c). The water samples were filtered with 200 µm Nylon sieves to remove zooplankton and subsequently filtered with 0.45 µm polyethersulfone filters, which were treated according to the following four steps: 1) cleaned with 1:1000 HCl for 24 h; 2) rinsed with Milli-Q water to achieve a neutral pH; 3) oven-dried at 45 °C for 72 h; 4) weighed after cooling in a dryer with desiccant. The filters were stored at -20 °C for determination of suspended particulate matter (SPM) and BSi, and filtrates were stored at 4 °C after adding drops of chloroform for determination of DSi.
In addition, pre-weighed water samples were filtered with 0.70 µm GF/F glass-fiber filters (Whatman, Maidstone, UK), which were also pre-cleaned according to the following four steps: 1) cleaned with 1:1000 HCl for 24 h; 2) rinsed with Milli-Q water to achieve a neutral pH; 3) burned at 450 °C for 4 h; 4) weighed after cooling in a dryer with desiccant. The filters were stored at -20 °C for determination of suspended particulate organic carbon (POC), and filtrates were stored at -20 °C for determination of dissolved organic carbon (DOC) in a later stage. Sampling expeditions were also carried out at the Lijin Station (Shandong Province) at the Yellow River (Fig. 1a) during a full hydrological year in 2013-2014. Water samples were collected for DSi, BSi, DOC and POC measurements once per month at 20 cm below the surface, with at least 3 sampling points across the river main channel.

Surface sediment samples (0-1 cm) were collected with a box sediment sampler after removing the overlying water, and then packed into sealed bags and frozen at -20 °C for determination of BSi and total organic carbon (TOC). At the same time, sediment core samples were collected using a sampling tube with an inner diameter of 9 cm at some stations (Fig. 1b). Cores were divided into 1 cm intervals after the overlying water was collected using syringes (13 mm, 0.45 μm, PTFE) with needle tubing. The pore water of each subsample was separated by centrifugation and preserved as above for DSi analyses; finally, subsamples were stored at -20 °C before BSi and TOC analysis. Water samples were pretreated as described above.

DSi was analyzed with a QuAAtro Autoanalyzer, using the silicomolybdic blue method, with a detection limit of 0.030 μmol l⁻¹ and a relative standard deviation <0.3%. BSi in SPM was extracted with NaOH solution (0.2 mol l⁻¹, 100 °C, 40 min) and corrected for mineral interference using Si:Al ratios (Ragueneau et al., 2005), while the BSi content in sediment was measured using the alkaline extraction method (1% Na2CO3, 85 °C, extraction during 8 hours, during which the extract is sampled and analyzed every hour) (DeMaster, 1981), with a measurement uncertainty of 0.25% and a relative standard deviation <0.3%. Reactive silica (RSi) is the sum of DSi and BSi. DOC was determined using a high-temperature catalytic oxidation technique (Zhang et al., 2013) with a TOC analyzer (TOC-CCPH, Shimadzu, Japan); the relative standard deviation is <2%. For POC determination, 3-5 drops of 2 mol l⁻¹ HCl were added to the sample filters in a closed container with HCl fumes for 24 h to remove inorganic carbon, and the filters were then dried at 45 °C (Zhang et al., 2013). Subsequently, POC was determined with an elemental analyzer (Euro Vector EA3000, Via Tortona, Milan, Italy) with a standard deviation <10%. TOC in sediments was analyzed with the same elemental analyzer. Before measurement, freeze-dried sediment samples were decalcified using 4 mol l⁻¹ HCl and subsequently rinsed with de-ionized water (6-8 times) to achieve a neutral pH; the pretreated sediments were then dried overnight at 60 °C (Hu et al., 2009) for TOC analysis.

The Si and C budgets of the Bohai Sea are estimated using a steady-state box model, focusing on the reactive Si (RSi, the sum of DSi and BSi) and OC in the water column and accounting for the major hydrological, chemical and biological processes. In this calculation, we use estimates for the fluxes of RSi and OC into and out of the Bohai Sea, i.e. exchange through the Bohai Strait (FE; FE = input to the Bohai Sea (FYTB) − output to the Yellow Sea (FBTY)), riverine input (FR), surface runoff (FSR) from surficial runoff and small rivers not included in FR, submarine groundwater discharge (FGW), atmospheric input (FA), flux from porewaters (FB) and sedimentation (FS) (Table 1). Internal processes such as primary production (FP), regeneration (FRC), respiration and degradation are also taken into account. FE, FB and FS are based on measurements described in this paper, as well as the major contributions of the Yellow River to FR, while the other fluxes are based on literature values. The various budget terms are discussed in more detail below.

Water budget

The water budget of the Bohai Sea (Fig. 2) provides the basis for the calculation of the Si and C budgets. The hydrography of the Bohai Sea is largely determined by the Bohai Sea Coastal Current (BSCC) and exchange with the Yellow Sea. River discharge, precipitation, submarine groundwater discharge, surface runoff and evaporation are taken into account in the water budget calculation for the shelf in steady state as follows:

Q_R + Q_A + Q_YTB + Q_GW + Q_SR = Q_BTY + Q_EVA,

where Q are water fluxes (km³ yr⁻¹) and the subscripts R, A, YTB, GW, SR, BTY and EVA denote the river discharge, atmospheric deposition, Yellow Sea inflow, submarine groundwater, surface runoff, Bohai Sea outflow and evaporation, respectively (Table 1, Fig. 2). Closing this balance for the unknown runoff term is illustrated below.
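As a check, the steady-state balance can be closed for the runoff term with one line of arithmetic (Python below; the flux values are those quoted in this section and in Table 1).

# Water budget closure for the Bohai Sea (km^3 yr^-1, Table 1)
Q_R, Q_A, Q_YTB, Q_GW = 34, 34, 442, 44          # inputs
Q_BTY, Q_EVA = 470, 85                           # outputs
Q_SR = (Q_BTY + Q_EVA) - (Q_R + Q_A + Q_YTB + Q_GW)
print(Q_SR)                                      # -> 1, the surface runoff term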
The estimated river input is about 34 km³ yr⁻¹ for the 6 major rivers discharging into the Bohai Sea (Table 1), and precipitation and evaporation amount to 34 km³ yr⁻¹ and 85 km³ yr⁻¹, respectively, based on Martin et al. (1993) and Lin et al. (2001). The water flux from the Bohai to the Yellow Sea (BTY) is 470 km³ yr⁻¹, and with a reverse flux (YTB) of 442 km³ yr⁻¹ there is a net export of 28 km³ yr⁻¹ from the Bohai Sea to the Yellow Sea (Liu et al., 2003a). The submarine groundwater input is about 44 km³ yr⁻¹ based on estimates of submarine groundwater discharge in the Yellow River delta (Peterson et al., 2008). The budget yields an estimate for surface runoff (QSR) of 1 km³ yr⁻¹, which includes the discharge of small streams not covered by the above river discharge.

We estimated the DSi, BSi, DOC and POC fluxes for the 6 major rivers discharging into the Bohai Sea (Table 1 and Fig. 1). Fluxes were calculated with long-term monitoring data. On the basis of monthly data for BSi, POC and SPM, we found that the fractions of BSi and POC in the suspended solids increase exponentially and linearly with SPM concentration, respectively. The BSi and POC concentrations for rivers with missing or scant data were estimated with regression equations fitted to these relations.

Atmospheric deposition

Atmospheric input to the Bohai Sea was calculated from the DSi concentration in precipitation (Martin et al., 1993; Zhang et al., 2004) and dry deposition (Zhang et al., 2004), combined with the area of the Bohai Sea (77300 km²). POC in the air mainly occurs in particulate matter with grain size <2.5 μm (Chen et al., 1997), and the deposition rate of aerosol is about 0.001 m s⁻¹ (Duce et al., 1991); the POC concentration in aerosol over the Bohai Sea was taken from the base station of Chang Island in the Bohai Strait (Feng et al., 2007), and DOC in coastal rainwater is from Willey et al. (2000). Rainfall and aerosols have low BSi concentrations and can be neglected as sources (Tréguer and De La Rocha, 2013).

Exchanges between the Bohai and Yellow Seas

Water exchange between the Bohai and Yellow Sea is driven by the BSCC in the southwest of the Bohai Sea and the Yellow Sea Warm Current (YSWC) in the Northern Yellow Sea (Fig. 1, Table 1). The RSi and OC fluxes through the Bohai Strait were calculated using the water flux together with the measured RSi and OC concentration data from the Southern Bohai Sea and the Northern Yellow Sea (Table 1).

Benthic flux at the sediment-water interface

The benthic flux of DSi at the sediment-water interface was calculated based on Fick's first law (Berner, 1980) according to

J_F = φ D_s (∂C/∂x),

where J_F represents the diffusion rate (mmol m⁻² d⁻¹), φ is the porosity of the sediment, D_s is the bulk sediment diffusion coefficient and ∂C/∂x is the DSi gradient in the pore water at the interface. As no direct measurement data are available, we estimated the DOC flux from the benthic flux of DSi to the water column based on the molar ratio of BSi : TOC in surface sediments of 0.56.

Sedimentation

Sedimentation of BSi and OC in the Bohai Sea was calculated from the accumulation rate and the surface area (Ingall and Jahnke, 1994; Liu et al., 2005). The accumulation rates were based on the following equations:

R_BSi = (C_BSi / 100) × MAR / 28,  R_OC = (C_TOC / 100) × MAR / 12,

where R_BSi and R_OC represent the accumulation rates of BSi and OC (mol m⁻² yr⁻¹), C_BSi and C_TOC represent the BSi and TOC contents in surface sediments (%), MAR is the mass accumulation rate of the sediment (g m⁻² yr⁻¹), and 28 and 12 are the molar weights of Si and C, respectively. A compact numerical rendering of these formulas is given below.
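The two formulas of this section can be rendered in a few lines; the porosity, diffusion coefficient and mass accumulation rate below are placeholders, while the molar masses and the average sediment contents (Table 3) are taken from the text.

def fick_flux(phi, Ds, dC_dx):
    # J_F = phi * Ds * dC/dx: diffusive DSi flux at the interface (Fick's law)
    return phi * Ds * dC_dx

def accumulation_rate(content_pct, MAR, molar_mass):
    # R = (C / 100) * MAR / M, in mol m^-2 yr^-1, with C in percent
    return content_pct / 100.0 * MAR / molar_mass

print(fick_flux(phi=0.8, Ds=5e-6, dC_dx=100.0))           # placeholder inputs
print(accumulation_rate(0.4, MAR=2000.0, molar_mass=28))  # R_BSi, MAR assumed
print(accumulation_rate(0.3, MAR=2000.0, molar_mass=12))  # R_OC, MAR assumed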
Submarine groundwater discharge and surface runoff

The submarine groundwater DSi flux into the Bohai Sea was calculated from the water flux obtained from ²²⁸Ra and ²²⁶Ra mass balance models (Peterson et al., 2008) and the DSi concentration in groundwater (Lin et al., 2011). As there are no data on DOC input to the Bohai Sea via submarine groundwater, we assumed that the DOC concentration in submarine groundwater equals that in rivers, based on Barrón et al. (2015). Similar to the water budget, DSi and OC inputs from surface runoff and from rivers not included in the large-river inputs (Table 1) were obtained as a result of the budget calculation.

Primary production

Primary production was estimated from the average primary production in the euphotic layer, obtained by integrating the seasonal data from 1998 to 2008 estimated for the total area of the Bohai Sea by satellite remote sensing technology calibrated against measured productivity (Tan et al., 2011). The rates of DSi uptake by phytoplankton and the BSi regeneration rate were calculated using the Redfield ratio (C:Si = 106:15, atom basis; Brzezinski, 1985); OC respiration was calculated according to Wei et al. (2004), who demonstrated that respiration accounts for 78% of primary production in the Bohai Sea.

Distribution of RSi and OC in the water column

The DSi concentrations vary strongly in space and time; those in fall exceed those in spring (Table 2). In spring, the distribution of DSi in surface water is similar to that in bottom water, and DSi concentrations are fairly low in the northwestern part of the Bohai Sea, but high in the southeastern part. In autumn, the DSi concentration in surface water is lower in the central part of the Bohai Sea than in other areas, with fairly high levels in Laizhou Bay and the Bohai Strait (Fig. 3). The BSi distributions are similar to those of DSi; in fall the BSi concentration exceeds that in spring by a factor of four (Table 2). The DOC concentration in fall is slightly higher than that in spring (Table 2). In spring, the DOC concentration in surface water is lower than in bottom water, but the spatial distributions of DOC in autumn and spring are similar, with fairly high concentrations in the nearshore and low ones in the offshore areas. In fall, DOC concentrations show only small spatial variability (Fig. 3). The POC concentrations and their spatial distributions in spring are close to those in fall (Table 2), with fairly high concentrations in the western part of the Bohai Sea and the Yellow River estuary (Fig. 3).

Distribution of RSi and OC in the sediment

The average BSi content in the surface sediments is 0.4% and the average TOC content is 0.3%, with a large variability in both BSi and TOC (Table 3). The spatial pattern and variability of BSi in sediments are similar to those of TOC, with high concentrations in the mud area of the Bohai Sea and the area adjacent to the Yellow River estuary (Fig. 4).

Budget of RSi and OC in the Bohai Sea

The estimated riverine RSi and OC fluxes are, respectively, 5.0 Gmol yr⁻¹ and 38 Gmol yr⁻¹, and DSi and DOC account for 54% and 9% of the total RSi and OC fluxes, respectively.
370 DOC is the dominant (>95%) form of OC in the world's oceans (Reeburgh et al., 1997), and a little less so (89%) in the Bohai Sea.The spatial distributions of DOC and POC are similar in the Bohai Sea, both being affected by the same processes, such as input from land by rivers, primary production, biological action, sediment resuspension and many other factors.Our results show a significant negative 375 correlation between POC concentrations and salinity (r =−0.430, p < 0.05, in spring; r =−0.348, p < 0.01, in autumn), indicating that POC concentrations in the coastal areas exceed those in the high salinity waters and that the distribution of POC is largely determined by the terrestrial input.The POC distribution is also affected by sediment resuspension in ocean margins (Zhu et al., 2006), which may explain why POC in 380 bottom water is generally higher than in surface water. The average molar Si : C ratio of BSi and POC in the Bohai Sea of 0.12 is close to that of diatoms in coastal waters (Brzezinski, 1985).This means that BSi is mainly from marine primary production by diatoms, which is consistent with the results from 385 Jiaozhou Bay (Liu et al., 2008a) and East China Sea (Liu et al., 2005).The C : N atomic ratio in SPM ranges from 1 to 10, with an average value of 5, indicating that OC also originates from marine phytoplankton production. Distribution of RSi and OC in sediments 390 There are no differences of both BSi and TOC among seasons at the 95% confidence level.The BSi content of surface sediments in the Bohai Sea is similar to that in the continental shelves in Eastern China (Liu et al., 2009) Qiu, 1997) and the equatorial Pacific Ocean (Piela et al., 2012). High concentrations of both BSi and TOC concentrations in the mud area of the Bohai Sea (Fig. 4) suggest that the sediment grain size and hydrodynamic setting have an important influence on the preservation of BSi.BSi content in the sediment is much 400 lower than that in SPM (0.1%-3.0%, average 0.8%), which indicates that BSi in particles has been degraded during sedimentation and burial.Meanwhile, the average Si : C ratio in sediments of 0.56 is much higher than that in suspended particulate matter.This confirms that degradation rate of OC in the ocean is faster than that of BSi (Ragueneau et al., 2000) due to the lower preservation efficiency of autogenetic 405 OC than that of autogenic BSi (Muller-Karger et al., 2005;Tréguer and De La Rocha, 2013). 410 The budget of RSi shows that the benthic flux across the sediment-water interface is the major source of reactive Si in the Bohai Sea water mass, contributing 49% of the total RSi input (Table 1, Fig. 5).The next largest source is from submarine groundwater, comprising 26% of total inputs.The river input accounts for 17%, and all other inputs are minor (surface runoff, 7%; atmospheric deposition, <1%).The 415 dominant output fluxes of RSi in the water column is by the sedimentation and export to the Yellow Sea, contributing 99% and 1% to Si removal in the budget, respectively. 
Overall, considering all exogenous inputs of OC into the Bohai Sea, the riverine flux alone accounts for 47%, followed by the benthic flux of DOC, accounting for 34%; the remaining 19% is from surface runoff (8%), submarine groundwater discharge (6%) and atmospheric deposition (5%). The dominant outputs of OC in the Bohai Sea are sedimentation (75% of total output) and the outflow to the Yellow Sea (25%). The BSi share of 46% in the riverine export of total RSi is much higher than the average value for global rivers (15%) (Laruelle et al., 2009). POC comprises 90% of the riverine OC, which also exceeds the average for world rivers of 40% (Hedges et al., 1997). The Yellow River export to the Bohai Sea is 68% of total exogenous input for RSi and 75% for OC. The water exchange between the Bohai and Yellow Seas has only a minor influence on the budget of RSi and OC; however, it has an important effect on the distribution, transport, transformation and retention time of RSi and OC. The benthic recycling of Si in the sediment is a particularly important flux into the DSi pool in the water column, which confirms earlier studies (Van Cappellen et al., 1997). DSi concentration gradients in the pore water at all studied stations show a diffusive flux from sediment to water column. The diffusion rates vary from 0.38 to 0.62 mmol m⁻² d⁻¹, similar to previously reported data (Liu et al., 2011). The high benthic flux plays an important role in maintaining the level of primary production in the water column and also results in a concentration gradient, with higher DSi concentrations in bottom than in surface waters. The DOC in pore water is also an important source of DOC in the water column (Burdige et al., 1999; Barrón et al., 2015). Another way to estimate the benthic DOC flux is by assuming DOC diffusion rates to be similar to those in bare sediments (0.9 mmol m⁻² d⁻¹) (Burdige et al., 1999). This yields a DOC flux of 26 Gmol yr⁻¹, which confirms our estimate (27 Gmol yr⁻¹) based on the assumed BSi : TOC ratio of 0.56. Based on the difference between the diffusive flux of DSi and the BSi sedimentation, the net burial flux of BSi is 15 Gmol yr⁻¹, which is 3.3% of the total primary production. This large BSi burial share exceeds the average value for the world ocean (2.6%) (Tréguer and De La Rocha, 2013). The gross burial efficiency of BSi is 50% in the Bohai Sea, similar to the East China Sea (36-97%; Liu et al., 2005) and higher than the average of 17-20% in the world's oceans (Bernard et al., 2010; Tréguer and De La Rocha, 2013). The net burial of OC is 33 Gmol yr⁻¹, or 1.0% of the primary production, which is also much higher than in the world ocean (0.3%; Muller-Karger et al., 2005). This shows that the Bohai Sea is a potential sink for both Si and C.
Previous studies showed that nutrient inputs from the submarine groundwater were 1-2 times higher than those associated with river discharge into the Bohai Sea (Liu et al., 2011). Relative contributions from submarine groundwater and riverine Si input are similar to those in the Yellow Sea (Kim et al., 2005) and the Mediterranean Sea (Rodellas et al., 2015), a semi-enclosed sea like the Bohai Sea. Our estimate for submarine groundwater DSi input exceeds riverine input, which agrees with the above studies, while the DOC input from submarine groundwater is less important than riverine input.

Response of primary production to changing riverine RSi transport

The DSi concentration in the Bohai Sea decreased in the period 1980-1990 and has been stable more recently. The average DSi concentration in the Bohai Sea in the early 2000s was only 1/3 of that in the 1980s (Tang et al., 2003; Ning et al., 2010; Liu et al., 2011; Liu, 2015). Meanwhile, the DIN concentration increased from 1.7 μmol l-1 in the 1980s (Tang et al., 2003) to 5.1 μmol l-1 in 2000 (Li et al., 2003) and 10.6 μmol l-1 in 2012. Nutrient stoichiometry has changed significantly, with molar Si : N ratios of 14 in the 1980s, 1.5 in 2000 and 0.6 in 2012.

The Bohai Sea has therefore changed from an N-limited ecosystem in the 1980s to a Si-limited system in recent years, and the BSi production by diatoms now largely depends on available silica (Tang et al., 2003; Ning et al., 2010; Liu et al., 2011). The Yellow River discharge represents more than 70% of the total freshwater discharge into the Bohai Sea (33 km3 yr-1, Table 1).

Using the factor of 3 increase of RSi river export (from 0.9 prior to 2002 to 4.3 Gmol yr-1 at present, see Fig. 5) with the regression equation in Fig. 7 results in an increase of primary production by 10% since 2002 in comparison with the levels in 2000 and 2001. This is supported by data indicating that DOC in the Bohai Sea increased from 2.1 mg l-1 before the Yellow River water-sediment regulation in spring (Zhang et al., 2006) to 2.6 mg l-1 (Chen, 2013) and 3.9 mg l-1 (this study) afterwards. Likewise, TOC concentrations in the Bohai Sea have been increasing over the same period, which indicates that increasing Si loadings may enhance both TOC and DOC levels in the Bohai Sea, particularly in the part close to the river mouth.

Conclusions

The distributions of RSi and OC in the Bohai Sea show seasonal and regional variation, and are mainly affected by the riverine input, primary production and water exchange between the Bohai Sea and Yellow Sea. BSi and TOC are mainly from marine primary production, and areas with high BSi and TOC contents in the sediments are mainly in the estuarine and mud areas.

The riverine flux contributes 47% of all exogenous OC input to the Bohai Sea, followed by the benthic flux of DOC, accounting for 34%; surface runoff (8%), submarine groundwater input (6%), and atmospheric deposition (5%) represent the remaining 19%. The dominant outputs of OC in the Bohai Sea are sedimentation (75% of total output) and the outflow to the Yellow Sea (25%). The Bohai Sea is a sink for both Si and C, with net burial of BSi and OC in sediments amounting to 3.3% and 1.0% of primary production, respectively.
DSi in the Bohai Sea decreased and then remained stable over the last three decades.

Sampling expeditions were also carried out at the Lijin Station (Shandong Province) at the Yellow River (Fig. 1a) during a full hydrological year in 2013-2014. Water samples were collected for DSi, BSi, DOC and POC measurements once per month at 20 cm below the surface, with at least 3 sampling points across the river main channel.

The Si and C budgets of the Bohai Sea are estimated using a steady-state box model, focusing on the reactive Si (RSi, the sum of DSi and BSi) and OC in the water column and accounting for the major hydrological, chemical and biological processes. In this calculation, we use estimates for the fluxes of RSi and OC into and out of the Bohai Sea, i.e., exchange through the Bohai Strait (FE; FE = input to the Bohai Sea (FYTE) - output to the Yellow Sea (FBTY)), riverine input (FR), surface runoff (FSR) from surficial runoff and small rivers not included in FR, submarine groundwater discharge (FGW), atmospheric input (FA), flux from porewaters (FB) and sedimentation (FS) (Table 1).

Internal processes such as primary production (FP), regeneration (FRC), respiration and degradation are also taken into account. FE, FB, and FS are based on measurements described in this paper, as are the major contributions of the Yellow River to FR, while the other fluxes are based on literature values. The various budget terms are discussed in more detail below.

2.3.7 Primary production

Primary production was estimated from the average primary production in the euphotic layer, obtained by integrating the seasonal data from 1998 to 2008.

DSi concentrations are fairly low in the northwestern part of the Bohai Sea, but high in the southeastern part, particularly in the Bohai Strait. In autumn, the DSi concentration in surface water is lower in the central part of the Bohai Sea than in other areas, with fairly high levels in the Laizhou Bay and Bohai Strait (Fig. 2).
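A minimal sketch of the steady-state bookkeeping this box model implies, using the RSi fluxes quoted elsewhere in this paper where available; FR and FA are back-calculated from the stated percentage shares and are therefore illustrative assumptions, not reported values.

```python
# Steady-state RSi box model check: sum(inputs) ~= sum(outputs).
# FB, FGW, FSR, FYTE, FBTY, FS are values quoted in the text (Gmol yr^-1);
# FR and FA are back-calculated from the stated 17% / <1% shares (assumptions).
inputs = {
    "benthic flux FB": 15.0,
    "groundwater FGW": 8.0,
    "river FR": 5.2,          # assumed from the 17% share of exogenous inputs
    "surface runoff FSR": 2.1,
    "atmospheric FA": 0.3,    # assumed from the <1% share
    "Yellow Sea inflow FYTE": 3.2,
}
outputs = {
    "sedimentation FS": 30.0,
    "outflow to Yellow Sea FBTY": 3.5,
}
total_in, total_out = sum(inputs.values()), sum(outputs.values())
print(f"in = {total_in:.1f}, out = {total_out:.1f}, "
      f"residual = {total_in - total_out:+.1f} Gmol yr^-1")
# At steady state the residual should be ~0; a small remainder here
# reflects rounding in the quoted fluxes.
```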
BSi concentrations in surface water are fairly high in the Bohai Bay, Laizhou Bay and Yellow River estuary, and lower in the central part of the Bohai Sea. The distribution of BSi in the bottom water differs from that in the surface water, with relatively high BSi concentrations occurring in the central area of the Bohai Sea. In autumn, the distribution of BSi in surface water is similar to that in bottom water, with fairly high concentrations in the Yellow River estuary and other nearshore areas (Fig. 3).

The inputs of RSi and OC from the Yellow Sea into the Bohai Sea are 3.2 Gmol yr-1 and 120 Gmol yr-1, respectively, while the output fluxes of RSi (3.5 Gmol yr-1) and OC (140 Gmol yr-1) from the Bohai Sea to the Yellow Sea are similar, resulting in net outputs of RSi and OC from the Bohai Sea of 0.3 Gmol yr-1 and 20 Gmol yr-1, respectively.

Internal cycling of Si and OC are important terms in the budget. Based on primary production in the euphotic zone of the Bohai Sea, C sequestration is 3280 Gmol yr-1, which means that 460 Gmol yr-1 of BSi is taken up according to the Redfield ratio, and 2560 Gmol of OC (78%, see 2.3.7) is consumed by respiration. The estimated sedimentation fluxes of BSi and OC are 30 and 60 Gmol yr-1, respectively. Recycling of BSi in the water column amounts to 430 Gmol yr-1 of DSi released. Biodegradation and photo-oxidation of OC in the water column amount to about 3220 Gmol yr-1.

The benthic fluxes of DSi and DOC at the sediment-water interface are further important sources, amounting to 15 Gmol yr-1 and 27 Gmol yr-1, respectively. The calculated submarine groundwater discharge into the Bohai Sea (Table 1) amounts to 8.0 Gmol yr-1 for DSi and 4.4 Gmol yr-1 for DOC. The estimated surface runoff fluxes are 2.1 Gmol yr-1 for RSi and 6.4 Gmol yr-1 for OC.

Distribution of RSi and OC in the water column

The DSi distribution is largely affected by the circulation system of the Bohai Sea, particularly in the area adjacent to the Bohai Strait. The high DSi concentration in the southeastern part of the Bohai Sea is due to high DSi in the water mass coming in from the Northern Yellow Sea through the Bohai Strait. The DSi distribution is also influenced by the terrestrial input, particularly in the area near the mouth of the Yellow River.
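A minimal sketch of the arithmetic linking these internal-cycling terms; the Si : C uptake ratio of ~0.14 is implied by the quoted 460/3280 figures rather than stated explicitly, so treat it as an assumption.

```python
# Internal-cycling arithmetic for the Bohai Sea budget (Gmol yr^-1).
PP_C = 3280.0               # C fixed by primary production
SI_TO_C = 460.0 / 3280.0    # ~0.14, implied diatom Si:C uptake ratio (assumption)
RESPIRED_FRACTION = 0.78    # share of fixed OC respired (see 2.3.7)

bsi_uptake = PP_C * SI_TO_C              # ~460 Gmol Si yr^-1
oc_respired = PP_C * RESPIRED_FRACTION   # ~2560 Gmol C yr^-1
print(f"BSi uptake ~ {bsi_uptake:.0f}, OC respired ~ {oc_respired:.0f} Gmol yr^-1")
```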
Since the water residence time in the Bohai Sea is about 3 years (Liu et al., 2012), changes of riverine Si input in the Bohai Sea would have a long-lasting influence on the ecosystem's functioning. Statistical analysis suggests that there is a significant relationship between the DSi and RSi fluxes of the Yellow River in year n-1 and primary production in the Bohai Sea in year n (DSi: p < 0.005; RSi: p = 0.02) (Figs. 6 and 7), reflecting the long residence time of Si. This also suggests that changing terrestrial Si loadings have a direct and long-lasting influence on primary production in the Bohai Sea.

Since 2002, the water discharge and sediment load of the Yellow River have increased significantly compared with the late 1990s due to the water and sediment regulation (Fig. 6). The annual DSi flux of the Yellow River in July has increased 5-10 fold and the RSi flux by a factor of 3 since 2002; since that year, the water-sediment regulation in spring has led to peak discharge events (Fig. 6) (Gong et al., 2015; Liu, 2015).

The benthic diffusion in the Bohai Sea is the major source of external Si to the water column, accounting for 49% of the exogenous Si inputs, followed by the submarine groundwater discharge (26%), riverine input (17%), surface runoff (7%), and atmospheric deposition (<1%). The dominant removal processes of RSi in the Bohai Sea are BSi sedimentation (99% of total output) and the outflow of RSi to the Yellow Sea (1%).

Earth surface processes modified by human activities and riverine load variations change the exogenous Si input and thus primary production. Primary production in the Bohai Sea has increased by 10% since 2002, as a result of the increasing riverine RSi input from the Yellow River due to water-sediment regulation.

A quantitative mechanistic understanding of the key processes controlling Si flow and the preservation of C in the land-ocean continuum is needed. This mechanistic understanding is necessary to parameterize the various processes involving C and Si and their sensitivity to external perturbations at the larger scales of earth system models. At present, this lack of understanding limits our ability to predict the present and future contribution of the aquatic continuum fluxes to the global C and Si budget.
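A minimal sketch of the lagged regression PP(n) = a·FRSi(n-1) + PP0 used above (see the Fig. 7 caption); the paired values below are hypothetical placeholders, not the Lijin/Bohai data.

```python
# Lagged linear regression of primary production on the previous year's
# RSi loading, PP(n) = a * FRSi(n-1) + PP0. Placeholder data, for shape only.
import numpy as np
from scipy.stats import linregress

frsi_lag1 = np.array([0.9, 1.4, 2.0, 2.8, 3.5, 4.3])       # Gmol yr^-1, year n-1
pp        = np.array([2980, 3030, 3090, 3160, 3230, 3300])  # Gmol C yr^-1, year n

fit = linregress(frsi_lag1, pp)
print(f"a = {fit.slope:.0f} Gmol Gmol^-1, PP0 = {fit.intercept:.0f} Gmol C yr^-1, "
      f"p = {fit.pvalue:.3f}")
# Predicted change for the post-2002 RSi increase (0.9 -> 4.3 Gmol yr^-1),
# which with these placeholder values comes out near 10% of PP:
print(f"dPP = {fit.slope * (4.3 - 0.9):.0f} Gmol C yr^-1")
```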
Figure 1. Rivers, mud area, circulation system and sampling stations in the Bohai Sea (I = Liaodong Bay, II = Bohai Bay, and III = Laizhou Bay). The mud area and circulation system are redrawn according to Hu et al. (2012) and Sündermann and Feng (2004).

Figure 6. Data for the Lijin station in the lower Yellow River for the period 2000-2015 representing (a) DSi and BSi fluxes and (b) chlorophyll-a and primary production in the Bohai Sea with standard deviation. DSi in the Yellow River is from Gong (2015), Ran et al. (2015) and this study. BSi data are calculated by Equation 2. Chlorophyll-a and primary production (PP) in 2000-2008 in the Bohai Sea are from Tan et al. (2011); data for 2010 are from Chen et al. (2013) and Zhao et al. (2015).

Figure 7. Relationship between the annual RSi loading of the Yellow River and primary production in the Bohai Sea obtained by linear regression (black markers represent PP and silica loading in the same year; red markers represent PP in year n and RSi loading in year n-1). PP data correspond to the data in Figure 6 recalculated to Gmol C yr-1. The regression equation is PP = a FRSi + PP0, where PP is the primary production (Gmol C yr-1), PP0 is the intercept, representing the background primary production from all RSi sources except the Yellow River, FRSi is the RSi flux of the Yellow River (Gmol yr-1), and a is a constant (Gmol Gmol-1).

Table 1. Main fluxes of the reactive silica and organic carbon budget in the Bohai Sea.

Table 2. Reactive silica and organic carbon concentrations in surface water (0.5 m), bottom water (<2 m from the sea floor) and averaged over the water column in the Bohai Sea in 2012.

Table 3. Biogenic silica and total organic carbon contents of the surface sediment and core sediment in the Bohai Sea.
2018-12-26T18:36:40.583Z
2016-02-12T00:00:00.000
{ "year": 2016, "sha1": "98ee878f8f3dfacc6c9796375b414e0b307b1d3c", "oa_license": "CCBY", "oa_url": "https://www.biogeosciences-discuss.net/bg-2016-42/bg-2016-42.pdf", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "98ee878f8f3dfacc6c9796375b414e0b307b1d3c", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Geology" ] }
25894467
pes2o/s2orc
v3-fos-license
A CARE-compliant article: a case of retrograde intussusception with Uncut-Roux-en-Y anastomosis after radical total gastrectomy

Abstract Rationale: Postoperative intussusception is an unusual clinical entity and is rarely encountered as a complication following gastrectomy, especially radical total gastrectomy. Patient concerns: A 74-year-old woman was admitted to our hospital with complaints of melena and hematemesis. Endoscopic biopsy confirmed a poorly differentiated adenocarcinoma of the stomach. Radical total gastrectomy with Uncut Roux-en-Y reconstruction was performed. On the third postoperative day (POD3), the patient complained of paroxysmal pain around the umbilicus, accompanied by nausea and vomiting. Diagnosis: Retrograde intussusception after radical total gastrectomy with Uncut Roux-en-Y reconstruction, established by exploratory laparotomy. Interventions: On POD4, abdominal computed tomography (CT) showed small bowel dilatation and fluid accumulation in the upper abdominal cavity, as well as a small mass of soft tissue on the left side of the pelvis. Small bowel obstruction was considered, and exploratory laparotomy was performed. Retrograde intussusception started just below the jejunojejunal anastomosis, with possible organic lesions, and the involved segment was subsequently resected. Outcomes: The patient recovered well and was discharged 15 days after the second operation. Lessons: This case report was written for 3 purposes: to increase awareness of this complication after radical total gastrectomy with Uncut-Roux-en-Y reconstruction; to emphasize early diagnosis through clinical manifestation, physical examination, and auxiliary examination with abdominal CT; and lastly, to emphasize that a reasonable surgical procedure should be performed immediately after diagnosis.

Editor: N/A. YZ, FW and YJI contributed equally to this work and should be considered co-first authors. No particular individuals or organizations need to be acknowledged for this study. Ethical approval was not required because the patient and her family signed informed consent before the key diagnostic and treatment procedures.

Introduction

Postoperative intussusception is an unusual clinical entity and is rarely encountered as a complication following gastrectomy, especially radical total gastrectomy with Uncut-Roux-en-Y reconstruction. Here, we report a case of retrograde jejunojejunal intussusception at an Uncut-Roux-en-Y anastomosis occurring 3 days after radical total gastrectomy, and include a review of the literature on this unusual complication.

Case report

A 74-year-old woman was transferred to our institution with complaints of melena and hematemesis. The patient had no significant medical history. On admission, she was hemodynamically stable. Hypoproteinemia (albumin 33.0 g/L) and anemia (hemoglobin 75 g/L) were noted on blood examination. Total abdominal enhanced computed tomography (CT) revealed a thickened and enhancing gastric wall on the lesser curvature side of the gastric fundus, without distant metastasis. On gastroduodenoscopic examination, a poorly differentiated adenocarcinoma of the stomach was found. Chest X-ray examination showed no pulmonary metastasis.

Radical total gastrectomy with Uncut-Roux-en-Y (Fig. 1B) reconstruction was performed after blood transfusion and albumin infusion. The duodenal stump was closed with staples (No. 26 Prius star), with the stapler anvil placed into the bottom of the esophageal stump.
The jejunum was opened at a distance of ∼15 cm from the ligament of Treitz, and the stapler was inserted into the distal jejunum at approximately 30 cm and anterior to the colon to complete the anastomosis of the esophagus and jejunum (AEJ). The efferent loop was opened ∼45 cm from the AEJ, and a side-to-side jejunojejunostomy was performed with Albert-Lembert sutures to divert the duodenal fluid. The afferent loop near the AEJ was ligated with a double No. 7 suture, and the Uncut-Roux-en-Y reconstruction was completed. A 10 cm nasojejunal feeding tube (NJFT) was inserted into the distal part of the efferent loop of the jejunostomy for early postoperative enteral nutrition.

Two days after the operation, the patient did not complain of abnormal symptoms. Normal saline (250 mL) was slowly dripped through the NJFT. However, on POD3, the patient complained of paroxysmal pain around the umbilicus, accompanied by nausea and vomiting, which persisted until POD6 without any apparent relief. During that time, 300 to 950 mL of black-green small intestinal juice was drained via the NJFT daily. Laboratory tests indicated no remarkable findings except for elevated C-reactive protein (up to 77.66 mg/L). Physical examination revealed a lump in the left lower abdomen, which became smaller and even disappeared after vomiting. On POD4, abdominal CT showed small bowel dilatation and fluid accumulation in the upper abdominal cavity, as well as a small mass of soft tissue on the left side of the pelvis, which was considered a small bowel obstruction (Fig. 1C). Active conservative treatment was given to the patient, including intestinal decompression through the NJFT, anti-infection therapy, maintenance of water and electrolyte balance, and adequate parenteral nutrition; however, the abdominal symptoms were not significantly relieved. On POD6, abdominal CT showed that the small mass of soft tissue was still present, with increased abdominal cavity fluid accumulation (Figs. 1D, 2B). An emergency laparotomy was performed, and a retrograde jejunojejunal intussusception was found ∼20 cm distal to the jejunojejunal anastomosis (Fig. 1A). After careful manual reduction, the intussusception came loose. The distal jejunal segment (intussusceptum) was ∼10 cm in length, with good blood supply and peristalsis. However, due to the risk of organic lesions, the intussuscepted segment was resected. The jejunojejunostomy was performed using Albert-Lembert sutures. No obvious abnormality was found on pathological examination. Four days after the second operation, abdominal CT examination showed no obvious expansion and effusion of the small intestine (Fig. 2C), while the small mass of tissue was no longer present (Fig. 2D). Postoperative recovery was uneventful, and the patient was discharged 15 days after the second operation.

Discussion

Intussusception is primarily a disease of children and infants, with only about 5% of cases occurring in adults. [1,2] Although childhood intussusceptions are idiopathic in 90% of cases, adult intussusceptions have an organic lesion origin in 70% to 90% of cases, with >50% of the lesions reported as malignant. [2,3] In a multicenter study of 44 patients with adult intussusception, 37% of small bowel and 58% of colonic intussusceptions were malignant. [4] Although postoperative intussusception is a rare clinical entity in both age groups, it is also more common in the pediatric population than in adults.
It accounts for 5% to 10% of all cases of postoperative ileus in infancy and childhood, [5] but only ∼1% of cases in adults. [6] Intussusception is an extremely rare complication after gastric surgery, with a reported incidence of <0.1%. [7] Jejunal intussusceptions after total gastrectomy were first reported by Bozzi [8] and are recognized as an uncommon complication, occurring in only 0.07% to 2.1% of patients who undergo gastrectomy. [9] A review of the literature revealed 27 cases of retrograde intussusception occurring after total gastrectomy, including the current case. [10,11] Reported digestive tract reconstructions included the Roux-en-Y and Billroth II methods, with only 1 case involving Uncut-Roux-en-Y reconstruction. Furthermore, while 5 cases developed in the early postoperative period, the current case is the earliest reported, occurring 3 days after the first operation. Other cases developed within 1 to 22 years after surgery. Given the wide time course of postoperative incidence, we should be highly vigilant for this complication at any time after the operation, especially in the early postoperative period.

In adults, the exact mechanism behind intussusception is unknown. In the present case, neither functional nor mechanical causes were identified as leading causes of intussusception. A variety of postoperative conditions, such as adhesions around the suture lines, [1,12] a long intestinal tube, [13] increased intra-abdominal pressure, [7] excessive length of the afferent loop, [9] and reverse peristalsis, [12,14] have been proposed as possible mechanisms of intussusception after gastric surgery, but a prevailing cause has not been confirmed. As the exact cause of postoperative intussusception cannot be confirmed, attention should be focused on the identification and diagnosis of the condition. In fact, determining whether emergency surgery is needed, rather than the causes of intussusception, should be considered the priority, especially for early postoperative patients. Early diagnosis is crucial for early surgical intervention; when the operation is performed within 48 hours of diagnosis, the mortality rate is ∼10%. In contrast, operations delayed beyond 48 hours may be associated with a mortality rate of up to 50%. [15] Abdominal CT is extremely useful for the diagnosis of this condition, with the identification of multiple concentric rings being the characteristic sign of intussusception. [16] Furthermore, CT can provide important information on whether an emergency operation is required.

On POD3 in the present case, the patient complained of paroxysmal pain around the umbilicus, accompanied by nausea and vomiting. Meticulous physical examination revealed a lump in the left lower abdomen, which became smaller and disappeared after vomiting. In addition, the most severe abdominal pain occurred when bowel sounds were the most active. After vomiting, abdominal pain was relieved and intestinal gurgling sounds recovered. Based on the above findings, paralytic ileus was excluded and mechanical bowel obstruction could be diagnosed. Intussusception is one of the causes of mechanical intestinal obstruction. The symptoms of abdominal pain did not ease after active conservative treatment. In addition, abdominal CT showed increases in abdominopelvic cavity fluid over a short time frame (Figs. 2A, B), with a persistent mass. As a result of these signs, emergency surgery was performed.
Endoscopic reduction of jejunogastric intussusception has been suggested in a few selected cases [17]; however, this procedure is associated with a significant risk of recurrence. [18] Surgery is the predominant treatment for jejunojejunal intussusception, though the specific operation depends on the intraoperative findings. Simplicity and effectiveness are emphasized in an emergency operation, with priority given to minimizing trauma, especially for early postoperative patients. In our opinion, it is necessary to resect the intussuscepted segment to avoid recurrence, especially in cases where there are defined organic lesions in the bowel.

This case report was written for 3 purposes: to increase awareness of this complication after radical total gastrectomy with Uncut-Roux-en-Y reconstruction; to emphasize early diagnosis through clinical manifestation, physical examination, and auxiliary examination with abdominal CT; and lastly, to emphasize that a reasonable surgical procedure should be performed immediately after diagnosis.
2018-04-03T02:52:56.486Z
2017-12-01T00:00:00.000
{ "year": 2017, "sha1": "08414dbdfe7558062ff3f8274fa7689699b44cd0", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1097/md.0000000000008982", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "08414dbdfe7558062ff3f8274fa7689699b44cd0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
231961008
pes2o/s2orc
v3-fos-license
Moxibustion against Cyclophosphamide-Induced Premature Ovarian Failure in Rats through Inhibiting NLRP3-/Caspase-1-/GSDMD-Dependent Pyroptosis

Premature ovarian failure (POF) is a clinical term used to describe a condition in which women present with amenorrhoea, hypergonadotropic hypogonadism, and infertility under 40 years old, mainly characterized by ovarian granulosa cell inflammation and death. Pyroptosis is a proinflammatory form of programmed cell death. However, the roles of pyroptosis in POF and the effects of moxibustion (Mox) on pyroptosis in POF have not been elucidated. The aim of the present study was to investigate the protective effect of moxibustion against cyclophosphamide- (CP-) induced POF and to determine the underlying mechanisms. The results indicated that Mox could decrease follicle-stimulating hormone (FSH) and luteinizing hormone (LH) and increase estradiol (E2) in serum, which indicated that it could improve ovarian reserve capacity. Mox also ameliorated CP-induced ovarian injury, accompanied by decreased levels of interleukin-1β (IL-1β), IL-18, and gasdermin D (GSDMD), which are key features of pyroptosis. Further investigation showed that Mox alleviated POF through NLRP3-mediated pyroptosis. On the one hand, Mox directly inhibited TXNIP/NLRP3/caspase-1 signaling-induced pyroptosis, and on the other hand, it indirectly decreased NLRP3, pro-IL-1β, and pro-IL-18 through inhibiting TLR4/MyD88/NF-κB signaling. Our results show that Mox might be a new therapeutic strategy for the treatment of POF.

Introduction

Premature ovarian failure (POF) is a clinical term used to describe a condition in which women present with amenorrhoea, hypergonadotropic hypogonadism, and infertility under 40 years old. The incidence of POF in women under 40 and 30 years of age has been reported to be 1% and 0.1%, respectively [1]. POF is usually characterized by low gonadal hormone (estrogen) levels and high gonadotropin (LH and FSH) levels [2,3]. Secondary amenorrhoea, which usually develops on the basis of POF, makes the patient suffer the psychological distress associated with fertility loss. In addition, untreated ovarian failure carries an increased risk of osteoporosis, cardiovascular disease, and cognitive decline [4,5]. Thus, optimizing ovarian function is important for women with POF. The main mechanism of POF is follicle depletion and dysfunction, manifested by ovarian atrophy and cortical fibrosis [6]. During follicular development, a large number of follicles undergo atresia, a process tightly controlled by a fine balance between survival and death factors [7]. Therefore, it is important to determine the molecular mechanisms of ovarian injury in POF.

Cyclophosphamide (CP), an alkylating agent, is used in numerous cancer chemotherapeutic regimens. Besides the beneficial effect of CP on target areas of the tumor, it can also damage various organs depending on the age and sex of the patient [8]. In clinical practice, female cancer patients treated with CP can experience infertility due to premature ovarian failure [9]. Chemotherapy induces apoptosis of growing follicles and fibrosis of the stromal blood vessels in the ovary [10]. It has a damaging effect on ovarian tissue and increases the risk of POF. Therefore, CP was used in the present study to induce POF in female rats.

Ovarian failure is accompanied by a chronic inflammatory response [11]. Pyroptosis is a form of programmed cell death, and the activation of pyroptosis is closely related to inflammatory processes [12].
The features of pyroptosis are gasdermin family-mediated pore formation in the plasma membrane, cell swelling, and plasma membrane disruption, as well as the release of proinflammatory intracellular contents including IL-1β and interleukin-18 (IL-18). Gasdermin D (GSDMD) is cleaved by activated caspase-1 to trigger pyroptosis by forming membrane pores [13]. Activation of inflammasomes, especially NLR family pyrin domain containing 3 (NLRP3), plays an important role in pyroptosis. NLRP3 promotes caspase-1-mediated maturation of IL-1β and IL-18 and cleavage of GSDMD. The molecular mechanisms of inflammasome activation and pyroptosis are as follows. First, transcription factors promote the production of proinflammatory factors such as pro-IL-1β, pro-IL-18, NLRP3, and caspase-1; this step is mainly regulated by nuclear factor-κB (NF-κB) [14]. Second, the inflammasome complex, including NLRP3, ASC, and procaspase-1, is assembled and activated; this step is mainly regulated by Thioredoxin Interacting Protein (TXNIP) [15]. A long-term chronic inflammatory response, including IL-1β, could induce ovarian cell damage. Therefore, we hypothesized that pyroptosis may play an important role in the progression of ovarian failure.

Moxibustion (Mox) is a traditional Chinese therapy using a burning moxa stick made from dried mugwort. The combustion of the mugwort permits transmission of heat to regions of the body with various pathologic changes [16]. Previous studies have shown that Mox has beneficial effects on arthritis, knee osteoarthritis, and pain [17,18]. These pleiotropic effects include improving immune function and inhibiting oxidative stress and apoptosis. Evidence suggests that Mox has a beneficial effect on POF [19], but the underlying mechanism remains unclear. The present study was designed to explore the effect of moxibustion on POF and its underlying mechanism.

Animals. Twelve-week-old female Sprague Dawley (SD) rats were purchased from the Shanghai SLAC Laboratory Animal Co. Ltd. All animals were housed under specific pathogen-free (SPF) conditions (24 ± 2 °C room temperature, 65 ± 5% humidity, and 12/12 h light-dark cycles) with drinking water and food available ad libitum. The animal welfare and experimental procedures adhered strictly to the National Institutes of Health (NIH) Guide for the Care and Use of Laboratory Animals.

Experimental Procedure. After accommodation for one week, rats were randomly allocated into three groups (n = 15 for each group): (1) control group, (2) model group (cyclophosphamide), and (3) moxibustion (Mox) group. The rats of the model and Mox groups were intraperitoneally injected with 50 mg/kg of CP on the first day and then 8 mg/kg for 15 consecutive days, while the control group was injected with 0.9% saline instead of CP. Moxibustion with an ignited moxa stick was administered to rats at the Guan-yuan (CV4, located on the midline 3 cm below the umbilicus) and Sanyinjiao (SP6, located on the inside of the leg just above the ankle) acupoints (as shown in Figure 1). Each acupoint was given moxibustion for 10 min twice per day for 3 weeks. All rats were sacrificed under anesthesia, and blood and ovaries were collected for further study.

Histological Analysis. Ovarian samples were excised immediately, fixed in 4% paraformaldehyde (PFA), embedded in paraffin wax, and cut into 4 μm sections. The paraffin sections were dewaxed in xylene, dehydrated in ethanol, and stained with hematoxylin-eosin (HE), and then observed under light microscopy at 100x magnification.
Enzyme-Linked Immunosorbent Assay (ELISA). The blood was centrifuged at 3000 rpm for 10 min at 4 °C to obtain serum. The serum levels of hormones (FSH, E2, and LH) and inflammatory cytokines (IL-1β and IL-18) were determined by ELISA kits according to the manufacturer's instructions (Elabscience, China).

Immunohistochemical Staining. The TLR4 and p-NF-κB proteins in the ovary were evaluated by immunohistochemical staining. Briefly, the ovary was fixed with 4% PFA, embedded in paraffin, and cut into 4 μm sections. The paraffin sections were dewaxed in xylene, dehydrated in ethanol, and microwaved in sodium citrate buffer for 20 min to retrieve the antigen. The samples were incubated with 3% hydrogen peroxide for 30 min to block endogenous peroxidase and blocked with 5% goat serum for 30 min. All samples were treated with the primary antibodies TLR4 (1:50) and p-NF-κB (1:200) at 4 °C overnight. The next day, each sample was incubated with goat anti-rabbit/mouse IgG for 30 min and then with horseradish peroxidase-labeled streptavidin for 30 min. Then, the samples were stained with 3,3′-diaminobenzidine (DAB) and counterstained with hematoxylin. The samples were observed under light microscopy at 100x magnification.

Statistical Analysis. All data were presented as mean ± SD (standard deviation). Normality was checked for all data before comparisons using one-way ANOVA followed by Tukey's multiple comparison test. A p value less than 0.05 was considered statistically significant.

Mox Could Alleviate CP-Induced Ovarian Damage and Abnormal Hormone Secretion. Hematoxylin-eosin staining was used to analyze the histological changes of CP-induced ovarian damage. As shown in Figure 2(a), the texture of ovarian tissues, including primordial follicles, antral follicles, and cumulus oophorus, was intact in the control group. Compared with the control group, the model group showed increased ovarian atrophy and empty stromal spaces, and ovarian granulosa cell damage was found. However, compared with the model group, the above pathological damage was alleviated by Mox treatment. As is well known, POF induces abnormal hormone secretion. Serum hormone levels were therefore detected with ELISA kits. As shown in Figure 2(b), the concentrations of FSH and LH were increased, and E2 was decreased, in the serum of the model group. As expected, Mox decreased the concentrations of FSH and LH and increased E2 in serum. The above results indicate that Mox could alleviate CP-induced POF. However, the mechanisms of Mox remain poorly understood.

Mox Could Alleviate CP-Induced Ovarian Pyroptosis. In order to study whether pyroptosis occurred in CP-induced POF rats, we further detected the pyroptosis markers IL-1β and IL-18 using ELISA kits. As expected, IL-1β and IL-18 were increased in POF rats (Figure 3(a)), and Mox decreased IL-1β and IL-18 in POF rats. Gasdermin D (GSDMD), a pore-forming protein that promotes the secretion of IL-1β and IL-18, can induce pyroptosis. The protein levels of IL-1β, IL-18, and cleaved GSDMD were significantly increased in POF rats, which further indicated that pyroptosis occurred in the POF rats. Meanwhile, Mox decreased the protein levels of IL-1β, IL-18, and cleaved GSDMD in POF rats (Figure 3(b)). These preliminary results showed that pyroptosis occurred in the POF rats and that Mox could inhibit pyroptosis.
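Picking up the Statistical Analysis subsection above, here is a minimal sketch of a three-group comparison by one-way ANOVA followed by Tukey's test; the group values are synthetic placeholders, not the measured hormone data.

```python
# One-way ANOVA across control/model/Mox groups, then Tukey's HSD,
# mirroring the Statistical Analysis subsection. Synthetic data only.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
control = rng.normal(10.0, 1.0, 15)   # e.g., a serum hormone, arbitrary units
model   = rng.normal(16.0, 1.5, 15)
mox     = rng.normal(12.0, 1.2, 15)

F, p = f_oneway(control, model, mox)
print(f"ANOVA: F = {F:.1f}, p = {p:.2e}")

values = np.concatenate([control, model, mox])
groups = ["control"] * 15 + ["model"] * 15 + ["Mox"] * 15
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # pairwise group differences
```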
Mox Inhibited Pyroptosis through Inhibiting the TXNIP/NLRP3/Caspase-1 Signaling Pathway. As is widely known, IL-1β and IL-18 maturation and GSDMD cleavage are driven by caspase-1, which is activated by NLRP3. In order to investigate the mechanisms of the antipyroptotic action of Mox, we detected NLRP3-related proteins. As shown in Figure 4, NLRP3, ASC, and cleaved caspase-1 were remarkably increased in CP-induced POF rats. Mox significantly decreased NLRP3, ASC, and cleaved caspase-1 in POF rats. NLRP3 is typically activated by TXNIP. Further analysis showed that Mox significantly inhibited the CP-induced increase of TXNIP. These results indicated that TXNIP/NLRP3/caspase-1-mediated pyroptosis plays a key role in POF and that Mox suppressed pyroptosis through inhibiting the TXNIP/NLRP3/caspase-1 signaling pathway.

Mox Decreased NLRP3, Pro-IL-1β, and Pro-IL-18 through Inhibiting the TLR4/MyD88/NF-κB Signaling Pathway. The NF-κB-related signaling pathway mediates the upregulation of the pro-IL-1β, pro-IL-18, and NLRP3 genes, constituting the first (priming) step of NLRP3 activation; the NLRP3/ASC/caspase-1 complex cannot assemble without NF-κB. In order to investigate whether the NF-κB-related signaling pathway plays an important role in POF, we examined the principal components of the TLR4/MyD88/NF-κB signaling pathway. As expected, the TLR4, MyD88, and p-NF-κB proteins were significantly augmented in CP-induced POF rats (Figure 5(a)). Mox treatment significantly reversed these elevations (Figure 5(a)). These results were further verified by immunohistochemistry (Figure 5(b)): Mox treatment significantly inhibited TLR4 and p-NF-κB in the ovary.

Discussion

Our study showed that Mox could alleviate CP-induced POF through inhibiting pyroptosis and indicated that NF-κB activation and TXNIP/NLRP3 inflammasome triggering of caspase-1-dependent pyroptosis play an important role in ovarian injury in rats; the main mechanism is illustrated in Figure 6. The present study provides new insights into the mechanism of moxibustion in treating POF.

CP-induced ovarian damage has been characterized as follows: it decreased the numbers of various follicles and ovarian mass and increased ovarian atrophy and empty stromal spaces. Moreover, the secretion of endocrine hormones is also affected by CP [20]. Mox could significantly reverse the pathological changes of POF induced by CP. In addition, Mox can effectively improve hormone metabolism, as demonstrated by the lowering of FSH and LH levels and the increase of the E2 concentration in serum. These results demonstrated that Mox could suppress the deleterious effects of CP and reverse POF.

As a chronic inflammatory disease, POF is affected by numerous cytokines, which initiate and maintain the pathological response in the ovary [21]. Pyroptosis is a proinflammatory form of regulated cell death and plays an important role in tissue homeostasis and immunity. Caspase-1 activation, which subsequently catalyzes the cleavage of the precursor cytokines pro-IL-1β and pro-IL-18 into their mature and active forms, has been considered to be closely related to pyroptosis [22]. Gasdermin D (GSDMD) is a central effector and executor protein of pyroptosis, acting by inducing the formation of large pores in the plasma membrane [23]. We found that the expression of GSDMD and the serum levels of IL-1β and IL-18 increased in CP-induced rats, while Mox treatment reversed these changes and significantly attenuated ovarian injury. Nucleotide-binding oligomerization domain receptors (NLRs) are intracellular proteins that participate in mammalian immunity.
NLRs can be grouped according to their physiological functions in the immune system. The NLRP3 inflammasome complex consists of three subunits: NLRP3, apoptosis-associated speck-like protein (ASC), and caspase-1. NLRP3 binds to ASC and then recruits procaspase-1 to form an inflammasome complex. The inflammasome complex cleaves procaspase-1 into caspase-1, which subsequently cleaves pro-IL-1β and pro-IL-18 into mature IL-1β and IL-18. Caspase-1 also cleaves GSDMD and thus triggers pyroptosis [24]. Based on our results, Mox could inhibit NLRP3/ASC/caspase-1-mediated pyroptosis. TXNIP, a key negative regulator of the cellular antioxidant thioredoxin, is necessary for the activation of the NLRP3 inflammasome via direct interaction with NLRP3. According to our results, Mox treatment can significantly inhibit the expression of TXNIP in rats. This led to the speculation that downregulation of TXNIP also plays a role in the Mox-mediated repression of ovarian injury. Therefore, we hold the opinion that Mox inhibits TXNIP/NLRP3/caspase-1 signaling-mediated pyroptosis and alleviates ovarian failure in POF.

The NF-κB-related signaling pathway mediates the upregulation of the pro-IL-1β, pro-IL-18, and NLRP3 genes and constitutes the first step of NLRP3 activation; the NLRP3/ASC/caspase-1 complex cannot assemble without NF-κB. The NF-κB family consists of five Rel homology domain-containing proteins, which are normally retained in the cytoplasm through binding to inhibitors of NF-κB (IκB) [25]. Receptor-mediated signaling activates the IκB kinase (IKK) complex, which subsequently phosphorylates the inhibitory cytoplasmic IκBα, thus allowing NF-κB to translocate to the nucleus and initiate gene transcription. NF-κB-dependent signaling is necessary for the activation of NLRP3 [26]. The present study revealed that Mox could inhibit the CP-induced activation of NF-κB and NLRP3 in the ovary. The results indicated that inhibition of the NF-κB signaling pathway may underlie the antipyroptotic effect of Mox.

Toll-like receptors (TLRs) are conserved pattern-recognition receptors (PRRs) that are activated by a variety of pathogen-associated molecular patterns (PAMPs) [27]. Toll-like receptor 4 (TLR4) is a member of the TLR family that is activated by lipopolysaccharide. The TLR4 signaling pathway plays an important role in the innate immune response and is responsible for inflammatory responses. The activation of TLR4 at the cell surface is followed by the activation of myeloid differentiation primary response gene 88 (MyD88), a downstream adaptor molecule of TLR4. Activated MyD88 engages the phosphorylation of the IκB complex, followed by the activation of NF-κB [28]. The present study showed that Mox decreased NLRP3, pro-IL-1β, and pro-IL-18 through inhibiting the TLR4/MyD88/NF-κB signaling pathway.

Conclusion

In conclusion, the present study showed that moxibustion had a therapeutic effect on CP-induced premature ovarian failure in rats. The underlying mechanism might be partially attributed to the inhibition of ovarian pyroptosis. On the one hand, Mox directly inhibited TXNIP/NLRP3/caspase-1 signaling-induced pyroptosis, and on the other hand, it indirectly decreased NLRP3, pro-IL-1β, and pro-IL-18 through inhibiting TLR4/MyD88/NF-κB signaling. Our results show that inhibiting pyroptosis or giving Mox might be a new treatment for POF.

Data Availability

The datasets generated during the current study are available from the corresponding author upon reasonable request.
Ethical Approval

All the animal experiments were approved by the Animal Experimental Ethics Committee of Nanjing University of Chinese Medicine (Nanjing, China).

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors' Contributions

Guang-xia Ni and Cai-rong Zhang designed the study. Cai-rong Zhang analyzed the data and wrote the manuscript. Wan-qi Lin and Chang-cheng Cheng performed the animal experiments and Mox. Cai-rong Zhang, Wei-na Zhu, Wei Tao, and Han Deng performed HE staining, ELISA, western blot, and immunohistochemistry. All authors read, revised, and approved the manuscript.
2021-02-20T05:04:19.225Z
2021-02-04T00:00:00.000
{ "year": 2021, "sha1": "e674e4f277feeadc4ae8d929f655138e6bc376ba", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/ecam/2021/8874757.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e674e4f277feeadc4ae8d929f655138e6bc376ba", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
251086288
pes2o/s2orc
v3-fos-license
Structure-Function Relationships in Temperature Effects on Bacterial Luciferases: Nothing Is Perfect

The evaluation of temperature effects on the structure and function of enzymes is necessary to understand the mechanisms underlying their adaptation to a constantly changing environment. In the current study, we investigated the influence of temperature variation on the activity, structural dynamics, thermal inactivation and denaturation of Photobacterium leiognathi and Vibrio harveyi luciferases belonging to different subfamilies, as well as the role of sucrose in maintaining the enzymes' functioning and stability. We used the stopped-flow technique, differential scanning calorimetry and molecular dynamics to study the activity, inactivation rate, denaturation and structural features of the enzymes under various temperatures. It was found that P. leiognathi luciferase resembles the properties of cold-adapted enzymes, with high activity in a narrow temperature range and slightly lower thermal stability than V. harveyi luciferase, which is less active but more thermostable. Differences in activity at the studied temperatures can be associated with the peculiarities of the mobile loop conformational changes. The presence of sucrose does not provide an advantage in activity but increases the stability of the enzymes. Differential scanning calorimetry experiments showed that the luciferases probably follow different denaturation schemes.

Introduction

Living systems have highly sensitive regulatory mechanisms to perceive a wide range of environmental conditions and quickly adapt to even the slightest changes. In particular, microorganisms can tune the salt concentration, density, viscosity and other characteristics of the cell matrix in response to acute stress or during self-regulating processes that enable them to maintain homeostasis [1-4]. One of the most important parameters that affects microbial metabolism and proliferation is temperature. It is generally accepted to distinguish microorganisms into psychrophiles, mesophiles, and thermophiles according to the temperature range they can inhabit. The accumulation of water-soluble organic compounds, either by de novo synthesis or by consumption from the surrounding environment, is a common mechanism of osmoregulation, UV-protection, and thermal adaptation among both extremophiles and mesophiles (including Bacteria, some Eukaryota and a few methanogenic Archaea species) [5]. These compounds, so-called extremolytes, are of different chemical structure, comprising saccharides, polyols, heterosides, amino acids, and their derivatives [6]. The adjustments occurring in the cell are aimed at maintaining various parameters including viscosity, which is crucial for diffusion-controlled reactions [3], as well as the structure and functions of cytosol components, mainly proteins [7,8]. Furthermore, proteins themselves can be adapted to temperature shifts. In the present work, we focused on two flexible structures: a mobile loop forming the active center and a hairpin loop located near the α/β-interface. We also found that the addition of sucrose significantly reduces the mobile loop fluctuations in the P. leiognathi structure at high temperature. On the contrary, sucrose was less effective in stabilizing both flexible motifs of V. harveyi luciferase. The analysis of the thermal inactivation rate and denaturation thermodynamics of the luciferases revealed that the V. harveyi enzyme is more thermostable. Sucrose was found to have a stabilizing effect against thermal inactivation and the denaturation of both luciferases. Thus, we carried out a detailed comparative study of the temperature effects on two types of bacterial luciferase that has never been performed before.
Activity of V. harveyi and P. leiognathi Luciferases at Different Temperatures

The kinetics of light emission in reactions catalyzed by V. harveyi or P. leiognathi luciferase was studied in the buffer and in 30 (w/w)% sucrose at temperatures in the range of 5-45 °C. After mixing with substrates, bacterial luciferase can make only a single turnover due to the quick autoxidation of unbound FMNH−. Hence, the reaction kinetics is flash-like, with a rapid increase succeeded by a relatively slow decay of light intensity (Figure 1). The following empirical kinetic parameters were estimated: the peak intensity (Imax), the total quantum yield (Q*), and the decay constant (kdecay). Their dependences on temperature are shown in Figures 2 and S1 (in the Supplementary Material).

Figure 2. Dependence of kinetic parameters of luciferases from V. harveyi (a,c) and P. leiognathi (b,d) on temperature in the buffer (empty squares) and sucrose solution (filled squares): the peak intensity, Imax, (a,b) and the decay constant, kdecay, (c,d). The dashed lines are a guide for the eye. Concentrations in the reaction mixture were: luciferase, 0.5 µM; flavin, 15 µM; decanal, 50 µM.

Comparing the luciferases in the buffer, one can see that according to Imax, V. harveyi luciferase demonstrates its highest level of activity over a wider temperature range than the P. leiognathi enzyme (20-35 °C vs. 20-25 °C, Figure 2a,b). However, the amount of emitted light in a single turnover, Q*, in the range 10-25 °C is about twofold less for the V. harveyi enzyme than for the P. leiognathi one, but becomes almost equal in value for both luciferases at higher temperatures (Figure S1a,b). The decay rate, kdecay, increases at higher temperatures (≥35 °C) more sharply for P. leiognathi luciferase (Figure 2d). Thus, in spite of high homology, P. leiognathi luciferase exhibits the properties of a more thermolabile enzyme than the V. harveyi one.
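A minimal sketch of how Imax, Q*, and kdecay can be extracted from a single flash-like trace; the synthetic trace below is a placeholder, and the single-exponential tail fit is an assumption consistent with the decay-constant analysis in the text.

```python
# Extract Imax (peak), Q* (integral), and kdecay (tail fit) from a
# flash-like luminescence trace. The trace here is synthetic.
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0.0, 60.0, 601)                            # s
I = 100.0 * (1.0 - np.exp(-1.5 * t)) * np.exp(-0.08 * t)   # rise * decay

I_max = I.max()
Q_star = np.trapz(I, t)                                    # total light emitted

i_peak = I.argmax()
decay = lambda tt, A, k: A * np.exp(-k * tt)               # single-exponential tail
(A, k_decay), _ = curve_fit(decay, t[i_peak:] - t[i_peak], I[i_peak:], p0=(I_max, 0.1))
print(f"Imax = {I_max:.1f}, Q* = {Q_star:.0f}, kdecay = {k_decay:.3f} s^-1")
```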
To mimic a viscous environment and the extremolyte effects on bioluminescent reactions from the two bacterial species, we studied the reaction kinetics in sucrose solution (Figure 1). A concentration of 30% was used, which corresponded to 0.99 M sucrose and gave more than a threefold higher viscosity as compared with the buffer. From the kinetic curves in Figure 1, one can see that the reaction noticeably slows down in the more viscous medium for both luciferases, which is due to the diffusion control of some reaction steps [24]. The change of kinetic parameters in sucrose solution over the entire temperature range is shown in Figures 2 and S1. We observed that in the presence of sucrose the peak intensity decreases at low temperatures (for V. harveyi luciferase, in the range 5-20 °C; for the P. leiognathi one, in the range 5-15 °C) (Figure 2a,b), while the effect on the decay constant appears only at high temperatures (Figure 2c,d).
All of this leads to a decrease of the reaction quantum yield in sucrose solution at lower temperatures for both luciferases ( Figure S1). Interestingly, at temperatures > 30 °C in the presence of sucrose, kdecay is higher for V. harveyi luciferase and lower for the P. leiognathi one, as compared with the buffer. The observed temperature effects on the activity of the enzymes indicate that (i) P. leiognathi luciferase is more sensitive to temperature change than the V. harveyi enzyme and (ii) viscous sucrose solution gives no advantages for bacterial luciferases' functioning either under "hot" (35-45 °C) or in "cold" (5-15 °C) conditions. These facts will be discussed in Section 3. Thermal Inactivation of V. harveyi and P. leiognathi Luciferases We measured the remaining activity of V. harveyi and P. leiognathi luciferases at 20 °C after incubating the enzymes for different time intervals at various temperatures in the absence and presence of sucrose. The remaining activity at a certain time was expressed as percent of activity in the initial moment (i.e., without incubation), which is described in detail in Section 4.2. The time course of thermal inactivation of the enzymes at different temperatures is shown in Figure 3. It is worth noting that for both enzymes we observed an induction phase (from 0.5 to 8 min, depending on temperature) ( Figure 3). Incubation during this time resulted in the remaining activity of about 100%, indicating that the most thermally inactivated molecules recovered after the immediate cooling of incubated enzymes. The longer heat It is worth noting that for both enzymes we observed an induction phase (from 0.5 to 8 min, depending on temperature) ( Figure 3). Incubation during this time resulted in the remaining activity of about 100%, indicating that the most thermally inactivated molecules recovered after the immediate cooling of incubated enzymes. The longer heat treatment resulted in a consistent decrease of the remaining activity, which was well described by a single exponential function. The corresponding rate constants of thermal inactivation of the bacterial luciferases are shown in Table 1. In a few cases an additional slow phase of inactivation was observed for both enzymes on further heating, namely at 50 • C (Figure 3a,c). Such an effect is usually explained by the aggregation of the protein molecules that prevents their refolding [25]. These data points were not included in the estimation of the inactivation rate constants as well as the points of the induction phase. Comparing thermal inactivation of two luciferases, we found that the V. harveyi enzyme is less sensitive to heating, since its remaining activity was always greater than that of the P. leiognathi enzyme under the same conditions. The presence of 30% sucrose appeared to reduce the rate of thermal inactivation of both luciferases (Table 1 and Figure 3c,d). The effect was the most pronounced for P. leiognathi luciferase at 50 • C. In this case, enzyme inactivation slowed down more than four times. Using the rate constants, we performed the Arrhenius analysis of thermal inactivation of the two luciferases in the buffer and sucrose solution ( Figure 4). REVIEW 7 Using the rate constants, we performed the Arrhenius analysis of thermal inac tion of the two luciferases in the buffer and sucrose solution ( Figure 4). 
In spite of the protective effect of sucrose, evident from the decrease of the inactivation rate constants (Table 1), the activation energy of thermal inactivation, E_a, does not change in the viscous medium. In particular, E_a in the buffer and in sucrose solution was found to be 237 ± 30 and 224 ± 7 kJ/mol for V. harveyi luciferase and 255 ± 27 and 243 ± 47 kJ/mol for P. leiognathi luciferase, respectively. Such estimations likewise do not allow the degree of thermal stability of the two luciferases to be distinguished.

Thermal Stability of Luciferases as Revealed by Differential Scanning Calorimetry
To compare the thermal stability of the luciferases, their denaturation was studied by differential scanning calorimetry. Figure 5 shows the temperature dependencies of the excess heat capacity functions (C_p^exp) obtained for the bacterial luciferases at pH 6.9 in the buffer and in sucrose solution. [Figure 5: Temperature dependence of excess heat capacity functions for bacterial luciferases from V. harveyi (green lines) and P. leiognathi (blue lines) in the buffer (solid lines) and in 30% sucrose (dashed lines) at pH 6.9.] It can be seen that the curves demonstrate peaks of cooperative heat absorption corresponding to protein denaturation. The thermodynamic parameters of the thermal denaturation of the luciferases are summarized in Table 2. All samples are characterized by a lack of calorimetric reversibility and an asymmetry of the heat absorption peaks; the degree of peak asymmetry (ΔQ−/ΔQ+) is shown in Table 2. Generally, the asymmetry of the heat absorption peaks can be associated either with the non-equilibrium nature of the denaturation process or with the accumulation during denaturation of one or more intermediate states of different stability. Nevertheless, it can be concluded that V. harveyi luciferase has slightly higher temperature stability than the P. leiognathi one, both in buffer and in 30% sucrose. However, the enthalpy of thermal denaturation is greater for P. leiognathi luciferase than for V. harveyi. This indicates that the number of interactions (hydrophobic, van der Waals) that are disrupted with increasing temperature is greater in P. leiognathi luciferase than in the V. harveyi one. The thermal denaturation of the two luciferases also differs in its degree of cooperativity: P. leiognathi luciferase shows higher cooperativity (ΔT_1/2 is smaller), i.e., the destruction of the native structure occurs in a narrower temperature range.

The addition of 30% sucrose to the buffer stabilized the luciferases: their T_m shifted by 4.7 K (P. leiognathi) and 5.0 K (V. harveyi). However, the sucrose effects on the thermal denaturation of the two bacterial luciferases are different. For the P. leiognathi enzyme, the addition of sucrose led to a broadening of the peak and an increase of its asymmetry. Such an effect of organic substances on the heat absorption curve is characteristic; it has been observed for different proteins and explained by a slowdown of the denaturation kinetics [26]. In the case of V.
harveyi luciferase, the addition of 30% sucrose did not change either ΔT_1/2 or ΔQ−/ΔQ+. Thus, our DSC experiments indicate that the protein denaturation schemes could be different for the two studied luciferases.

Temperature-Dependent Molecular Dynamics of V. harveyi and P. leiognathi Luciferases
To evaluate the role of the structural dynamics of the proteins in the experimentally obtained patterns, we conducted molecular dynamics (MD) computations of the two bacterial luciferases surrounded by water molecules at low (5, 15 °C), medium (27 °C) and high (45, 60 °C) temperatures. Firstly, based on the obtained MD trajectories, we analyzed three parameters: the root-mean-square deviation of the backbone atoms (RMSD), the radius of gyration (R_g), and the solvent-accessible surface area (SASA). This allowed us to assess protein stability in the course of the molecular dynamics and the overall structural change. The results for the last 10 ns of the simulations are summarized in Table S1. No essential change of the RMSD and R_g values of either enzyme was detected with increasing temperature. The RMSD values of V. harveyi luciferase were found to be lower than those of P. leiognathi luciferase (1.6-2.3 Å vs. 1.9-2.6 Å, respectively), and at 60 °C the standard deviation of the RMSD of V. harveyi luciferase appeared to be smaller than that of the P. leiognathi enzyme. This could reflect a smaller deviation of the V. harveyi luciferase structure from the initial one at higher temperatures. The SASA values were in the ranges of 284-290 and 293-301 × 10^2 Å^2 for the V. harveyi and P. leiognathi luciferases, respectively, indicating that the latter enzyme could be less compact. Additionally, the molecular dynamics of the two luciferases surrounded by a mixture of water and sucrose molecules (corresponding to a concentration of 30%) was analyzed at 27 °C and 60 °C. It is worth noting that at high temperature the presence of sucrose reduced the deviation of SASA from the average value for V. harveyi luciferase (Table S1), while for the P. leiognathi enzyme there was no effect.

Secondly, we calculated the average root-mean-square fluctuations (RMSF) of the C_α atoms of the luciferases in order to examine the structural flexibility of protein segments at different temperatures. For each temperature the change of flexibility was estimated as the difference ΔRMSF = RMSF_T − RMSF_27, where RMSF_T and RMSF_27 are the parameters at temperature T and 27 °C, respectively. Figures S2 and S3 show ΔRMSF for the α-subunit, which bears the enzyme active center, and the β-subunit, which stabilizes the active enzyme, of the two luciferases [27].
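The per-residue fluctuation analysis just described can be reproduced with a few lines of generic array code. The sketch below is a minimal illustration assuming the Cα coordinates have already been extracted from the aligned trajectories into an array; the array names and shapes are hypothetical, since the paper does not specify the analysis scripts:

```python
import numpy as np

def rmsf(coords):
    """Root-mean-square fluctuation per atom.

    coords: array of shape (n_frames, n_atoms, 3) holding Calpha
    positions from a trajectory already least-squares fitted onto a
    reference structure (fitting is essential, otherwise rigid-body
    motion inflates the fluctuations).
    """
    mean_pos = coords.mean(axis=0)                  # (n_atoms, 3)
    disp2 = ((coords - mean_pos) ** 2).sum(axis=2)  # (n_frames, n_atoms)
    return np.sqrt(disp2.mean(axis=0))              # (n_atoms,)

# Hypothetical aligned trajectories at temperature T and at 27 C:
# delta_rmsf > 0 -> segment more flexible at T than at 27 C.
# delta_rmsf = rmsf(coords_T) - rmsf(coords_27)
```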
Temperature effects on the flexibility of the functional mobile loop in the α-subunits were found for both enzymes (Figure 6a,b). An increased fluctuation of the region near αHis150 in V. harveyi luciferase was also observed (Figure S2). This amino acid residue is located at the apex of the hairpin loop structure (142-160 a. r.), which runs along the periphery of the α/β-interface of the luciferase [28]. Significant fluctuations in this region at 45 °C could expose part of the α/β-interface to solvent molecules, which could affect enzyme activity.

[Figure 6: ΔRMSF profiles of the luciferase mobile loops; a positive ΔRMSF corresponds to a more flexible segment as compared with the structure in water at 27 °C, while a negative ΔRMSF corresponds to a more rigid segment. Correlation coefficients between the ΔRMSF profiles of the V. harveyi and P. leiognathi mobile loops are presented in Table S2.]

The mobile loop is a particularly important functional region adjacent to the active site of the luciferase; it undergoes conformational changes upon enzyme-substrate complex formation [29]. We observed that at 5 °C two segments of the mobile loop change their flexibility in different manners as compared to 27 °C. The loop segment at about 260-275 a. r. became less mobile in V. harveyi luciferase (Figure 6a), while its rigidity in the P. leiognathi enzyme did not change (Figure 6b). The loop segment at about 275-290 a. r., on the contrary, had a higher mobility in V. harveyi luciferase (Figure 6a) and a lower mobility in the P. leiognathi enzyme at low temperatures (Figure 6b). Hence, we revealed that the mobile loop flexibility differs between the two luciferases at 5 °C, and there is a moderate negative correlation between the ΔRMSF profiles of the P. leiognathi and V. harveyi mobile loop segments, with a correlation coefficient of −0.52 (Table S2).

Inspecting the possible influence of sucrose on the mobile loop of the luciferases, we compared the ΔRMSF of the proteins surrounded by water and by a sucrose-water mixture at 27 and 60 °C (Figure 6c,d, Figures S4 and S5). The obtained data indicate that the flexibility of the mobile loop was reduced in the presence of sucrose. The effect was more pronounced for P. leiognathi luciferase: fluctuations of its mobile loop at 60 °C in the presence of sucrose became close to the level observed in water at 27 °C. The presence of sucrose molecules in the vicinity of the amino acids forming the mobile loops of the luciferases during MD modeling was estimated using the minimum-distance distribution function, MDDF [30].
Based on the MDDF we constructed density maps of sucrose appearance at a distance of 1.5-3.5 Å from the amino acids of the mobile loops (Figures 7a,b and S6). It turned out that the sucrose atoms were located most often within the first solvation shell (at 1.5-2.0 Å) of the negatively charged side chains of aspartates. Moreover, the density of sucrose molecules near the mobile loop residues of P. leiognathi luciferase increases at high temperatures (Figures 7b and S6).

Discussion
We experimentally compared the temperature effects on the two bacterial luciferases in the following directions: (i) the functioning of the enzymes at various temperatures, (ii) the rates of the enzymes' thermal inactivation, and (iii) the thermal denaturation of the luciferases. We aimed to reveal signs of protein adaptation to temperature variation in either of the two enzymes and to connect them with the structural properties of the proteins. However, as the thermal adaptation of luminous bacteria could have occurred through other mechanisms, such as the accumulation of extremolytes, we also tested the influence of sucrose on the temperature dependencies, both functional and structural, of the bacterial luciferases.
The study of the reaction kinetics at 5-45 °C revealed the different sensitivity of the two luciferases to the temperature of the solution (Figure 2 and Figure S1). In particular, P. leiognathi luciferase responds with a pronounced activity change to each temperature shift of 5 °C, while V. harveyi luciferase provides about the same peak intensity within the wide range of 20-35 °C. The observed thermolability of P. leiognathi luciferase and its higher activity under optimum conditions (at 20-25 °C) as compared with the V. harveyi enzyme are consistent with the concept of the structural and functional features of cold-adapted proteins [31]. Indeed, the activity of P. leiognathi luciferase at 10-15 °C, although not maximal, is significantly higher than that of V. harveyi. Our molecular dynamics simulations shed light on the structural reasons for this by showing that there are two flexible motifs within the catalytic α-subunit that exhibit distinct mobility in the two luciferases at low temperatures (Figure 6a,b). Apparently, the observed change in structural movements does not provide the mobility of the active site residues of the V. harveyi enzyme necessary for effective catalysis under cold conditions. Generally, the structure of V. harveyi luciferase was more stable at 60 °C, as indicated by the smaller standard deviation of the RMSD (Table S1). Such behavior is characteristic of proteins capable of functioning at higher temperatures rather than at lower ones. It is worth noting that the majority of known psychrophilic strains of luminous bacteria belong to the Photobacterium and Aliivibrio genera, which have "fast" luciferases [15]. Thus, the data on the activity of the studied luciferases at different temperatures are consistent with their phylogeny and molecular dynamics.

Unexpectedly, the empirical kinetic parameter, the decay constant (k_decay), turned out to be more sensitive to temperature change for the reaction of V. harveyi luciferase than for that of P. leiognathi (Figure 2c,d). In the range of 10-30 °C, k_decay increases 4.4-fold for the former enzyme and 3.2-fold for the latter. The decay constant is known to depend on the rates of three elementary steps of the reaction: the dark decay of the peroxyflavin intermediate (k_dd), aldehyde binding, and the formation of an electronically excited intermediate [32]. The contribution of k_dd to k_decay differs substantially between the two luciferases. In the reaction with decanal at 20 °C, the k_decay values are 0.32 and 0.21 s−1 for the V. harveyi and P. leiognathi enzymes, whereas the corresponding k_dd values are 0.06 and 0.15 s−1 [33,34]. Thus, the distinct temperature dependences of k_decay for the two luciferases could have two causes: (i) a stronger destabilizing effect on the peroxyflavin intermediate in the V. harveyi reaction and/or (ii) a temperature influence on rate constants other than k_dd, which could be more pronounced in the V. harveyi reaction due to their high contribution to k_decay. Earlier, we demonstrated that for the reaction of P. leiognathi luciferase in the presence of sucrose the rate of the dark decay of the peroxyflavin intermediate becomes lower, while the rate of the formation of an electronically excited intermediate increases as compared with the buffer [24]. This could underlie the contrasting changes of k_decay observed for the two luciferases at 40-45 °C in sucrose solution (Figure 2c,d).
The possible mechanism of sucrose's influence on the stability of the reaction intermediates at high temperatures was revealed by the molecular dynamics simulations. We observed that at 60 °C sucrose significantly reduces the fluctuations of the functionally important mobile loop near the active site of P. leiognathi luciferase (Figure 6d), most likely through direct temporary contacts with aspartate side chains (Figure 7b). For V. harveyi luciferase, stabilization of the mobile loop by sucrose is less pronounced (Figure 6c), which could cause the increase of k_decay at higher temperatures. Moreover, it was found that sucrose promotes higher fluctuations of αHis150 within the hairpin loop of V. harveyi luciferase, which could influence k_decay as well (Figure S4). Based on the observed temperature effects on the activity of the enzymes in the presence of sucrose, we can conclude that this cosolvent provides no advantages for the functioning of the bacterial luciferases either at high (35-45 °C) or at low (5-15 °C) temperatures. Furthermore, we observed that at low temperatures sucrose suppresses luciferase activity, probably due to the high viscosity, which leads to increased diffusion control of the reaction. However, along with activity, the structural stability of proteins also plays an important role in the maintenance of their function under extreme conditions, and saccharides are known to be among the best protein protectors against denaturation [35]. Hence, we further examined the kinetics of the thermal inactivation of V. harveyi and P. leiognathi luciferases in the buffer and in the presence of sucrose. In addition, the heat-induced denaturation of the enzymes was investigated using DSC. We observed that under the same conditions the thermal inactivation rate is always smaller for V. harveyi luciferase than for P. leiognathi (Table 1). Sucrose slows down the thermal inactivation of the enzymes by a factor of two to four, but without a significant reduction in the activation energy of the process. This sucrose effect could also result from a decrease in the mobile loop fluctuations at high temperatures, since the enhanced mobility of protein segments (especially near the active center) could trigger the transition of the enzyme to an inactive state under heating. The thermal denaturation of the luciferases studied with DSC indicated that the temperature of the maximum of the heat absorption peak (T_m) is slightly higher for the V. harveyi protein than for the P. leiognathi one, and that it increases in sucrose solution. Thus, both approaches showed that V. harveyi luciferase has higher temperature stability than P. leiognathi and that sucrose is able to protect the proteins against thermal inactivation and denaturation. In addition, we found significant differences in ΔH_cal between the two luciferases. Many interactions, such as hydration, the hydrophobic effect, van der Waals interactions, and hydrogen bonding (between polar surfaces in the protein and in water), contribute to the enthalpy of denaturation [36]. Since the spatial structures of P. leiognathi and V. harveyi luciferases are very similar, the difference in ΔH_cal can be explained by the presence of a slightly larger number of unstructured parts in V. harveyi luciferase compared to P. leiognathi luciferase. This is confirmed by our earlier CD spectroscopy data: V. harveyi luciferase has slightly less secondary structure than P. leiognathi luciferase [37]. According to our MD data, we also see that the hairpin loop within the V.
harveyi structure was more flexible than in the P. leiognathi structure at high temperatures. It is worth noting that our previous investigation of the unfolding of V. harveyi and P. leiognathi luciferases with urea showed that the latter has a more stable tertiary and secondary structure [37]. In agreement with these results, our current data on the enthalpy of thermal denaturation also indicate that P. leiognathi luciferase has a greater number of intramolecular interactions that are disrupted with increasing temperature (parameter ΔH_cal in Table 2).

The chemical unfolding of bacterial luciferase can be considered much better studied than its thermal denaturation [38]. A number of investigations have established a four-state scheme for the denaturation of V. harveyi luciferase with urea [39,40], in which the unfolding of the C-terminal domain of the α-subunit is followed by the dissociation of the two subunits and their subsequent denaturation. In our previous work [37], we demonstrated that P. leiognathi luciferase follows the same unfolding pathway in urea solutions. However, the results presented in this work suggest that the mechanisms of the thermal denaturation of the two studied enzymes could differ. This is not very surprising, since chemical and thermal denaturation processes do not have to correlate: the denaturant interacts specifically with the protein, while temperature generally affects intramolecular contacts indiscriminately [41].

Lack of knowledge about the thermostability and the activity of bacterial luciferase at different temperatures is one of the issues when it is implemented as a reporter for the real-time monitoring of various processes in microbial or eukaryotic cells [42-44]. Previous studies demonstrated that in order to obtain a reproducible luminescent response and to understand this response, a detailed analysis of the characteristics of the proteins themselves is necessary, as well as of the influence of the environment on their structure and function [43,45]. Therefore, our data have significant implications for the use of the two types of luciferase, with their distinct sensitivity to temperature in terms of both activity and stability, as reporters.

Materials
Lyophilized recombinant P. leiognathi luciferase (99% purity) was purchased from Biolumdiagnostika Ltd. (Krasnoyarsk, Russia). V. harveyi luciferase was expressed in the E. coli strain BL21 (DE3), purified as described in [37], and lyophilized in the Photobiology laboratory of the Institute of Biophysics SB RAS (Krasnoyarsk, Russia). Flavin mononucleotide (FMN, Sigma-Aldrich, Burlington, MA, USA) and decanal (Acros Organics, Fair Lawn, NJ, USA) were used as luciferase substrates. Ethylenediaminetetraacetic acid (EDTA, ROTH) was applied as an electron donor during FMN photoreduction. Sucrose (Gerbu, Heidelberg, Germany) at a concentration of 30 wt% was used as a model extremolyte. Stock solutions of the reactants were prepared in potassium phosphate buffer (0.05 M, pH 6.9), except for decanal, which was dissolved in ethanol (2 × 10−3 M). The concentrations of FMN and luciferase were determined spectrophotometrically with the extinction coefficients ε_445 = 12,400 M−1 cm−1 and ε_280 = 80,000 M−1 cm−1, respectively.
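As a simple worked illustration of the spectrophotometric step, the concentration follows from the Beer-Lambert law, c = A/(ε·l). The sketch below assumes a 1 cm path length and uses illustrative absorbance readings that are not from the paper:

```python
def concentration(absorbance, epsilon, path_cm=1.0):
    """Beer-Lambert law: c = A / (epsilon * l), in mol/L."""
    return absorbance / (epsilon * path_cm)

# Extinction coefficients given in the text (M^-1 cm^-1)
EPS_FMN_445 = 12400
EPS_LUC_280 = 80000

# Hypothetical absorbance values, shown only to make the script runnable:
print(concentration(0.62, EPS_FMN_445))   # FMN at 445 nm, ~5.0e-05 M
print(concentration(0.40, EPS_LUC_280))   # luciferase at 280 nm, ~5.0e-06 M
```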
Activity and Thermal Inactivation of Bacterial Luciferases
The reaction kinetics of the luciferases was measured at 5-45 °C in a single-turnover assay using an SX20 stopped-flow spectrometer (Applied Photophysics, Leatherhead, UK), as described elsewhere [24]. Decanal and photoreduced flavin mononucleotide in the buffer or in the sucrose solution were loaded into one drive syringe of the spectrometer and pre-incubated there for 5 min. Similarly, bacterial luciferase in the buffer or sucrose solution was pre-incubated for 5 min in another drive syringe. The enzyme and substrates were then mixed in the measuring cell, and the kinetics of light emission was recorded for 10-30 s. The cell and drive syringes were thermostated at the required temperature. The concentrations in the reaction mixtures were 0.5 µM luciferase, 15 µM flavin, and 50 µM decanal.

The thermal inactivation rate of the luciferases was estimated by measuring the remaining activity of the enzymes after their incubation for various times at the required temperature in the range 40-55 °C. A solid-state thermostat Gnom (DNA Technology, Moscow, Russia) was used for enzyme incubation. Reaction kinetics were measured at 20 °C in a single-turnover assay with the SX20 stopped-flow spectrometer as described above, but without incubation in the drive syringes. The remaining activity (R) of the luciferase was then calculated as a percentage according to R = (Q*/Q_0*)·100, where Q* and Q_0* are the areas under the kinetic curves obtained with the thermally treated and untreated enzymes, respectively. The rate constant of thermal inactivation, k, was determined by fitting the dependence of R on the incubation time, t, with the function R = A·e^(−kt) using Origin 8.0 software (OriginLab, Northampton, MA, USA). The coefficient of determination, R², was >0.96 for all of the approximations. The activation energy of thermal inactivation (E_a) was calculated according to the Arrhenius equation from the slope of the dependence of ln k on 1/T.
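The two fitting steps described above are straightforward to reproduce. The authors used Origin 8.0; the following is only an equivalent sketch in Python, with hypothetical example data included solely to make the script self-contained:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

R_GAS = 8.314  # J mol^-1 K^-1

def remaining_activity(t, A, k):
    """Single exponential decay, R = A * exp(-k t)."""
    return A * np.exp(-k * t)

# --- Step 1: inactivation rate constant at one temperature ---
# Hypothetical incubation times (min) and remaining activity (%),
# with induction-phase and slow-phase points already excluded.
t = np.array([2, 4, 6, 8, 12, 16], dtype=float)
R = np.array([92, 71, 55, 43, 26, 15], dtype=float)
(A, k), _ = curve_fit(remaining_activity, t, R, p0=(100.0, 0.1))

# --- Step 2: Arrhenius analysis over several temperatures ---
# ln k is linear in 1/T; the slope equals -Ea / R_GAS.
T_K = np.array([313.15, 318.15, 323.15, 328.15])  # 40-55 C range
k_T = np.array([0.02, 0.06, 0.18, 0.50])          # hypothetical min^-1
fit = linregress(1.0 / T_K, np.log(k_T))
Ea_kJ_mol = -fit.slope * R_GAS / 1000.0
print(f"k = {k:.3f} min^-1, Ea = {Ea_kJ_mol:.0f} kJ/mol")
```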
Differential Scanning Calorimetry
Calorimetric measurements were made using a precision scanning microcalorimeter SCAL-1 (Scal Co. Ltd., Pushchino, Russia) with 0.33-mL glass cells at a scanning rate of 1 K/min and under 2.5 atm pressure [46]. The protein concentrations ranged from 1.0 to 4.0 mg/mL in 50 mM potassium-sodium phosphate buffer at pH 6.9. In this concentration range, the thermodynamic parameters remained stable. The experimental calorimetric profiles were corrected for the calorimetric baseline, and the molar partial heat capacity functions were calculated in the standard manner. The excess heat capacity, C_p^exp, was evaluated by subtracting the linearly extrapolated initial and final heat capacity functions, with correction for the difference between these functions by using a sigmoid baseline [47]. A typical value of the partial specific volume for globular proteins (0.73 cm³/g) was accepted arbitrarily, since it does not influence the calculated excess heat capacity. In the calculation of the molar thermodynamic quantities, a molecular weight of 80,000 Da was used, which corresponds to the dimeric state of the luciferases.

MD Simulation
The previously prepared structures of the V. harveyi and P. leiognathi luciferases [37] were taken for molecular dynamics (MD) simulations at different temperatures (5, 15, 27, 45, 60 °C) using the GROMACS 2020.4 software package and the CHARMM36 force field [48,49]. The enzymes were solvated in a cubic box with periodic boundary conditions and at least 12 Å from the protein to each of the box edges. Two strategies were applied for the solvation procedure: (1) TIP3P explicit water [50] was used for the simulations at all studied temperatures; and (2) a mixture with sucrose molecules [51], adjusted to simulate a concentration of 30 wt%, was used for the molecular dynamics at 300 and 333 K. The net charge of the systems was neutralized with sodium ions. The minimization of each system was performed with the steepest descent method (maximum force of 1000.0 kJ/mol). Each system was then heated, and isochoric and isobaric equilibrations were performed sequentially for 10 ns each with a 2 fs time step. A V-rescale thermostat at the required temperature and a Parrinello-Rahman barostat at 1 bar pressure were used. Protein heavy atoms were restrained during equilibration [52,53]. The cutoff distance for the short-range non-bonded interactions was 10 Å. Electrostatic interactions were treated using the particle-mesh Ewald method [54]. An integration step of 2.0 fs was used, and bonds were constrained with the LINCS algorithm [55]. We conducted 100-ns molecular dynamics simulations for each system. Each MD run was repeated three times to obtain average values of the structural parameters for the studied proteins.
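For orientation, the production-run settings named above map onto a GROMACS .mdp parameter file roughly as follows. This is a hedged sketch assembled from the parameters stated in the text (V-rescale thermostat, Parrinello-Rahman barostat at 1 bar, PME electrostatics, 10 Å cutoffs, LINCS constraints, 2 fs step, 100 ns); it is not the authors' actual input file, and unstated options are left at illustrative values:

```
; Production MD sketch (not the authors' original file)
integrator           = md
dt                   = 0.002          ; 2 fs integration step
nsteps               = 50000000       ; 100 ns total
tcoupl               = V-rescale      ; thermostat named in the text
tc-grps              = Protein Non-Protein
tau-t                = 0.1 0.1
ref-t                = 300 300        ; set per simulated temperature
pcoupl               = Parrinello-Rahman
ref-p                = 1.0            ; 1 bar
rcoulomb             = 1.0            ; 10 A short-range cutoff
rvdw                 = 1.0
coulombtype          = PME            ; particle-mesh Ewald
constraints          = h-bonds
constraint-algorithm = lincs
```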
Conclusions
In this study we aimed to compare the effect of different temperatures on the structure and function of two bacterial luciferases, from the Vibrio harveyi and Photobacterium leiognathi species. Despite their high homology, these enzymes belong to different subfamilies with specific structural features of the active center and distinct reaction kinetics [20]. Thus, V. harveyi luciferase is classified as a "slow" enzyme, whereas P. leiognathi luciferase is a "fast" one. To the best of our knowledge, systematic studies of the thermolability of "slow" and "fast" bacterial luciferases had yet to be carried out. The results of our study indicate that the "fast" P. leiognathi luciferase is thermolabile and has maximal activity in a narrower temperature range than the "slow" V. harveyi luciferase. The latter is weakly sensitive to temperature change and remains active up to 35 °C. However, P. leiognathi luciferase provides twice the total quantum yield per single turnover under optimal conditions. At lower temperatures, the activity of the "fast" enzyme is also significantly higher than that of the "slow" one. This is probably due to the peculiarities of the dynamics of its functional mobile loop, resulting in an ability to stabilize the reaction intermediates more effectively, which is reflected in the smaller decay constant of the reaction. Moreover, our data revealed an interesting pattern: despite the greater number of intramolecular contacts disrupted during the unfolding of P. leiognathi luciferase, it has slightly lower temperature stability. Thus, the luciferase with the fast decay kinetics demonstrates the properties of a cold-adapted enzyme, while the luciferase with the slow kinetics is more resistant to high temperatures. All of this reflects the well-known interplay between the activity and structural stability of enzymes, implying that more stable proteins may not provide a high rate of catalysis due to the rigidity of their structure. In addition, we studied the effect of sucrose on the stability and activity of the two bacterial luciferases. Our investigation revealed that sucrose does not stabilize either enzyme in terms of activity, but it can protect the proteins from thermal denaturation. It is noteworthy that the influence of sucrose on the denaturation mechanisms of the two luciferases seems to be different and requires more detailed examination. Thus, after experimental research and molecular dynamics modeling of temperature effects, we have found that, despite their high homology, the V. harveyi and P. leiognathi luciferases exhibit distinct features in response to heating. However, to answer the question of whether the found structural and functional peculiarities are a sign of the entire bacterial luciferase
In vitro models of fetal lung development to enhance research into congenital lung diseases

Purpose: This paper aims to build upon previous work to definitively establish in vitro models of murine pseudoglandular stage lung development. These can be easily translated to human fetal lung samples to allow the investigation of lung development in physiologic and pathologic conditions.

Methods: Lungs were harvested from mouse embryos at E12.5 and cultured in three different settings, i.e., whole lung culture, mesenchyme-free epithelium culture, and organoid culture. For the whole lung culture, extracted lungs were embedded in Matrigel and incubated on permeable filters. Separately, distal epithelial tips were isolated by first removing mesothelial and mesenchymal cells and then severing the tips from the airway tubes. These were then cultured in either branch-promoting or self-renewing conditions.

Results: Cultured whole lungs underwent branching morphogenesis similarly to native lungs. Real-time qPCR analysis demonstrated expression of key genes essential for lung bud formation. The culture condition for epithelial tips was optimized by testing different concentrations of FGF10 and CHIR99021 and evaluating branch formation. The epithelial rudiments in self-renewing conditions formed spherical 3D structures with homogeneous Sox9 expression.

Conclusion: We report efficient protocols for ex vivo culture systems of pseudoglandular stage mouse embryonic lungs. These models can be applied to human samples and could be useful to paediatric surgeons to investigate normal lung development, understand the pathogenesis of congenital lung diseases, and explore novel therapeutic strategies.

Introduction
Impaired lung development can be caused by both genetic and environmental factors. In several conditions, such as congenital pulmonary adenomatoid malformation (CPAM), pulmonary sequestration, congenital diaphragmatic hernia (CDH), and oligohydramnios, the maldevelopment of the lung requires surgical attention. Chronic lung disease of premature babies, attributed to lung hypoplasia, is also a major problem in neonatal medicine. As congenital respiratory dysfunction threatens the neonate immediately after birth, and current postnatal treatments are suboptimal, novel strategies which aim to rescue impaired lung development prior to birth are desirable. With regard to CPAM and pulmonary sequestration, controversy exists over the efficacy of prophylactic surgical resection. Resection is currently justified by the risk of recurrent infection and a theoretically high probability of malignant transformation if left untreated [1,2]. Given the likelihood that genetic anomalies are involved in their etiology, the answer to the risk of carcinogenesis may lie within the functions of the molecular pathways involved in the pathogenesis of the malformations [3,4]. From this perspective, the development of in vitro models to understand the mechanisms of lung development will be key to unraveling these issues.

Mammalian lung development begins with the sprouting of primordial buds from the ventral side of the foregut endoderm. Once the primary lung buds form, they extend into the surrounding mesenchyme and begin the process of branching morphogenesis to finally give rise to the complex airway tree [5]. Lung branching morphogenesis occurs during the pseudoglandular stage (5-17 post-conception weeks in humans, E12-E15 in mice).
In mice this process has been well characterized, and branching appears to be highly stereotyped [6]; in humans, however, the resulting morphology of the complete airway tree is thought to be more susceptible to variation [7]. Generally, branching is achieved through the elongation and bifurcation of the epithelial bud tips; these consist of Sox9-positive multipotent progenitor cells which give rise to Sox9-negative bronchiolar, and later alveolar, descendants [8,9].

In light of recent advances, it has become possible to examine biological phenomena at the cellular (and subcellular) level. Investigation of the molecular mechanisms underlying normal lung development, as well as of pathogenic conditions associated with the fetal stage, is underway [10]. We additionally have an increased understanding of the key similarities and differences between humans and the mice used as model systems, particularly with regard to developmental mechanisms and the relevant signaling factors involved [9]. Notably, it has been shown that 96% of orthologous gene expression between developing mouse and human lung tip cells is conserved, indicating a high degree of phenotypic similarity between the two [11]. On the other hand, several critical differences have been recognized, such as the differential expression of the transcription factor SOX2 at the tip of the branching airways. In humans, SOX2 and SOX9 are co-expressed there throughout the pseudoglandular stage; however, Sox2 is consistently absent in mouse epithelial tips [11-13].

Here, we report in vitro models of pseudoglandular stage lung development using mouse embryonic tissue and a combination of three different approaches, namely whole organ culture, culture of mesenchyme-free epithelium rudiments, and fetal lung organoid culture. These models can be readily applied to human fetal lung samples and will allow investigation of the molecular mechanisms of fetal lung development through several avenues.

Whole lung culture
All animal experiments were performed by personnel holding a UK Home Office Personal Licence (PIL I7ED92582) in line with ethical approval. Wild-type CD-1 mice were mated and marked as E0.5 pregnant when they presented a vaginal plug. Pregnant mice were euthanized by cervical dislocation at E12.5, and the embryos were extracted through a laparotomy. Lungs were dissected under a dissecting microscope with extra care to preserve the whole structure. The trachea was divided just below the larynx. After brief washing in PBS, the lung explants were embedded in 10 μL drops of 100% Matrigel Growth Factor Reduced (MRF) solution (Corning) placed on Transwell® membranes (Corning). After the Matrigel drops underwent gelation, 800 μL of DMEM/F12 medium (Thermo Fisher Scientific) supplemented with 1% p/s (Thermo Fisher Scientific) and 0.1% bovine serum albumin (BSA, Sigma-Aldrich) was added to the wells below the membranes. The explants were incubated at 37 °C, 5% CO2, and 21% O2 for up to 72 h. The culture medium was replaced daily.

Mesenchyme-free mouse lung epithelium rudiment culture
E12.5 mouse lungs were washed in PBS and subsequently treated with 8 U/mL Dispase (Thermo Fisher Scientific) for 2 min at room temperature. Mesenchymal tissue was removed with tungsten needles as previously reported [11], and the lung epithelium was embedded in MRF drops in 48-well plates. After MRF gelation, 250 μL of DMEM/F12 supplemented with 1% p/s, 0.1% BSA, and 1% Insulin-Transferrin-Selenium (ITS, Thermo Fisher Scientific) was added.
Different combinations of human recombinant FGF10 (Peprotech) and CHIR99021 (Tocris) were tested, as described in Fig. 3.

Real-time PCR analysis
Total RNA was isolated with the RNeasy Micro kit (Qiagen) according to the manufacturer's instructions. Reverse transcription to cDNA was performed using the High-Capacity cDNA Reverse Transcription Kit (Thermo Fisher Scientific) according to the manufacturer's instructions. Real-time PCR was performed using TaqMan Gene Expression Assay probes (Thermo Fisher Scientific) and Master Mix (Thermo Fisher Scientific) on a StepOnePlus Real-Time PCR System (Applied Biosystems). The following TaqMan probes were used: Sox9 (Mm00448840_m1), Sox2 (Mm03053810_s1), and Cdh1 (Mm01247357_m1). Gapdh (Mm99999915_g1) was used as the reference gene to calculate 2^(−ΔCT). Relative gene expression was estimated as 2^(−ΔΔCT) by the Livak method, normalizing to the average 2^(−ΔCT) of E12.5 native lungs.
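As an illustration of the Livak normalization described above, the following minimal Python sketch reproduces the two steps; the Ct values are hypothetical (the paper does not report raw instrument exports), and the computation tool used by the authors is not stated:

```python
import numpy as np

# Hypothetical Ct values for Sox9 vs. Gapdh in a cultured explant
ct_sox9, ct_gapdh = 24.1, 18.3
# Hypothetical dCt values (gene - Gapdh) for E12.5 native lungs
dct_native = np.array([5.9, 6.1, 6.0])

# Expression relative to the native-lung average of 2^(-dCt)
rel_expr = 2.0 ** (-(ct_sox9 - ct_gapdh)) / np.mean(2.0 ** (-dct_native))
print(rel_expr)  # ~1.15-fold relative to E12.5 native lungs
```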
Immunofluorescence analysis
Lung explants were released from MRF using Cell Recovery Solution (Corning). The specimens were fixed in 4% paraformaldehyde (PFA, Sigma-Aldrich) for 30 min at 4 °C, washed three times in PBS, and dehydrated and cryoprotected in 30% sucrose overnight. The right and left lungs were separately embedded in Optimal Cutting Temperature compound (OCT, Thermo Fisher Scientific) and stored at −80 °C. Lung organoids were treated with Cell Recovery Solution and fixed in 4% PFA for 15 min at 4 °C. Retrieved organoids were embedded in OCT and frozen at −80 °C. Sections were cut at a thickness of 7 and 10 µm for whole lungs and organoids, respectively, using a cryostat (Leica) and stored at −20 °C.

Statistical analysis
For the epithelial tip culture model, a one-way ANOVA test was performed to assess differences in the number of buds at each time point across the different culture conditions. A p value of less than 0.05 was regarded as statistically significant. Statistical analyses were performed using R v3.6.3 (The R Foundation for Statistical Computing, http://www.R-project.org).
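The paper states that the bud-count comparison was done with one-way ANOVA in R; the equivalent minimal sketch below uses Python with SciPy and hypothetical normalized branch counts, purely for illustration:

```python
from scipy.stats import f_oneway

# Hypothetical branch counts at day 3, normalized to day 0,
# for the three FGF10/CHIR99021 conditions (3-5 replicates each).
high_high = [1.2, 1.4, 1.1, 1.3]
low_high  = [1.5, 1.3, 1.6, 1.4, 1.5]
low_low   = [3.8, 4.2, 4.0]

stat, p = f_oneway(high_high, low_high, low_low)
print(f"F = {stat:.2f}, p = {p:.4f}")  # p < 0.05 -> conditions differ
```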
Ex vivo fetal lung culture
Pseudoglandular stage mouse lung cultures were established to recapitulate branching morphogenesis ex vivo. Branching morphogenesis of lung explants from E12.5 embryos cultured at a standard air-liquid interface was assessed by bright-field microscopy (Fig. 1). Pictures were captured at 24-hour intervals, demonstrating constant growth of the lungs and generation of new branches. The morphology of the developing airways was compared to that of E12.5 and E15.5 native lungs, which correspond to day 0 and day 3 of the cultured lungs, respectively. Although the number of distal buds was decreased compared to E15.5 native tissue, a similar morphology was observed in the cultured sample at day 3 by E-Cadherin and Collagen-4 immunostaining, suggesting a maintained process of branching morphogenesis. A time course analysis was performed using real-time PCR (qPCR) to quantify the expression of key epithelial markers, demonstrating sustained expression of Cdh1, Sox2, and Sox9 in the cultured specimens over the experimental period (Fig. 2).

Optimization of culture conditions for ex vivo culture of mesenchyme-free lung epithelium rudiments
The molecular mechanisms which instruct a subgroup of tip cells to undergo symmetry breaking and branching initiation are still not completely understood. However, it has been reported that mesenchyme-free mouse lung epithelium rudiments can spontaneously branch ex vivo if exposed to recombinant Fgf10, showing that pre-patterned mesenchyme is not essential to trigger branching [15,16]. Hence, the ex vivo culture of fetal lung epithelium represents an ideal model system to investigate the contribution of localized extracellular cues to branching morphogenesis. We enzymatically and mechanically removed the mesenchyme from E12.5 mouse embryonic lungs and embedded the epithelium rudiments in Matrigel drops, to mimic in vivo mechanical stiffness while allowing branching when supplemented with recombinant Fgf10 [17]. Recent studies suggest that the addition of Wnt signaling activators such as the GSK3 inhibitor CHIR99021 enhances branching ex vivo in conjunction with Fgf10 [18,19]. We sought to further optimize this culture condition by screening different concentration combinations of Fgf10 and CHIR99021, to obtain at least four cycles of epithelial branching starting from day 0. Specifically, we compared a "high/high" FGF10/CHIR combination with a "low/high" and a "low/low" one. Both the high/high and the low/high culture conditions led to enlarged tips by day 4, suggesting over-proliferation of Sox9-positive progenitor cells, which prevents the formation of new branches (Fig. 3). Conversely, the low/low condition allowed defined branching over 3 days in culture and a significant increase compared to the other two conditions, as revealed by quantification of branches normalized to day 0 (Fig. 3). These results build upon and are consistent with those reported previously [18]. These optimized culture conditions establish an ex vivo model of mesenchyme-free spontaneous lung branching, which may be manipulated to investigate the contribution of specific extracellular cues.

[Fig. 3: Blue: high FGF10 and high CHIR99021; red: low FGF10 and high CHIR99021; yellow: low FGF10 and low CHIR99021. High FGF10: 500 ng/mL, low FGF10: 200 ng/mL; high CHIR99021: 3 μM, low CHIR99021: 1 μM. Each condition includes 3-5 biological replicates. Analysis of covariance; ***p < 0.001; **p < 0.01; *p < 0.05. Right: representative microscopic images of the epithelial tip culture in the different conditions. Bulging of the buds was observed in the high FGF10/high CHIR99021 and low FGF10/high CHIR99021 conditions, whereas the buds in the low FGF10/low CHIR99021 condition displayed productive branching, especially at day 3 and day 4. Scale bar 100 μm.]

Testing of an established protocol of self-renewing fetal lung organoid culture
Fetal lung organoids from bud tip cells allow recapitulation in vitro of the spherical symmetry of a bud tip cell. Taking advantage of an already published study based on the sorting of Sox9-positive cells from E12.5 mouse lung epithelium [14], we tested the potential of mesenchyme-free tips to generate fetal lung organoids without cell sorting (Fig. 4). Immediately after Matrigel embedding, residual mesenchymal cells are still present in the culture. Then, from Passage 1, a homogeneous culture of spherical organoids can be observed, which is conserved for at least six passages. The organoids uniformly express Sox9 by immunostaining analysis, along with apical polarization of F-actin, a well-established feature of bud tips in vivo and a key factor in generating the mechanical tension required for branching.

[Fig. 4: Mouse organoid culture of self-renewing bud tip cells. Left: representative immunofluorescence images of organoids. In the zoomed pictures (lower row), the top side is the apical (luminal) side. Epithelial cells forming 3D spheroids uniformly expressed Sox9. Scale bar 50 μm. Right: representative microscopic images of organoids at different passages. Self-renewing capacity was maintained up to six passages.]

Discussion
In this work, we present three different models that recapitulate mouse fetal lung development, specifically lung branching morphogenesis, in an ex vivo environment. First, we demonstrated that whole lungs cultured over 3 days maintained the expression of key genes related to epithelial branching and appeared morphologically similar to native tissues. These findings support the value of the ex vivo culture system in the investigation of early fetal lung development. By virtue of its simplicity, this model has good potential for studying the reactions of lungs to external factors, such as mechanical pressure, oxidative stress, and nanoparticles loaded with growth factors or microRNAs. Secondly, we confirmed that mesenchyme-free epithelium rudiments are able to generate new branches when supplemented with an optimized combination of recombinant FGF10 and CHIR99021. This supports the proposed role of mesenchymal cells as suppliers of growth factors and signaling molecules rather than as constructors of the branching morphology [16]. Finally, we tested and slightly modified an established protocol for obtaining fetal lung organoids from epithelial tips. Sox9-positive organoids represent self-renewing progenitor cells of the airway, which retain both multipotency and proliferative potential, allowing the investigation of differentiation protocols toward a bronchiolar or alveolar fate.

Traditionally, research into lung development in the field of pediatric surgery has been based on in vivo studies in animal models [20-22]. However, the utility of animal models is generally limited by the heterogeneity of models and lower cost-effectiveness, making it difficult to validate observations by repeating experiments in a consistent manner. Moreover, the difference in gene and protein expression between animals and humans is a hurdle for the clinical translation of any discoveries. For example, one of the most widely used animal models of CDH, nitrofen-induced pulmonary hypoplasia, is not replicable in humans, and thus its relevance to human CDH is unclear. Compared to in vivo animal models, ex vivo models provide opportunities to scrutinize phenomena in a more consistent manner and in fine detail at the cellular and subcellular level. The pseudoglandular stage, the focus of this paper, is a crucial period for lung organogenesis, during which disturbance of branching morphogenesis may leave a critical and unalterable deficiency in the final lung morphology. In CDH, compression of the lung on the affected side can theoretically happen as early as the mid-pseudoglandular stage, and given that the initial timing of organ herniation varies across patients, whether the lung is compressed during this period may have a large impact on the severity of lung hypoplasia [23]. Therefore, investigating how and to what extent mechanical force affects branching morphogenesis is important for further understanding of the pathogenesis of pulmonary hypoplasia in CDH [24].
Overexpression of FGF10 during the pseudoglandular stage has been demonstrated to induce the formation of cystic lesions resembling CPAM type 1 [25]. In our epithelial tip model, tips treated with high FGF10 or high CHIR99021 exhibited a bulging, cystic morphology without proper separation into new branches. This suggests that FGF10 regulation is necessary for the generation of daughter buds and is likely balanced by Wnt signaling, as indicated by effective branching in the low CHIR99021 condition. This finding is in accordance with a previous report showing cystic malformation caused by deficiency of the Wnt receptor Frizzled 2 in branching airway epithelium [19]. With regard to transcription factors, diffuse expression of SOX9 and SOX2 in the epithelial cells lining the cystic lesions of CPAM types 1 and 2 has been reported [3,26], suggesting disrupted regulation of the respective genes in the cystic lesions. In our whole lung culture, lung explants continued to generate new buds while epithelial airway genes were also expressed. By manipulating this system, we will be able to obtain quantitative and spatial information on gene and protein expression, providing new insight into the inception of cyst formation. Understanding these pathways will be key to addressing the risk of malignancy related to congenital lung malformations.

Conclusion
Congenital lung malformations are consequences of aberrations in the genetic and epigenetic regulation of normal developmental processes. A proper understanding of these mechanisms at different levels, specifically the gene, protein, and tissue levels, is vital for exploring and testing novel therapies, including fetal intervention. We have provided evidence for the utility and reliability of three complementary in vitro models of lung development which have already been adopted in basic biology but could be particularly useful to paediatric surgeons. These in vitro systems can be harnessed to develop a more comprehensive understanding of the developing fetal lung and will be key to the successful translation of basic research to clinical practice.

and the NIHR Great Ormond Street Hospital Biomedical Research Centre. P.D.C. is supported by the National Institute for Health Research (NIHR-RP-2014-04-046). F.M. is supported by an NIHR BRC Catalyst Fellowship. S.S. is supported by a Japan Society for the Promotion of Science overseas research fellowship (310072). The authors would like to thank Brendan Christopher Jones for reviewing this manuscript.

Compliance with ethical standards
Conflict of interest: None of the authors has any conflict of interest related to this article.
Ethical approval: All animal experiments were performed by personnel holding a UK Home Office Personal Licence (PIL I7ED92582) in line with ethical approval.
Informed consent: This article does not contain any studies with human participants requiring informed consent.

Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material.
If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Implementation Analysis of COVID-19 Vaccination Policy in Southwest Maluku Regency

The implementation of the COVID-19 vaccination policy can be influenced by several factors, including communication, available resources, task disposition

INTRODUCTION
One of the Government's efforts to tackle the surge of COVID-19

MATERIALS AND METHODS
This research is a descriptive qualitative study in which data collection techniques include observation and interviews. The research was conducted in Southwest Maluku Regency in October-November 2022, with a total of 5 informants: a key informant, namely the Regent of Southwest Maluku as the chairman and spokesperson of the Task Force; regular informants, including the Head of the Regional Disaster Management Agency, the Head of Surveillance and Immunization of the Southwest Maluku District Health Office, and the Coordinator of the COVID-19 Task Force Expert Team; and a regular informant from among community leaders.

RESULTS
The COVID-19 vaccination policy implemented by the Indonesian government is a top-down approach aimed at addressing the surge in COVID-19 cases in the country. The effectiveness of the vaccination program can be influenced by communication, resources, disposition, and bureaucratic structure. The study findings revealed the following:

Communication: There are several indicators of effective communication in implementing government policies, namely transmission, clarity, and consistency [1]. The Coordinator of the COVID-19 Task Force of Southwest Maluku stated that:

"The government of Southwest Maluku Regency is actively conducting socialization regarding COVID-19 and the importance of implementing COVID-19 vaccination to prevent a resurgence of cases and create immunity within the community" (Coordinator of the COVID-19 Task Force).

"In the initial stages of implementation, we faced many challenges due to misinformation received by the public, such as false information about the dangers of the vaccine, which caused various issues" (Coordinator of the COVID-19 Task Force).

"The challenges faced in implementing the COVID-19 vaccination policy are related to the demographic nature of Southwest Maluku, which consists of islands. This presents its own obstacles, such as inadequate access, communication barriers due to limited network coverage, and a time-consuming distribution process" (Regent of Southwest Maluku).

"At the beginning of the implementation, people often heard inaccurate news about the vaccine. Therefore, religious leaders frequently encourage their congregations to participate in the program together, aiming to overcome the COVID-19 pandemic in Southwest Maluku" (Religious Leader).

Resources: Based on interviews conducted with the Regent of Southwest Maluku as the head of the COVID-19 handling team:

"The budget for COVID-19 handling primarily comes from two sources, namely the state budget (APBN) and the regional budget (APBD). This allows the public to receive the vaccine free of charge at designated locations" (Regent of Southwest Maluku).
"Currently, the implementation of the COVID-19 vaccination policy receives funding from the Ministry, regional government, health department, and the National Disaster Management Agency (BNPB).The budget is considered sufficient and well-managed.In addition to covering consumables and medical equipment, it also includes provisions for the consumption and transportation needs of the vaccinators" (Spokesperson for the COVID-19 Handling Team of Southwest Maluku). "The resources for handling COVID-19 and implementing the COVID-19 vaccination, including the first, second, and booster doses, come from two sources.The vaccine funding is obtained from the state budget (APBN) channeled through the provincial health department.Meanwhile, funds for implementation, such as socialization activities and vaccination events, are acquired from the regional budget (APBD) through the contingency fund, both in 2021 and 2022" (Chairperson of BNPB Southwest Maluku). Disposition: Regarding the implementation of vaccination in the district, the Regent mentioned: "Currently, the number of vaccinators is considered sufficient.The vaccination teams are coordinated by health centers, hospitals, and elements of the Indonesian National Armed Forces and Police" (Regent of Southwest Maluku). "The vaccination implementation involves multiple teams coordinated by the health department, including health centers and hospitals.The Indonesian National Armed Forces and Police also provide significant assistance in the implementation process" (Spokesperson for the COVID-19 Handling Team). "The number of vaccinators in the district is adjusted based on the number of cases.Currently, the number of personnel is sufficient.However, due to the overwhelming response from the community, the task force sometimes faces challenges in fulfilling their duties.Therefore, the government ensures an equitable distribution of personnel, reaching all elements of the community on the islands" (Chairperson of the Surveillance Team for COVID-19 Handling). Organizational Structure: Based on interviews conducted with the Head of the COVID-19 Handling Task Force in Southwest Maluku: "The handling of COVID-19 and the acceleration of COVID-19 vaccination in Southwest Maluku district are directly led by the Regent and involve several coordination teams.Each team has its own tasks and responsibilities, working synergistically" (Regent of Southwest Maluku). "In terms of bureaucracy, the task force is currently led by the Regent, with guidance from superiors, and has eight team coordinators. Decision-making in the implementation of the COVID-19 vaccination is done through deliberation. The government often consolidates with relevant stakeholders to discuss current issues and their solutions, as well as conducting socialization activities" (Spokesperson for the COVID-19 Handling Team). "Communication barriers arise due to the geographical nature of the islands, which affects cellular signal and Wi-Fi coverage.Additionally, the vaccine distribution system needs to consider expiration dates and distribution procedures" (Head of BNPB). DISCUSSION George C. Edwards III, in Mustafa Lutfi-Kurniawan's article (2012:121-125), discusses four factors or variables that play a role in the success of policy implementation. 
The variables or factors that influence the success or failure of policy implementation are as follows. Communication: Communication is the process of conveying information from a communicator to a recipient. In the context of policy communication, it refers to the process of delivering policy information from policy-makers to policy implementers. The information needs to be conveyed to the implementers so that they understand the substance, purpose, direction, and targets of the policy. This enables the implementers to prepare for the implementation of the program and ensures efficient and coordinated operations. Communication as a factor in policy implementation involves key aspects such as information transformation (transmission), information clarity, and information consistency. Transformation aims to ensure that information is not only transmitted to program implementers but also to relevant parties and target groups. Clarity ensures that information is easily understood and avoids misinterpretation by policy implementers, target audiences, or related stakeholders. Consistency, in turn, expects that the information conveyed remains consistent to avoid confusion among policy implementers, target audiences, or related parties. 3 According to Edward III's theory of Policy Implementation (1980), communication can be considered appropriate, as the government has made efforts to disseminate information about the vaccination policy implementation in Southwest Maluku Regency. In addition to dissemination, the government has made persuasive approaches to religious leaders in an effort to educate the community, as religious leaders have significant influence in community education. 4 Resources: According to Van Meter and Van Horn, as cited in Utami (2022), 5 besides policy standards and goals, the implementation of a policy requires support from both human resources and non-human resources. 6 Human resources are the most important resource in determining the success of an implementation. Each stage of implementation requires qualified human resources appropriate to the tasks indicated by the policy. Additionally, financial resources are also crucial alongside human resources. In the context of COVID-19 mitigation, human resources include healthcare professionals directly involved in controlling the spread of the virus, such as doctors and healthcare workers, as well as those who are part of the overall COVID-19 mitigation efforts, such as government officials in the Health Department. Doctors and healthcare workers play a crucial role in COVID-19 mitigation as they administer COVID-19 vaccinations to the public. 5 Research findings reveal that resources in the implementation of COVID-19 include human resources (vaccinators), the competence or abilities of implementers, budgetary resources, and equipment resources. Disposition: One of the factors influencing the effectiveness of policy implementation is the disposition of implementers. It refers to how the implementing party, in this case the government, responds to emerging issues with a sense of responsibility, honesty, and commitment in policy implementation. In this regard, based on Edward III's theory of policy implementation (1980), implementation can be considered reasonably successful in terms of disposition, as the number of implementing personnel has been fulfilled and personnel have been distributed effectively throughout the archipelago's demographics. 4
Organizational structure: Bureaucratic structure plays a role in the systematic and efficient implementation of the COVID-19 vaccination program. The purpose of the bureaucratic structure is to facilitate task distribution and responsibility allocation to individuals involved in the COVID-19 vaccination program based on their potential and competence in their respective fields. The bureaucratic structure has two aspects: Standard Operating Procedures (SOP) and the fragmentation, or distribution, of responsibilities. 7 SOP functions as a guideline for the implementation of the COVID-19 vaccination program. It serves as the legal basis for the implementation, ensures communication among the vaccination team, and serves as a measure of the team's discipline in carrying out the program. Hence, SOP is crucial in the implementation of the COVID-19 vaccination program. The SOP implemented in the vaccination program in Southwest Maluku Regency (Kabupaten Maluku Barat Daya) is in accordance with the Decree of the Minister of Health of the Republic of Indonesia Number HK.01.07/MENKES/4638/2021 concerning Technical Guidelines for COVID-19 Vaccination Implementation. Therefore, the COVID-19 vaccination implementation in Southwest Maluku Regency adheres to the SOP outlined in the Minister of Health's decree. The implementation of the COVID-19 vaccination program is conducted by the central government in collaboration with provincial and district/city governments, as well as legal entities/businesses. The District/City Health Office conducts data collection through vaccinations, which serves as the basis for determining vaccine allocation, distribution, and logistical needs. Additionally, the Health Office collects data on healthcare facilities that will be designated for COVID-19 vaccination services. 5
2023-11-05T16:09:13.808Z
2023-11-02T00:00:00.000
{ "year": 2023, "sha1": "efeb16720e1eb0ab5ea54fa3c49740187fbfe6db", "oa_license": "CCBY", "oa_url": "http://phcogj.com/sites/default/files/PharmacognJ-15-5-843.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "451aa48affb0f523c56aa92960a2cb34128e3758", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [] }
261889903
pes2o/s2orc
v3-fos-license
Acute Carotid Artery Stenting Versus Balloon Angioplasty for Tandem Occlusions: A Systematic Review and Meta‐Analysis Background Despite thrombectomy having become the standard of care for large‐vessel occlusion strokes, acute endovascular management in tandem occlusions, especially of the cervical internal carotid artery lesion, remains uncertain. We aimed to compare the efficacy and safety of acute carotid artery stenting to balloon angioplasty alone for treating the cervical lesion in tandem occlusions. Similarly, we aimed to explore those outcomes' associations with technique approaches and use of thrombolysis. Methods and Results We performed a systematic review and meta‐analysis to compare functional outcomes (modified Rankin Scale), reperfusion, symptomatic intracranial hemorrhage, and 3‐month mortality. We explored the association of first approach (anterograde/retrograde) and use of thrombolysis with those outcomes as well. Two independent reviewers performed the screening, data extraction, and quality assessment. A random‐effects model was used for analysis. Thirty‐four studies were included in our systematic review and 9 in the meta‐analysis. Acute carotid artery stenting was associated with higher odds of modified Rankin Scale score ≤2 (odds ratio [OR], 1.95 [95% CI, 1.24–3.05]) and successful reperfusion (OR, 1.89 [95% CI, 1.26–2.83]), with no differences in mortality or symptomatic intracranial hemorrhage rates. Moreover, a retrograde approach was significantly associated with modified Rankin Scale score ≤2 (OR, 1.72 [95% CI, 1.05–2.83]), and no differences were found by thrombolysis status. Conclusions Carotid artery stenting and a retrograde approach had higher odds of successful reperfusion and good functional outcomes at 3 months than balloon angioplasty and an anterograde approach, respectively, in patients with tandem occlusions. A randomized controlled trial comparing these techniques with structured antithrombotic regimens and safety outcomes will offer definitive guidance in the optimal management of this complex disease. There remains a paucity of data in regard to the management of the concomitant cervical lesion. 4 Endovascular management of tandem occlusions (TOs) varies widely according to clinical and technical considerations and the proceduralist's preference. 5 Revascularization of the cervical lesion can be performed in an acute or a deferred manner. When performed acutely, carotid artery stenting (CAS)±balloon angioplasty (BA) is a definitive treatment strategy, performed before or following intracranial mechanical thrombectomy (MT). Acute BA, suction aspiration of the cervical segment, or MT alone entail deferred treatment with endarterectomy or stenting in the following days or weeks. Each treatment carries potential risks that are taken into consideration when selecting the best treatment method. For instance, acute CAS involves the risk of symptomatic intracranial hemorrhage (sICH) associated with antithrombotic use in freshly reperfused brain tissue and stent thrombosis. 6,7 In contrast, deferred cervical revascularization can be done in a more planned and secure setting. 8 Although it avoids the immediate need for antithrombotics and potential sICH risk, 5 it carries the risk of stroke recurrence and/or progression. 9 Recent studies suggest a benefit in functional outcomes and reperfusion rates when CAS and MT are performed emergently, without increased rates of sICH.
10-12 Moreover, Anadani et al found intravenous thrombolysis (IVT) was not associated with an increased risk of hemorrhagic transformation. 11 Nevertheless, data from randomized controlled trials on optimal management, procedural features, and safety outcomes are still missing, and all the above-mentioned approaches are used in clinical practice. 5 We aimed to compare the efficacy and safety of CAS±angioplasty with BA alone of the cervical internal carotid artery (ICA) in treating TOs through an aggregated-data meta-analysis of the recent literature. Additionally, we aimed to explore the association of the technique approaches (anterograde and retrograde) with the functional and safety outcomes, and of IVT with sICH. CLINICAL PERSPECTIVE What Is New? • Cervical carotid stenting is found to be an effective treatment for tandem occlusions, showing better functional outcomes and reperfusion rates when compared with balloon angioplasty. • Performing mechanical thrombectomy before cervical recanalization was observed to be the most effective approach (retrograde approach). • Moreover, intravenous thrombolysis in patients receiving cervical carotid stenting was observed to be safe, without an increased risk of symptomatic intracranial hemorrhage. What Are the Clinical Implications? • Acute cervical carotid stenting after cerebral reperfusion is a reasonable therapeutic option for patients with intracranial large-vessel occlusion and concomitant cervical tandem occlusions. • A multicenter randomized clinical trial is the natural next step to achieve a standard of care paradigm. METHODS Search Strategy and Selection Criteria This systematic review and meta-analysis follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. We executed a comprehensive literature search using a combination of Medical Subject Headings terms and free text for the concepts of "tandem occlusion," "thrombectomy," "stent," "acute stroke," and "carotid artery disease" in the MEDLINE database, Embase, and the Web of Science from January 2015 through May 2020. We included studies from 2015 onward because the randomized controlled trials supporting the benefit of MT as the standard treatment for acute large-vessel occlusion started at that time. The complete search strategy is detailed in Data S1. We searched for studies assessing patients presenting with acute high-grade stenosis (70%-99%) or occlusion of the cervical ICA with ipsilateral occlusion of the distal ICA and/or middle cerebral artery treated with MT and endovascular treatment of the extracranial ICA (CAS and/or BA). Inclusion criteria were randomized controlled trials, cohort and cross-sectional studies, case series with ≥10 patients, and case-control studies reporting clinical outcomes (modified Rankin Scale [mRS] scores), complications (sICH, embolization, and death), and reperfusion rates. We only included publications with full text in English. We excluded animal models, protocols, reviews, studies with <10 patients, case reports, and other meta-analyses. When we encountered studies with multiple reports from the same patient cohort, we kept the report with the higher number of patients and longer follow-up times. Furthermore, we searched the references of all the included studies to find additional studies. Two independent reviewers initially screened all identified records by reading all titles/abstracts using a free online application for systematic reviews (https://rayyan.qcri.org/). Then, potentially relevant articles were reviewed as full text.
The reviewers performed data extraction from these studies and cross-checked the extracted data. Disagreements in any of these steps were resolved after discussion or with a third senior reviewer when needed. Identified studies from the literature search were then further evaluated for inclusion in the meta-analysis. For the main meta-analysis on the best cervical technique, we only included studies with complete data that compared our outcomes of interest between acute CAS±BA and BA alone. Similarly, for the best order of treatment, we included studies with acute CAS that compared the outcomes between anterograde and retrograde approaches. Finally, for evaluating the association of IVT status with sICH, we included studies of acute CAS that reported outcomes for patients both with and without IVT. Baseline Data and Outcome Variables From each study, we collected demographic information, including number of participants, age, sex, race and ethnicity, comorbidities (hypertension, atrial fibrillation, dyslipidemia, diabetes, coronary artery disease, ICA stenosis), smoking status, initial assessment at presentation, use of IVT, stroke workflow metrics (onset to arrival, onset to puncture, onset to revascularization, and puncture to revascularization), location of the intracranial occlusion, type of endovascular interventions, number of patients undergoing each type of treatment, first endovascular approach (anterograde when the proximal ICA occlusion was treated first and retrograde when the intracranial occlusion was treated first), devices used (stent and balloon type, embolic protection device), concurrent medications (tissue-type plasminogen activator, anticoagulants, and antiplatelets), procedure-related complications (new stroke, hemorrhage, hemodynamic impairment, acute stent thrombosis), technical success rates of carotid revascularization, and outcome variables. We separately extracted data of interest about the TO revascularization approach and technique, including revascularization order in patients undergoing CAS (anterograde versus retrograde), patients with stenting, and patients with angioplasty only, as available. The primary outcome was functional outcome scored by mRS at 90 days. We dichotomized the results as good (0-2) and poor (3-6) outcomes. Secondary efficacy outcomes included reperfusion status assessed by the modified Thrombolysis in Cerebral Infarction grading system. Safety outcomes included sICH as defined by each study and mortality at 90 days. Study Quality and Risk of Bias Assessment We evaluated the quality of the studies using tools according to each study's design. For cohorts with control groups, we used the risk of bias in nonrandomized studies of interventions tool, 13 with the overall risk of bias rated as low, moderate, serious, or critical. Single-arm cohorts were evaluated using the National Institutes of Health quality assessment tool for before-after (pre-post) studies with no control group, 14 with the overall risk of bias rated as good, fair, or poor. A revised Cochrane risk of bias in randomized trials tool 15 was used for randomized controlled trials, with the overall risk of bias rated as low, some concerns, or high risk. Statistical Analysis We used a random-effects model (Mantel-Haenszel method) for combining cumulative event rates, accounting for heterogeneity (I²) between studies, to directly compare the efficacy and safety outcomes between CAS and BA alone. Summary effect measures (odds ratios [ORs]) were calculated using data extracted from the primary studies and were compared using 95% CIs and prediction intervals. Similarly, we compared the same outcomes after classifying the patients undergoing CAS by first endovascular approach (anterograde versus retrograde) and IVT status (received or not). Finally, we evaluated the heterogeneity between studies with visual assessment of forest plots, as well as the χ² test. We defined important interstudy heterogeneity as an I² value of >50% and a χ² test result of <0.1. Analysis was conducted using Review Manager 5. 16 Publication bias was graphically assessed by funnel plot inspection and analyzed by the Egger test conducted in R software (R Foundation for Statistical Computing) for Windows version 3.5.2. The data that support the findings of this study are available from the corresponding author upon reasonable request.
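As an illustration of the pooling just described, the following Python sketch pools study-level odds ratios with a DerSimonian-Laird random-effects model and computes I², a 95% prediction interval, and Egger's intercept test. It is a minimal sketch only: the 2×2 tables are invented placeholders rather than data from the included studies, and it uses inverse-variance weighting, whereas the Review Manager analysis reported here uses Mantel-Haenszel weighting.

```python
# Minimal sketch of random-effects pooling of odds ratios.
# The 2x2 tables below are hypothetical, not data from this meta-analysis.
import numpy as np
from scipy import stats
import statsmodels.api as sm

studies = [
    # (events_cas, n_cas, events_ba, n_ba) -- placeholder counts
    (40, 70, 25, 60),
    (55, 100, 30, 80),
    (22, 45, 18, 50),
]

# Per-study log odds ratios and variances (0.5 continuity correction).
y, v = [], []
for a, n1, c, n2 in studies:
    b, d = n1 - a, n2 - c
    a, b, c, d = (x + 0.5 for x in (a, b, c, d))
    y.append(np.log((a * d) / (b * c)))
    v.append(1 / a + 1 / b + 1 / c + 1 / d)
y, v = np.array(y), np.array(v)

# DerSimonian-Laird between-study variance (tau^2) and I^2 heterogeneity.
w = 1 / v
mu_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - mu_fixed) ** 2)
df = len(y) - 1
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
I2 = 100 * max(0.0, (Q - df) / Q) if Q > 0 else 0.0

# Random-effects pooled OR with 95% CI and 95% prediction interval.
w_re = 1 / (v + tau2)
mu = np.sum(w_re * y) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
ci = np.exp([mu - 1.96 * se, mu + 1.96 * se])
t_crit = stats.t.ppf(0.975, df=len(y) - 2)
half = t_crit * np.sqrt(tau2 + se ** 2)
pi = np.exp([mu - half, mu + half])

# Egger's test: regress standardized effects on precision;
# an intercept different from 0 suggests funnel-plot asymmetry.
egger = sm.OLS(y / np.sqrt(v), sm.add_constant(1 / np.sqrt(v))).fit()

print(f"OR {np.exp(mu):.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f}), I2={I2:.0f}%, "
      f"95% PI {pi[0]:.2f}-{pi[1]:.2f}, Egger intercept p={egger.pvalues[0]:.2f}")
```

Dedicated tools (Review Manager, or the metafor package in R) implement the same quantities with additional refinements, so published values can differ slightly from a hand-rolled computation like this one.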
Literature Search and Study Selection We initially identified 1404 articles through database searching, and 1105 records were screened after duplicates were removed, of which 59 full texts were assessed for eligibility. Twenty-five studies were excluded; 18 of them reported different endovascular techniques. Thirty-four studies were included in our systematic review (Table S1), and 9 were included in our meta-analysis assessing the best endovascular technique for TOs. Screening and selection of studies are detailed in the flow diagram (Figure 1). Studies included in the review contained the primary outcome and at least 1 of the other outcomes of interest. Thirty-three studies were retrospective cohort studies (22 single-center and 11 multicenter), and 1 was a single-center pilot randomized controlled trial. 17 Of the 33 cohort studies, 17 were from prospectively collected databases. Fifteen studies evaluated outcomes of CAS±BA, 7,10-12,18-28 and 1 evaluated BA-alone outcomes without comparison groups. 8 Two studies compared CAS in TOs to isolated proximal ICA stenosis. 6 Characteristics of the studies included in the systematic review are summarized in Table S1. Studies were heterogeneous on type of intervention, use of embolic protection device (6/34), antiplatelet regimen, information on concurrent management (heparin and antiplatelets), definition of sICH and any intracranial hemorrhage (8/34), and outcome evaluations (15/34 evaluated in-stent thrombosis or reocclusion). The type of endovascular approach was reported in most of the studies, but only 4 studies explored the best first approach (anterograde versus retrograde) and compared our outcomes of interest between both groups. Antiplatelet therapy was inconsistently reported in the included studies, and only 4 studies of patients undergoing CAS reported sICH rates in patients with and without IV tissue-type plasminogen activator use. Qualitative Analysis Of the 20 retrospective studies with control groups included in the systematic review and meta-analysis assessed by the risk of bias in nonrandomized studies of interventions tool, 11 studies had moderate overall risk of bias assessments, and 9 had serious overall risk of bias assessments (Figure S1A). When the bias was assessed per domain, 6 studies had serious risk because of missing data at the 3-month follow-up, 6 had serious risk of confounding bias, and 1 had risk because of the classification of the interventions. Only 1 study 42 mentioned a blinded assessment of mRS score at follow-up (Figure S1B).
Studies with no comparison group assessed by the pre-post tool were of variable quality: most were rated fair (10/14) or good (3/14), but 1 was rated poor 27 because of unclear objectives and inclusion criteria, failure to include all eligible participants, small sample size, and loss to follow-up (Table S2). The randomized pilot trial by Poppe et al had some concerns of bias because of deviation from the intended intervention and in the measurement of the outcome (Table S3). Meta-Analysis of Included Studies For assessing the best endovascular technique, we included 9 studies in the meta-analysis of the primary outcome. Stenting was associated with favorable mRS scores at 3 months (OR, 1.95 [95% CI, 1.24-3.05]) (Figure 2A). 9,36-43 The 95% prediction interval, however, ranged from 0.69 to 5.48, indicating some uncertainty about the treatment effect of CAS on functional outcome. No significant heterogeneity between studies was found for this outcome (I²=31.0%, χ²=11.65, P=0.17). Eight studies were included in the meta-analysis of the reperfusion outcome; CAS was associated with higher odds of Thrombolysis in Cerebral Infarction grade 2b-3 (OR, 1.89 [95% CI, 1.26-2.83]), with no significant heterogeneity between studies (I²=30.0%, χ²=10.01, P=0.19), although the 95% prediction interval ranged from 0.55 to 6.66 (Figure 2B). 9,36-38,40-43 There were no significant differences between the groups in the rates of sICH or mortality (Figure 2C and 2D). 9,36-43 A total of 4 studies provided data to compare safety and efficacy outcomes based on anterograde and retrograde approaches. 12,22,25,40 The meta-analysis of the primary outcome showed the retrograde approach was associated with higher odds of favorable mRS scores at 3 months (OR, 1.72 [95% CI, 1.05-2.83]) (Figure 3A through 3D). 12,22,25,40 A total of 4 studies provided data for comparing our safety outcome in patients treated with CAS, MT, and IV tissue-type plasminogen activator to patients treated with CAS and MT alone. 10,11,19,23 The meta-analysis showed no statistically significant difference in the rates of sICH between both groups (OR, 0.66 [95% CI, 0.19-2.30]) (Figure S2). No evidence of publication bias was found by inspecting the funnel plots and Egger test in most of the outcomes, except for the comparison on successful reperfusion stratified by the best first approach, which demonstrated a significant asymmetry by Egger test (P=0.02) (Figures S3 through S5). DISCUSSION This systematic review and meta-analysis demonstrates that acute cervical CAS in patients presenting with TOs is effective and safe in the setting of MT. Patients treated with CAS have significantly better reperfusion rates and 3-month functional outcomes, without a significant increase in the rates of sICH or mortality. A retrograde approach might have better functional outcomes and reperfusion rates as well. Finally, receiving IVT does not increase the sICH rates in patients who undergo CAS. Our meta-analysis attempted to address a common and controversial matter during the endovascular treatment of the proximal ICA in TOs. 44 As previously shown by our published international survey, emergent CAS±angioplasty and BA with local aspiration seem to be equally preferred techniques (41% versus 38%). 5 Certainly, both CAS and BA have advantages and risks to consider when facing a TO. Because of the wide variety of factors to weigh, proceduralists currently retain full discretion over individual case technique selections, which leads to wide practice variability. 45
CAS seems more effective in treating the cervical ICA lesion and directly treating the cause of the stroke when atherosclerotic plaques or dissections are the culprits. Thus, it decreases the risk of stroke recurrence immediately 9 while improving cerebral reperfusion, clot lysis, or even allowing spontaneous intracranial reperfusion, 22,24,46 at the expense of a potential risk for acute stent thrombosis and the need for early antithrombotic therapy. 47 On the other hand, BA may prevent futile stenting in patients with poor outcomes 8 and the need for antithrombotics, but with the shortcoming of a potential risk of thrombus formation and stroke recurrence. 37,48,49 In our analysis, CAS demonstrated an association with better functional outcomes at 3 months, and despite the concerns about increased risk of sICH in association with CAS, we did not find an increase in its rates or in mortality. Similarly, our results showed no statistical difference in sICH rates between patients undergoing CAS with and without IVT. Our results are in agreement with previous TITAN (Thrombectomy In Tandem Lesion) study reports of hemorrhagic transformation and support a more aggressive treatment using acute stenting with a dual antithrombotic regimen. 37,50 Initial studies reported pooled data of patients with TOs treated with MT±CAS and compared the recanalization rates and functional and safety outcomes with outcomes from patients with isolated intracranial large-vessel occlusion (LVO) treated with MT, which confirmed the benefit of MT in TOs, as previously published in the Goyal et al collaboration. 4,28,51,52 Other meta-analyses compared patients undergoing CAS with patients with no stenting, including in the latter group angioplasty±aspiration, suction alone, flow diversion, and clot wire-disruption as treatment modalities. 1,53 For instance, Dufort et al combined multiple cervical treatment regimens (BA, aspiration, and CAS) and patients with no acute treatment in the nonstenting cohort. Despite this heterogeneity, they found similar results to ours, favoring CAS over no stenting in regard to functional independence (OR, 1.43 [95% CI, 1.07-1.91]); however, the BA effect size could not be evaluated separately. 53 Interestingly, the Wilson et al meta-analysis compared CAS with BA, but they found no differences in efficacy and safety outcomes. The discrepancy with our study might be explained by the fact that they did not incorporate 2 recent studies, 1 of them a large single-center cohort of 163 patients that showed early neurological improvements in their CAS group. 9 Furthermore, they included noncomparative retrospective studies that only assessed 1 of the techniques without comparison groups (13 CAS and 3 BA studies), which might have introduced additional heterogeneity (I²≥50% for each technique and functional outcome). 51 In our meta-analysis, we exclusively included studies that defined the endovascular revascularization procedures performed in the proximal ICA (CAS or BA) and compared both techniques on our outcomes of interest. The order in which the cervical and intracranial lesions should be treated has been under investigation because of the various reasons for preferring one approach over the other. 12,54 Favorable outcomes with the retrograde approach might relate to faster reperfusion times of the intracranial LVO. Additionally, it involves a decreased risk of distal embolization and hemodynamic instability.
Yet the steno-occlusive lesion may make intracranial access difficult and restrict the technical success of the intracranial MT. Our meta-analysis is the first to demonstrate an association of the retrograde approach with good functional outcomes and successful reperfusion in patients undergoing CAS. Previously, Wilson et al reported no statistical differences between the approaches; however, their approach groups included studies with BA, aspiration, or CAS as neck recanalization techniques and were not directly comparing each technique, resulting in significant heterogeneity (I²=63%). 51 We included 4 studies of only patients undergoing CAS in evaluating this subject that reported revascularization rates and good functional outcome. 12,22,25,40 Despite our significant results, it is important to recognize that many confounders play a role in the outcomes of interest, such as infarct core, collateral vasculature, time to reperfusion, and grade of stenosis of the proximal ICA, which were not collected in all the aforementioned studies. Maus et al in their international multicenter study found a successful reperfusion rate of 92% in their retrograde cohort; however, the rate of favorable outcome was only 44%. 22 Our study has several limitations. First, almost all the studies included in our systematic review (33/34) have a retrospective design. Allocation to intervention and concomitant management were decided by the treating physicians. Factors that may have influenced both decisions, including premorbid functional state, stroke severity, type of antiplatelet agents used, and cause of stroke, were not systematically reported in the included series. Additionally, more patients were treated with CAS than BA alone, which makes the studies heterogeneous and potentially biased. Furthermore, the definition of outcomes and protocols varied across the different studies. We also observed wide prediction intervals when analyzing the primary outcomes. This may be driven by the small number of studies and reflects some uncertainty about the effects of the techniques. Moreover, it may indicate the existence of settings where stenting has a suboptimal effect. The antithrombotic regimen was not regularly reported in most of the studies, and some multicenter studies even differed between their centers' protocols. All these aspects should be considered when interpreting the results of our analysis. However, our meta-analysis has the strength of comparing acute stenting versus BA only and includes the most recent TO cohorts with severe stenosis ≥70%. We suggest a prospective evaluation of both techniques; an optimal antithrombotic regimen before and after emergent CAS in the acute stroke setting should also be further evaluated. CONCLUSIONS Acute CAS of the proximal ICA lesion in TOs is effective and safe. CAS and a retrograde approach have higher odds of successful reperfusion and good functional outcomes at 3 months than BA and an anterograde approach, respectively. CAS seems safe even in patients who received IVT, with no increase in sICH rates. Hence, an aggressive management of TOs should be considered in clinical practice. However, there are still insufficient data about stent patency and antithrombotic therapy that might influence the evaluated outcomes.
These limitations pave the way for a definitive, multicenter, high-quality randomized controlled trial evaluating both techniques, in which structured antithrombotic regimens and systematically measured efficacy and safety outcomes will provide more guidance in the optimal management of this complex disease. Sources of Funding None. Disclosures Dr Ortega-Gutierrez reports consulting for Medtronic and Stryker Neurovascular. Dr Zaidat reports consulting and speaking for Cerenovus, Stryker, Penumbra, and Medtronic. The remaining authors have no disclosures to report.
2022-01-14T06:16:51.338Z
2022-01-13T00:00:00.000
{ "year": 2022, "sha1": "9d5421b7b285847e01ffe23890e7a4bb2697b4bb", "oa_license": "CCBYNCND", "oa_url": "https://www.ahajournals.org/doi/pdf/10.1161/JAHA.121.022335", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f1faa3446b8b4ce82beb8106934ce9f866154a8f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
235711499
pes2o/s2orc
v3-fos-license
Evaluation of Probiotic Properties of Pediococcus acidilactici M76 Producing Functional Exopolysaccharides and Its Lactic Acid Fermentation of Black Raspberry Extract This study aimed to determine the probiotic potential of Pediococcus acidilactici M76 (PA-M76) for lactic acid fermentation of black raspberry extract (BRE). PA-M76 showed outstanding probiotic properties, with high tolerance of acidic gastrointestinal tract (GIT) environments, broad antimicrobial activity, and high adhesion capability in the intestinal tract of Caenorhabditis elegans. PA-M76 treatment resulted in significant increases of pro-inflammatory cytokine mRNA expression in macrophages, indicating that PA-M76 elicits an effective immune response. When PA-M76 was used for lactic acid fermentation of BRE, an EPS yield of 1.62 g/L was obtained under optimal conditions. Lactic acid fermentation of BRE by PA-M76 did not significantly affect the total anthocyanin and flavonoid content, but it significantly increased the total polyphenol content compared to non-fermented BRE (NfBRE). Moreover, fBRE exhibited increased DPPH radical scavenging activity, linoleic acid peroxidation inhibition, and ABTS scavenging activity compared to NfBRE. Among the 28 compounds identified in the GC-MS analysis, esters were present as the major group. The total concentration of volatile compounds was higher in fBRE than in NfBRE, while the undesirable flavor of terpenes decreased. PA-M76 might be useful for preparing functionally enhanced fermented beverages with the higher antioxidant activity of EPS and enhanced flavors. Introduction Although various exopolysaccharides (EPS) have been applied in the food industry as food additives for natural viscosifiers, emulsifiers, stabilizers, texturizers, and mouthfeel agents, they have also received great attention for human health because of their antitumor, anti-ulcer, antioxidant, blood glucose-regulating, and UV radiation-protecting activities [1]. Microorganism-derived EPS play an important role as technological substances in fermented products because their distinctive physical properties contribute to texture, taste perception, and stability in foods [2]. Different types of EPS derived from different microbial sources have different chemical structures and physiological functions. Lactic acid bacteria (LAB) are important for the production of valuable substances, such as mannitol, conjugated fatty acids, and EPS, during fermentation [3][4][5]. Particularly, EPS derived from LAB possess other functions, including medical, cosmetic, and pharmaceutical effects in human use [6]. LAB or EPS have been certified as probiotics, but LAB with functional properties such as active EPS production have received little attention. Many LAB strains considered as probiotics are effective against diverse gastrointestinal (GI) disorders by adhering to the gut mucosa and exerting beneficial effects on human health [7]. For use as probiotic bacterial strains, certain imperative characteristics are required: strains must survive in acidic conditions and in the presence of bile salts and must adhere to the intestinal tract [8]. Other beneficial effects, such as modulation of lactose intolerance, reduction of cholesterol levels, and regulation of the immune system, are also considered as additional criteria for the selection of probiotics [9]. Additionally, the physical properties of EPS enhance colonization by probiotic bacteria in the GIT, supporting their functional features [10].
LAB starter cultures with interesting functional characteristics and technological and probiotic properties have been isolated mainly from traditional fermented products [8]. Black raspberry (Rubus coreanus Miquel, BR), which is grown worldwide, is dark purple in color. Its natural colorant properties are used as a base for products such as ice cream and sorbet. BR extract (BRE) can also be mixed with many beverages to enhance flavor and balance sweetness and tartness. However, the demand for BR has also increased because of expanded nutritional knowledge of its anticancer, antibacterial, and antioxidant activities [11,12]. Due to the abundance of phenolic compounds, such as flavonoids (anthocyanins, flavonols, and flavanols), tannins (proanthocyanins and ellagitannins), and phenolic acids (hydroxybenzoic and hydroxycinnamic acids), many products, such as sugar extract, wine, and vinegar, have been developed using BRE. Many studies have indicated that the chemical constituents of raw BRE change during fermentation and that fermentation might change functional aspects such as antioxidant activity. Nevertheless, there is a lack of summarized data on the physiological aspects of different preparation approaches. To the best of our knowledge, this is the first study to use LAB to enhance the functional features of BRE by expressing the functional component EPS. Previously, Pediococcus acidilactici M76 (PA-M76), which was isolated from makgeolli (traditional Korean rice wine) in our lab, produced a large amount of functional EPS in laboratory medium, showing high antioxidant and cytoprotective activity against alloxan-induced cytotoxicity. Moreover, it had a lipid-lowering effect in high-fat diet-induced obese mice [13]. The purified EPS from PA-M76 is a glucan consisting of glucose units with a molecular mass of approximately 67 kDa, which is unusually small compared to previously published figures [14]. Thus, our study aimed to evaluate the probiotic potential of EPS-producing PA-M76 and apply it to the preparation of functional healthy beverages from BRE. We determined the low pH and bile tolerances, intestinal adhesion capacity, and immune-stimulatory activity of PA-M76, as well as the optimal lactic acid fermentation conditions for optimized EPS production, antioxidant activity, and the profile of volatile compounds in the fermented BRE. Bacterial Strain and Growth Conditions PA-M76, isolated from the Korean traditional rice wine makgeolli as a functional EPS producer by Song et al. (2013; patent no. KACC91683P), was routinely prepared in MRS broth (Difco Laboratories, Detroit, MI, USA) and grown at 37 °C for 18 h. Additionally, P. pentosaceus (ATCC 33316, PP), a widely studied probiotic Pediococcus sp., and L. rhamnosus GG (ATCC 53103, LGG), a representative probiotic bacterium, were used as reference controls for the evaluation of the probiotic potential of PA-M76 [15]. Acid and Bile Tolerance The in vitro survival ability of PA-M76 under conditions simulating the GIT was determined using the method described by Oh and Jung (2015), with slight modifications. The active strains grown in MRS were centrifuged (5000× g, 10 min, 4 °C), washed twice, and re-suspended in sterilized 50 mM phosphate-buffered saline (PBS) adjusted to pH 2.5 or 7.0. The cells (10⁹ colony forming units [CFU]/mL) were incubated at 37 °C for 3 h, and the resistance to low pH was assessed by counting surviving bacterial colonies on MRS agar plates. Bile tolerance was determined by incubating LAB cells in MRS broth containing 0.3, 1.0, and 3.0% (w/v) bile salt (oxgall, MBcell, Korea) at 37 °C for 24 h. The surviving bacterial cells on MRS agar were enumerated, and the survival rate was calculated as the percentage of colonies grown on MRS agar.
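To make the tolerance calculations concrete, here is a minimal Python sketch of the plate-count arithmetic behind survival rates like those reported later; the colony counts, dilution factor, and plated volume are invented placeholders, not data from this study.

```python
# Minimal sketch: survival (%) from serial-dilution plate counts.
# All numbers below are illustrative placeholders, not data from this study.

def cfu_per_ml(colonies: int, dilution: float, plated_volume_ml: float = 0.1) -> float:
    """CFU/mL = colonies / (dilution factor x volume plated)."""
    return colonies / (dilution * plated_volume_ml)

# Control: 215 colonies on the 1e-6 dilution plate (0.1 mL plated).
n0 = cfu_per_ml(215, 1e-6)   # ~2.2e9 CFU/mL before exposure
# After 3 h at pH 2.5: 152 colonies on the 1e-6 dilution plate.
n1 = cfu_per_ml(152, 1e-6)   # surviving population

survival_pct = 100.0 * n1 / n0
print(f"N0={n0:.2e} CFU/mL, N1={n1:.2e} CFU/mL, survival={survival_pct:.1f}%")
```

The same arithmetic applies to the bile-salt assay, with the 24-h bile-exposed counts in place of the acid-exposed ones.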
Antimicrobial Activity Antimicrobial activity was examined using a paper disc assay according to Chiu et al. (2008). Cell Surface Hydrophobicity and Autoaggregation Bacterial adhesion to hydrocarbons was determined according to the method described by Rosenberg (1984), with slight modifications, as follows. After cultivation in MRS broth at 37 °C for 24 h, the obtained fresh cells were centrifuged (8000× g, 5 min) and washed three times with sterilized PBS (pH 7.2). The cells were resuspended and diluted in sterilized PBS to reach an optical density (OD) of 0.5 at 600 nm (A0), and 1.5 mL of the suspension was mixed with an equal volume of n-hexadecane (Sigma, St. Louis, MO, USA) in duplicate, followed by thorough vortex-mixing. The phases were allowed to separate for 1 h at room temperature, after which the aqueous phase was carefully removed and its absorbance at 600 nm was measured (A1). The surface hydrophobicity (%) was calculated as follows: H% = (1 − A1/A0) × 100. The hydrophobicity was determined in triplicate. Cells with a hydrophobicity index greater than 70% were arbitrarily classified as hydrophobic. For the autoaggregation assay, cells were grown for 18 h at 37 °C in MRS broth, washed three times with sterilized PBS (pH 7.2), resuspended, and diluted in sterilized PBS to reach an OD600 of 0.5 (A0). Four-milliliter aliquots of the cell suspensions were vortexed for 10 s, and autoaggregation was determined during 2 h of incubation at room temperature (At) in triplicate. The autoaggregation (%) was calculated as follows: A% = (1 − At/A0) × 100. Caenorhabditis elegans Intestinal Adhesion Colonization of the C. elegans intestinal tract by PA-M76 was determined by measuring the number of bacterial cells in the worm intestines [16]. The C. elegans CF512 fer-15(b26)II;fem-1(hc17)IV strain was used in this study, which was routinely maintained on nematode growth medium (NGM) plates seeded with E. coli OP50. First, PA-M76 was sub-cultured (~1 × 10⁹ CFU/mL) three times before use. After exposing C. elegans to PA-M76 on NGM plates containing nystatin for 5 days, 10 worms were randomly picked, washed twice with M9 buffer, and placed on brain-heart infusion (BHI) plates containing kanamycin and streptomycin. Then, these plates were exposed to gentamycin (5 µL of a 25 µg/mL solution) for 5 min. Next, the worms were washed three times with M9 buffer and then pulverized using a pestle (Kontes Glass Inc., NJ, USA) in a 1.5 mL Eppendorf tube containing M9 buffer supplemented with 1% Triton X-100. After serial dilution in M9 buffer, the worm lysate was plated on MRS agar (pH 5.0), incubated for 48 h at 37 °C, and live bacterial cells were then counted. The results were compared with those of LGG used as a positive control.
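The two spectrophotometric adhesion indices defined above (H% and A%) reduce to one-line calculations; the following sketch encodes them in Python with invented OD600 readings for illustration.

```python
# Minimal sketch of the two adhesion indices described above.
# OD600 readings below are invented placeholders, not data from this study.

def hydrophobicity_pct(a0: float, a1: float) -> float:
    """H% = (1 - A1/A0) x 100, with A1 the aqueous-phase OD600
    after partitioning against n-hexadecane."""
    return (1.0 - a1 / a0) * 100.0

def autoaggregation_pct(a0: float, at: float) -> float:
    """A% = (1 - At/A0) x 100, with At the upper-phase OD600
    after t hours of static incubation."""
    return (1.0 - at / a0) * 100.0

print(f"H% = {hydrophobicity_pct(0.50, 0.13):.1f}")   # ~74% -> 'hydrophobic' (>70%)
print(f"A% = {autoaggregation_pct(0.50, 0.12):.1f}")  # ~76% after 2 h
```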
Cell Culture and Bacterial Stimulation The murine macrophage cell line RAW 264.7 was purchased from the Korean Cell Line Bank (KCLB, Seoul, Korea) and cultured in Dulbecco's modified Eagle's medium (DMEM) (HyClone, Marlborough, MA, USA) supplemented with 10% fetal bovine serum (FBS) and 1% penicillin-streptomycin solution (HyClone, Marlborough, MA, USA). The cells were incubated under 5% CO2 at 37 °C in a humidified incubator. For cytokine analysis, RAW 264.7 cells were plated at a density of 5 × 10⁵ cells per well onto a 96-well plate and cultured for 2 h, after which heat-treated LAB (110 °C, 30 min) were added at a concentration of 1 × 10⁷ CFU/well. As a positive control for cell stimulation, 1 µg/mL lipopolysaccharide from E. coli (LPS; Sigma, St. Louis, MO, USA) was also used. After incubation for 24 h, the RAW 264.7 cells with heat-killed LAB or LPS were harvested, and the cell culture medium was collected for the real-time reverse transcription-polymerase chain reaction (qRT-PCR) assay. Cell Viability Assay Cells were cultured in 96-well plates at 5 × 10⁵ cells/well for 2 h and subsequently treated with heat-killed LAB or LPS (1 µg/mL) for a further 24 h. Cell viability was then measured using the EZ-Cytox Assay Kit (Daeil Laboratory, Seoul, Korea) according to the manufacturer's instructions. LPS was used as a positive control. Fermentation of BRE with PA-M76 Organically grown BR were harvested from Gochang province in South Korea, quickly rinsed with tap water, and soaked in a jar with 40% (w/v) sugar for 30 days at 20 °C. After soaking, the black raspberries were strongly pressed for 30 min. The obtained BRE was immediately diluted to 20, 30, and 40°Brix with distilled water and then pasteurized for 15 min at 75 °C. The sterilized BRE was adjusted to pH 4-9 with 0.1 M sodium bicarbonate and 0.5 M citric acid (Samchun Pure Chemical Co. Ltd., Seoul, Korea). The strain PA-M76 was cultured in MRS broth for 24 h at 37 °C, collected by centrifugation (1580MGR; Gyrozen, Daejeon, Korea) at 10,000× g for 15 min, and washed twice with sterilized water. PA-M76 (v/v) was then inoculated into the BRE, and fermentation was conducted at 25-30 °C for 3-15 days. The fermented BRE (fBRE) samples were collected aseptically every 24 h and stored at −80 °C until further analysis. Analysis of Cell Biomass and EPS The obtained fBRE samples (20 mL) were boiled for 20 min to inactivate EPS-degrading enzymes and then quickly cooled to room temperature. Trichloroacetic acid was added to the fBRE samples at a final concentration of 4% (w/v), and the samples were then centrifuged at 8000× g for 20 min (1580 MGR, Gyrozen, Daejeon, Korea). Cell biomass was determined by measuring the weight of the precipitated pellet after drying in a hot-air oven at 90 °C to a constant weight. Crude EPS were retrieved by ethanol precipitation: three volumes of cold ethanol were mixed in with vigorous stirring, and the crude EPS that precipitated overnight at 4 °C were collected by centrifugation and dried as described above; the remaining ethanol was removed in a vacuum concentrator. The dry weights of the cell biomass and EPS were expressed in g/L.
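A short sketch of the gravimetric yield calculation described above follows; the tare and dried-pellet masses are placeholders chosen so the outputs land near the biomass and EPS magnitudes reported later in the Results (3.14 and 1.62 g/L).

```python
# Minimal sketch of the gravimetric yields described above.
# Masses and volumes are invented placeholders, not data from this study.

def dry_weight_g_per_l(tare_g: float, dried_g: float, sample_ml: float) -> float:
    """Yield (g/L) = dried pellet mass / sample volume."""
    return (dried_g - tare_g) / (sample_ml / 1000.0)

biomass = dry_weight_g_per_l(tare_g=1.1050, dried_g=1.1678, sample_ml=20.0)  # ~3.14 g/L
eps = dry_weight_g_per_l(tare_g=1.2010, dried_g=1.2334, sample_ml=20.0)      # ~1.62 g/L
print(f"cell biomass = {biomass:.2f} g/L, crude EPS = {eps:.2f} g/L")
```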
Total Anthocyanin, Phenol, and Flavonoid Analysis The total anthocyanin content in fBRE was determined in duplicate using the pH-differential method described by Giusti and Wrolstad (2001), with slight modifications. Briefly, samples were diluted sequentially up to 50-fold with 0.025 M potassium chloride buffer (pH = 1.0, Kanto Chemical Co., Inc., Tokyo, Japan) or 0.4 M sodium acetate solution (pH = 4.5, Samchun Pure Chemical Co. Ltd., Seoul, Korea), and the absorbance of the mixture was measured at 530 and 700 nm using a UV-Vis spectrometer. The total anthocyanin content (mg/L) was calculated from the difference in absorbance between the pH values and wavelengths using the following formula: TA (mg/L) = (A × MW × DF × 1000)/(ε × l), where A is the absorbance of the diluted sample, A = (A520nm − A700nm)pH 1.0 − (A520nm − A700nm)pH 4.5; MW is the molecular weight of cyanidin-3-glucoside (449.2); ε is the molar absorptivity (26,900); DF is the dilution factor (50); and l is the path length (1 cm). The total phenolic content in fBRE was determined according to the Folin-Ciocalteu method described by Ju et al. (2009). Gallic acid (Sigma Chemical Co., St. Louis, MI, USA) was used as a standard, and each fBRE sample was diluted 50-fold with distilled water. Twenty-five microliters of diluted fBRE was transferred to a 1.75 mL tube, to which 75 µL of 2 N Folin-Ciocalteu's phenol reagent (Sigma Chemical Co., MI, USA) was added, and the mixture was maintained for 5 min at room temperature. This was followed by the addition of 200 µL of 7.5% sodium carbonate solution (Samchun Pure Chemical Co. Ltd., Seoul, Korea) and 700 µL of distilled water, and the mixture was incubated for 60 min at room temperature, after which the absorbance was measured at 765 nm. The results were expressed as milligrams of gallic acid equivalent (GAE) per gram of fBRE. The total flavonoid content was determined using the modified method of Meda (2005). Each fBRE sample was diluted 50-fold with distilled water. Five hundred microliters of diluted fBRE was transferred to a 1.75 mL tube, to which 30 µL of 5% sodium nitrite (Junsei Chemical Co., Osaka, Japan) was added, and the mixture was incubated at room temperature for 5 min. The pre-reacted sample was added to 30 µL of 10% aluminum chloride (Fluka Chemical Co., NY, USA) and maintained at room temperature for 6 min. Finally, 200 µL of 1 M sodium hydroxide (Samchun Pure Chemical Co. Ltd., Seoul, Korea) was added, after which the absorbance was measured at 510 nm. The total flavonoid content was determined using a catechin standard, and the results were expressed as milligrams of catechin equivalent (CE) per gram of fBRE. DPPH Radical Scavenging Activity The antioxidant activity of fBRE was assayed by its scavenging effect on the 2,2-diphenyl-1-picrylhydrazyl (DPPH) free radical. Briefly, a 0.2 mM DPPH solution was prepared in methanol. Each sample was diluted with distilled water, and then 100 µL of the diluted sample was mixed with 400 µL of the prepared DPPH solution in a 1.75 mL tube. The mixture was covered with aluminum foil to protect it from light and maintained at 37 °C for 30 min. After the reaction, the absorbance of the resulting solution was measured at 517 nm against a blank. Ascorbic acid was used as a standard, and the percentage of DPPH quenched was calculated every minute as follows: % DPPH quenched = [1 − (Abs sample/Abs control)] × 100.
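The pH-differential and DPPH calculations above can be sketched as follows; all absorbance readings are invented placeholders, not measurements from this study.

```python
# Minimal sketch of the two colorimetric calculations described above.
# Absorbance readings are invented placeholders, not data from this study.

MW_C3G = 449.2     # g/mol, cyanidin-3-glucoside
EPSILON = 26_900   # L/(mol*cm), molar absorptivity
PATH_CM = 1.0      # cuvette path length

def total_anthocyanin_mg_per_l(a520_ph1, a700_ph1, a520_ph45, a700_ph45, df=50):
    """pH-differential method: TA = A x MW x DF x 1000 / (epsilon x l)."""
    a = (a520_ph1 - a700_ph1) - (a520_ph45 - a700_ph45)
    return a * MW_C3G * df * 1000.0 / (EPSILON * PATH_CM)

def dpph_quenched_pct(abs_sample: float, abs_control: float) -> float:
    """%DPPH quenched = [1 - (Abs_sample / Abs_control)] x 100."""
    return (1.0 - abs_sample / abs_control) * 100.0

print(f"TA   = {total_anthocyanin_mg_per_l(0.120, 0.010, 0.058, 0.014):.1f} mg/L")
print(f"DPPH = {dpph_quenched_pct(0.31, 0.78):.1f} % quenched")
```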
Lipid Peroxidation Inhibitory Activity The lipid peroxidation inhibitory activity of BRE and fBRE was also measured according to the method described by Osawa and Namiki [17], with some modifications. Each sample was diluted with distilled water, and then 1 mL of the diluted sample was mixed with 1 mL of linoleic acid (50 mM) previously dissolved in ethanol (99.5%). After incubation at 60 °C in the dark for 8 days in tightly sealed glass vials, 100 µL of the oxidized BRE and fBRE samples was mixed with 4.7 mL of 75% (v/v) ethanol, 0.1 mL of 30% (w/v) ammonium thiocyanate, and 0.1 mL of 0.02 M ferrous chloride dissolved in 1 M HCl. After 3 min, the absorbance of the oxidized samples was measured spectrophotometrically at 500 nm. Ascorbic acid was used as the standard, and the degree of inhibition of linoleic acid autoxidation (%) was calculated as follows: [(100 − sample Abs)/negative control Abs] × 100. Headspace Solid-Phase Microextraction (SPME) and Gas Chromatography-Mass Spectrometry (GC-MS) Analysis The aroma components were analyzed using SPME. To capture volatile compounds, 1 mL of culture, 1.5 g of NaCl, and 1 ppm of 2-methyl-3-heptanone (internal standard) were added to 20 mL glass vials (Supelco, Bellafonte, PA, USA). The SPME fiber needle was inserted into the vial, the volatile components were adsorbed on the autosampler for 30 min, and desorption was conducted in a GC-MS (7890B GC System/5977B MSD, Agilent Technologies) injection port at 220 °C for 5 min. Divinylbenzene/carboxen/polydimethylsiloxane (DCP, 50/30 µm) was used as the SPME fiber. Each adsorbed volatile compound was analyzed using an HP-5 MS column (Agilent Technologies, Santa Clara, CA, USA; 30 m × 250 µm × 0.25 µm). The mobile phase gas was helium, and the flow rate was maintained at 1.0 mL/min. The oven temperature was maintained at 40 °C for 1 min, then increased to 250 °C at a rate of 20 °C/min and maintained for 3 min. The inlet temperature was 220 °C, and GC-MS was performed in splitless mode with an MS ionization voltage of 70 eV, a source temperature of 230 °C, and an interface temperature of 280 °C. Mass spectra of each peak component separated by GC-MS were confirmed against the Mass Hunter database library (Agilent Mass Hunter Software, Agilent Technologies, USA). The volatile components were quantified by comparing the peak area of each identified volatile flavor component with that of the internal standard, 2-methyl-3-heptanone. Statistical Analysis Data are expressed as the mean ± standard deviation (SD). Statistical significance was determined by one-way analysis of variance (ANOVA) using SPSS software version 21 (IBM SPSS Institute, Inc., Chicago, IL, USA), followed by Duncan's test. Differences were considered statistically significant at p < 0.05.
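As an illustration of the internal-standard quantification described for the GC-MS analysis above, the following sketch assumes, as the text implies, a relative response factor of 1 for every analyte; the compound names and peak areas are hypothetical.

```python
# Minimal sketch of semi-quantification against an internal standard (IS),
# as described for the GC-MS analysis above. Peak areas are invented
# placeholders, not data from this study.

IS_CONC_PPM = 1.0  # 2-methyl-3-heptanone spiked at 1 ppm

def conc_ppm(peak_area: float, is_area: float, is_conc: float = IS_CONC_PPM) -> float:
    """Analyte (ppm) ~= (analyte peak area / IS peak area) x IS concentration,
    assuming a relative response factor of 1 for every compound."""
    return (peak_area / is_area) * is_conc

peaks = {"ethyl acetate": 2.4e6, "linalool": 8.1e5}  # hypothetical peak areas
is_area = 1.2e6
for name, area in peaks.items():
    print(f"{name}: {conc_ppm(area, is_area):.2f} ppm")
```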
Fundamental Probiotic Properties: Tolerance and Antimicrobial Activity To exert effects in the GI tract, any potential probiotic strain should exhibit tolerance to acid and bile salts. After being subjected to gastric acidity (low pH) and intestinal conditions (bile salts), the ability of probiotic strains to survive in adequate numbers is important for food industry applications [18]. In this study, the ability of PA-M76, isolated from Korean fermented rice wine, to survive in simulated gastric juice and bile salts was evaluated in vitro. As shown in Table 1, the growth of PA-M76 decreased (p < 0.05) after 3 h of incubation at 37 °C under acidic conditions (pH 2.5), but the strain showed a survival rate of 71.7%. This level was similar to the survival levels of PP and LGG: the control and reference probiotic strains exhibited survival percentages > 70% 3 h after exposure to the simulated gastric juice (p > 0.05). Considering that the low pH of 1.5 to 3.0 in the human stomach may increase to 4.5 during ingestion, PA-M76 may achieve an even higher survival ability at the low pH of the stomach. Compared to our strain, other Pediococcus strains could retain their viability at pH 3.0-4.0 but displayed high viability loss at lower pH. Furthermore, PA-M76 was highly resistant to bile stress: no differences in cell counts were observed when PA-M76 was incubated for 24 h with bile salts ranging from 0.3 to 3.0% (Table 1). In contrast, compared with PA-M76, the reference strains PP and LGG showed marginal viability loss, with survival rates of 89.1 and 80.9% under 3.0% bile salt conditions, respectively (p < 0.05). The human intestine contains physiologically relevant concentrations of bile salts, ranging from 0.3 to 0.5%, and bile salts are harmful to living cells because they damage the structure of the cell membrane [8]. In the strain selection assay [19], strains with growth rates above 50% showed good resistance in the presence of 0.3% bile salts. In this regard, the high survival ability of PA-M76 might reflect good resistance to bile and to the low pH of gastric juice, which indicates that the strain can survive under harsh conditions in the gut. Additionally, compared to the control strains, PA-M76 showed a broader spectrum of inhibitory activity against most of the tested pathogens, both Gram-negative and Gram-positive, as shown in Table 2. [Table 1. Survival rates (%) of PA-M76, PP, and LGG in simulated gastric juice and bile salts. In the tests for pH, temperature, NaCl, and glucose tolerance, numbers indicate that the test strain can grow in MRS broth under the respective conditions. Different small letters (a-c) in the same column indicate significant differences among the three LAB strains (p < 0.05). PA-M76, P. acidilactici M76; PP-33316, P. pentosaceus (ATCC 33316); LGG, Lactobacillus rhamnosus GG (ATCC 53103). Viable cell numbers were counted on MRS agar after incubation at 37 °C for 48 h.] Adhesion Properties of PA-M76 The ability to adhere to the intestinal epithelial surface and colonize the GIT is one of the main desirable characteristics of probiotics [20]. To determine the adhesive properties of PA-M76, we first examined its surface hydrophobicity and autoaggregation. As shown in Figure 1A, PA-M76 showed unexpectedly lower surface hydrophobicity and autoaggregation than LGG (73.5 and 76.3% versus 61%, respectively; p < 0.05), whereas PP showed lower hydrophobicity and autoaggregation than the other two strains. Recently, as an alternative to in vitro models such as propagated human intestinal cell lines, a C. elegans surrogate in vivo model has been successfully used as a simple, rapid, and economical model system to study bacteria-host interactions in the gut, as the intestinal cells of C. elegans are similar in structure to those of humans [21]. In this study, a C. elegans model system was used to investigate the ability of PA-M76 to attach to the intestinal tract. As shown in Figure 1B, PA-M76 showed high GIT colonization ability. The strain exhibited outstanding persistence in the C. elegans intestine, with a cell count of over 5.0 log CFU/mL per worm for 5 days. Unexpectedly, LGG, used as the control strain in this study, showed poor attachment in the gut environment at 1 CFU/mL per worm, although LGG has been shown to bind to enterocytes in a previous study [22]. Four strains of Lactobacillus sp. were previously selected as potential probiotic bacteria among 2000 LAB strains isolated from infant feces because of their remarkably high colonization ability in the C. elegans intestine of over 4.3 log CFU/mL per worm after 5 days [20]. Furthermore, Pediococcus strains exhibited relatively high GIT colonization (>3.5 log CFU/mL per worm), whereas other Lactobacillus, Streptococcus, and Weissella sp. could not colonize the intestinal tract of C. elegans [23].
Moreover, the EPS synthesized by several bacteria play a role in adhesion to surfaces, such as eukaryotic cells, and in the modulation of the host immune system [2,20]. Thus, the results indicate that PA-M76 shows excellent colonization of the worm intestinal tract, probably due to its ability to produce EPS [24].

Immuno-Stimulatory Activity of PA-M76 on RAW 264.7 Cells

Intestinal macrophages are key players in mucosal immune defense against injurious agents, such as pathogens and food antigens. Some LAB strains can activate innate immune cells by inducing or enhancing cytokine production [25]. Probiotics may exert immunomodulatory effects in a strain-specific manner via specific bacterial components [26]. Therefore, each probiotic candidate must be evaluated to identify its specific biological activity. In this study, the immune-stimulatory activity of PA-M76 on RAW 264.7 macrophages was assessed. First, the cytotoxic effects of the M76 strain and the reference strains against RAW 264.7 cells were determined using the MTT assay; none of the strains had any observed effect on the viability of RAW 264.7 cells after treatment for 24 h (data not shown). To assess the effects of PA-M76 on immune responses, the expression of cytokine genes such as IL-1β, IL-6, IL-12, and TNF-α, which may be produced by activated macrophages, was determined by qRT-PCR. Our results showed that the production of all four cytokines in RAW 264.7 cells was significantly increased by treatment with the LAB strains (Figure 2). Furthermore, treatment with PA-M76 resulted in significantly higher expression of IL-1β, IL-6, and IL-12 than the control strains PP and LGG (p < 0.05). Additionally, the expression of TNF-α induced by PA-M76 was almost similar to that induced by LGG. To date, various probiotic LAB strains have been shown to enhance nonspecific cellular immune responses, including activation of macrophages, natural killer (NK) cells, and antigen-specific cytotoxic T-lymphocytes, and the release of various cytokines [26]. These health effects imparted by probiotic bacteria are strain-specific rather than species-specific. Many studies on probiotic cultures have been conducted for Lactobacillus and Bifidobacterium sp., but only a few have reported the immunomodulatory activities of Pediococcus sp., such as P. pentosaceus NB-17, originating from Japanese traditional vegetable pickles [27], P. parvulus 2.6, isolated from cider [19], P. pentosaceus OZF, derived from human breast milk [25], and P. pentosaceus L1, isolated from Chinese fermented vegetables [28]. Moreover, little is known about the immunomodulatory activities of P. acidilactici. Our study showed that treatment with PA-M76 induced the expression of cytokine genes in RAW 264.7 cells, and, to our knowledge, this is the first study to report the effects of P. acidilactici on immunomodulatory function.

Black Raspberry Fermentation by PA-M76 Enhanced Production of Functional EPS

The BRE did not support the growth of PA-M76 at a low initial inoculation cell density. The initial inoculation cell density of PA-M76 was therefore increased to 8.2 log CFU/mL, which was maintained during fermentation (data not shown). As shown in Figure 3A, when the BRE was fermented with PA-M76 at 25 °C, the cell density of the starter increased until 48 h and then decreased slightly up to 15 days of fermentation. At temperatures above 30 °C, however, the PA-M76 cell density increased very slowly up to 72 h and decreased dramatically at the late fermentation stage, as shown in Figure 3A.
The pH decreased to 3.53 ± 0.12 and 3.47 ± 0.09, respectively, at these temperatures after 15 days of fermentation. Generally, mesophilic bacteria, including Pediococcus, Lactococcus, and Leuconostoc species, Lactobacillus kefir, L. brevis, L. fermentum, and Bifidobacterium bifidum, show an almost 50% higher EPS production rate when grown at the lower temperature of 25 °C than at 30 °C [29]. The cell biomass of PA-M76 markedly increased at pH 6 and 40 °Brix in a sugar concentration-dependent manner. However, the concentration of EPS did not correlate significantly with the increase in cell biomass (Figure S1). In contrast, EPS production by strain M76 in BRE was more favorable at a more acidic pH (4.0) than at a neutral pH (6.0-7.0), although the production of EPS tended to increase in a sugar concentration-dependent manner. The physiological factors that play a crucial role in EPS production include pH, temperature, incubation time, and medium composition. In particular, acidic stress usually inhibits bacterial growth but can stimulate EPS production in some LAB strains [30]. The high acidity of the BRE might cause acid stress to the cells, resulting in increased production of EPS [31]. In this study, EPS production by PA-M76 showed a dramatic, time-dependent increase in synthesis rate until 72 h, producing 1.62 g/L, as shown in Figure 3B (p < 0.05). EPS production tended to decline after 72 h, suggesting enzymatic degradation caused by prolonged cultivation, but it was still maintained at 1.04-1.08 g/L until the final stage of BRE fermentation. The cell biomass was 3.14 g/L on day 3 and 3.35 g/L on day 15, but EPS production was highest on day 3 (1.62 g/L). As a result, maximum EPS production by PA-M76 was observed after three days of BRE fermentation. It seems that extending the fermentation time resulted in decreased EPS content due to increased EPS degradation activity. To our knowledge, the amount of EPS produced under these optimized conditions in BRE is sufficient to prepare functional beverages, given its observed proliferative effect on RIN-m5F cells, cytoprotective activity against alloxan-induced cytotoxicity, and lipid-lowering effect, each at a concentration of 10 mg/mL. In conclusion, maximum EPS production of 1.62 g/L was achieved under optimal conditions of pH 4.0 at 25 °C for 3 days with 30 °Brix BRE. This amount of EPS produced under the optimized conditions in fBRE would be sufficient to produce functionally active beverages with health-promoting effects.

Phenolic Compounds and Antioxidant Activity of fBRE

We evaluated the contents of phenolic compounds, such as total anthocyanins (TA), total polyphenols (TP), and total flavonoids (TF), of fBRE fermented under the optimized conditions with high EPS production, as described above. As shown in Table 3, the TA concentration of fBRE (45 TAC mg/L) was almost identical to that of NfBRE (43 TAC mg/L), suggesting that lactic acid fermentation by PA-M76 did not significantly change the anthocyanin content relative to non-fermented BRE (NfBRE). Analysis of TF also showed no significant (p < 0.05) difference between NfBRE and fBRE (99.9 versus 81 mg CE/100 g, respectively). However, a significant (p < 0.05) increase was found only for TP in fBRE (379.3 mg GAE/100 g) compared to NfBRE (349.4 mg GAE/100 g).
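A minimal sketch of the one-way ANOVA used for group comparisons such as the TP difference above is given below. The replicate values are hypothetical placeholders, and since Duncan's post hoc test is not available in SciPy, this sketch performs only the ANOVA step.

```python
# Minimal sketch of the one-way ANOVA used for group comparisons.
# The triplicate values below are illustrative placeholders, not data
# from the paper; Duncan's post hoc test is not available in SciPy.
from scipy import stats

nfbre_tp = [348.1, 349.9, 350.2]  # hypothetical TP replicates (mg GAE/100 g)
fbre_tp = [378.0, 379.5, 380.4]

f_stat, p_value = stats.f_oneway(nfbre_tp, fbre_tp)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is significant at p < 0.05")
```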
Lactic acid fermentation of berries is challenging owing to their high contents of acids and phenolic compounds. The abundant phenolic compounds in berries, including hydroxycinnamic, neochlorogenic, and chlorogenic acids, are usually hydrolyzed to quinic and caffeic acids, which are subsequently degraded to vinyl and ethyl catechols via the action of phenolic acid decarboxylases and reductases [32]. However, mulberry juice fermented with L. plantarum, L. paracasei, and L. acidophilus has reportedly shown an increase in the content of phenolic acids, total anthocyanins, and flavanols [33]. The effect of lactic acid fermentation might be debatable, but the phenolic compound profile during lactic acid fermentation is greatly affected by the strain and starting materials used. β-Glucosidase enzymes can hydrolyze flavonoid conjugates and thereby influence the polyphenol profile during lactic acid fermentation [34]. The strain used in this study, PA-M76, might not be capable of substantially enhancing the content of phenolic compounds during lactic acid fermentation of BRE, although TP increased slightly, possibly owing to such enzymatic activity. Three different methods were used to assay the antioxidant activity of NfBRE and fBRE in vitro after fermentation with P. acidilactici M76. First, antioxidant activity was assayed as scavenging of the stable 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical. The DPPH radical scavenging activity of fBRE (51.9%) was significantly higher than that of NfBRE (p < 0.05), as shown in Figure 4A. We also measured antioxidant activity by quantifying the inhibition of linoleic acid peroxidation, which is thought to proceed through radical-mediated abstraction of hydrogen atoms from methylene carbons in polyunsaturated fatty acids [35]. The inhibition of linoleic acid peroxidation by fBRE (44.8%, p < 0.05) was significantly higher than that by NfBRE, as shown in Figure 4B. We also used the radical cation ABTS (2,2′-azino-di-[3-ethylbenzthiazoline sulfonate]) to determine the total antioxidant activity of fBRE and NfBRE. In this assay, the ABTS scavenging activity of fBRE was 23.2 ± 1.3 mM, significantly higher than that of NfBRE, as shown in Figure 4C. Interestingly, the DPPH radical scavenging activity, the inhibition of linoleic acid peroxidation, and the total antioxidant capacity towards the ABTS radical of fBRE were all significantly higher than those of NfBRE, indicating enhancement through lactic acid fermentation by PA-M76. Previously, the crude EPS extract from PA-M76 showed the highest DPPH radical scavenging activity of 48.1% at a concentration of 1 mg/mL, with a dose-dependent increase in activity [24]. The fermentation of BRE by PA-M76 caused an increase in the concentration of EPS (1.62 g/L), as shown in Figure 3B. Although black raspberries are generally known as an abundant source of phytochemicals, such as phenolic acids, flavonoids, anthocyanins, and tannins, with well-documented antioxidant activity, we did not observe a significant increase in the contents of those phenolic compounds in this study, as described above. Therefore, the baseline antioxidant potential of NfBRE and fBRE can be largely attributed to the phenols originally present in the raw BRE.
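The scavenging percentages above are conventionally obtained from the absorbance of the radical solution with and without sample. A minimal sketch follows, applicable to both the DPPH and ABTS assays, with illustrative absorbance values rather than measurements from this study.

```python
# Minimal sketch of radical scavenging (%) from absorbance readings,
# applicable to both the DPPH and ABTS assays described above.
# Absorbance values are illustrative placeholders, not measured data.
def scavenging_percent(abs_control: float, abs_sample: float) -> float:
    """Percent decrease in radical absorbance caused by the sample."""
    return (abs_control - abs_sample) / abs_control * 100.0

abs_control = 0.812  # radical solution without sample (hypothetical)
abs_fbre = 0.391     # radical solution with fBRE (hypothetical)
print(f"Scavenging activity: {scavenging_percent(abs_control, abs_fbre):.1f}%")
```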
Nevertheless, our results showed that the significantly enhanced antioxidant properties of fBRE might be caused by the EPS-enriched fermentation by PA-M76.

Production of Volatile Compounds during Fermentation of BRE with P. acidilactici M76

During fermentation with PA-M76, a total of 28 volatile compounds were identified, as shown in Table 4. Terpenes (7) and esters (12) were present in higher numbers than acids (3), alcohols (5), and aldehydes (1), which are the commonly identified volatile compounds in fruit-based fermented products. The total concentration of volatile compounds in fBRE was higher than that in NfBRE. During fermentation, esters increased in concentration and became the largest group of compounds, in which ethyl acetate (fruity/sweet-like odor), ethyl octanoate (fruity/winey/sweet-like odor), and ethyl decanoate (sweet waxy/fruity) were detected as the major compounds [36,37]. Importantly, these compounds are considered the most odor-active compounds in fermented fruit beverages [38,39]. In fermented beverages, isoamyl acetate and ethyl hexanoate, which have small molecular weights, contribute mainly to the fruit flavor. Esters represent a major volatile component of fermented fruit beverages [39]. In this study, ethyl acetate, which produces fruity and sweet-like notes, accounted for 29% of the total volatile compound abundance at the final stage, whereas it was not detected at the initial stage of fermentation or in NfBRE. Among the alcohols, isoamyl alcohol (fruity/sweet-like odor) increased dramatically in concentration to become the largest compound of this group during BRE fermentation; it is usually abundant in wine, producing flower- and honey-like notes [40]. Additionally, phenylethyl alcohol (floral/rose) was detected as the major alcohol, and its amount increased with increasing fermentation time, from 0.05 ± 0.02 to 0.64 ± 0.31. This compound is one of the major potentially significant higher alcohols from grapes and may be synthesized in wine as a by-product of yeast fermentation; its synthesis closely parallels that of ethanol production. Furthermore, a similar set of compounds, including phenylethyl alcohol, accounts for approximately 50% of the aromatic constituents of wine. However, benzaldehyde (burnt bitter/sharp), which was present in NfBRE, had completely disappeared by the end of the fermentation period. Cherry propanol (sweet fruity/cherry) was detected as one of the volatile compounds in NfBRE that gradually decreased and disappeared after fermentation. Terpenes such as terpinen-4-ol (woody flavor) and myrtenol (medicinal and mint flavor), which were relatively abundant in NfBRE [41,42] and are considered undesirable flavors for fermented products, disappeared or decreased during BRE fermentation. Acid components, such as acetic, octanoic, and decanoic acids, generally impart a poor rancid flavor to foods [43]. Acids such as acetic and octanoic acids remained until the final stage of fBRE, but their amounts were too small to be of concern. Relative contents of volatile compounds were determined as peak areas relative to that of an internal standard (1 ppm of 2-methyl-3-heptanone in methanol). (1) RT, retention time; (2) odor descriptions were cited from http://www.thegoodscentscompany.com/ (accessed on 11 October 2020); (3) nd, not detected.
Different capital letters (A and B) and small letters (a-c) in the same column indicate significant differences between the two samples on the same fermentation day and among the eight samples across fermentation times and sample types, respectively (p < 0.05). To understand the interrelationships between the volatile flavor composition of fBRE and the fermentation stage, principal component analysis (PCA) was performed. As shown in Figure 5, an overall PCA biplot was constructed, explaining 65.8% of the total variance. PC1 showed a strong positive correlation with the fBRE components and a strong negative correlation with the NfBRE components. Additionally, the PC2 loadings showed that fBRE at different stages was clearly differentiated, with fBRE-2 and fBRE-3 positioned fully in the strongly positive PC1 region due to increased concentrations of esters and alcohols. The later-stage samples were also positioned in the positive PC1 region because of the elevated concentrations of esters. Notably, fBRE tended to have high concentrations of most volatile compounds.

Conclusions

In this study, the functional EPS-producing strain PA-M76 was shown to withstand biological barriers, such as simulated gastric and pancreatic juices, to colonize the surface of the C. elegans intestinal tract in high numbers, and to exhibit significant in vitro immunostimulatory activity compared to the conventional commercial starter LGG. The results of the current investigation reveal promising perspectives for the application of functional EPS-producing PA-M76 as a functional probiotic. Moreover, we also applied the EPS-producing strain PA-M76 to prepare functionally enhanced black raspberry beverages and showed how the antioxidant properties of black raspberry extract can be enhanced through lactic acid fermentation by this strain. Considering the great interest in natural materials with functional ingredients, the health-promoting effects of PA-M76 can be exploited for the bioprocessing of diverse food materials in the food industry.
2021-07-03T06:16:56.665Z
2021-06-23T00:00:00.000
{ "year": 2021, "sha1": "b4a5f7288ae5ec041d4cf2a9851a7094d6af45dd", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-2607/9/7/1364/pdf", "oa_status": "GREEN", "pdf_src": "Adhoc", "pdf_hash": "af28122c8ab3e432a6bf3929c92fbd8dc30ba574", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
54880933
pes2o/s2orc
v3-fos-license
Linkage-Based Distance Metric in the Search Space of Genetic Algorithms

We propose a new distance metric, based on the linkage of genes, in the search space of genetic algorithms. This second-order distance measure is derived from the gene interaction graph and first-order distance, which is a natural distance in chromosomal spaces. We show that the proposed measure forms a metric space and can be computed efficiently. As an example application, we demonstrate how this measure can be used to estimate the extent to which gene rearrangement improves the performance of genetic algorithms.

Introduction

Distance metrics are fundamental tools for organizing search spaces, because the introduction of a metric is the simplest way to induce a topology [1]. Different metrics produce different topologies and thus change the shape of the search space. When a space is to be searched by a genetic algorithm (GA), a good distance metric facilitates navigation of the space [2][3][4][5] and can also improve the effectiveness of search [6][7][8][9][10][11][12]. Hamming distance is a popular metric in a discrete space that is to be searched by a GA. Hamming distance has also been widely used in analyses of solution spaces [13][14][15]. Fitness distance correlation (FDC), proposed by Jones and Forrest [14], is a measure of the effectiveness of a distance metric in a space to be searched by a GA. An FDC is obtained by measuring the correlation between fitness and the distance to the nearest global optimum for a number of sample solutions. FDC coefficients range from −1 to 1, where higher values suggest increased difficulty in maximizing fitness and decreased difficulty in minimizing fitness. When a GA is hybridized with a local optimization, the population consists entirely of local optima, and it is then more useful to determine FDCs of local-optimum spaces. In this paper, we propose a new distance measure which takes account of gene interaction and show that it forms a metric space. We use this metric to compute FDCs of search spaces and show that FDCs obtained in this way have improved correlation with the improvement in GA performance that can be obtained by gene rearrangement. The remainder of this paper is organized as follows. In Section 2, we review gene rearrangement in GAs. In Section 3, we propose a new distance measure for GAs, show that it forms a metric space, and demonstrate an application. Finally, we draw conclusions in Section 4.

Gene Rearrangement

Holland's schema theorem [16] shows that schemata (i.e., groups of genes) with high fitness, short defining length, and low order have high probabilities of survival in a standard GA. These durable schemata are called building blocks. They make a major contribution to fitness and have a high degree of mutual interaction. The performance of a GA is strongly dependent on the survival and reproduction of these building blocks. The survival probability of a gene group through a crossover is strongly affected by the positions of the genes in the chromosome. Schemata consisting of genes in scattered positions tend to be too long to survive. Thus, the strategy used for placing genes significantly affects the performance of a GA. Inversion is an operator which changes the location of genes while a GA is running [17], and the process of rearranging genes dynamically to improve performance is called linkage learning [18]. Messy GA [19] is an example of a technique that implicitly uses dynamic gene rearrangement.
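The FDC introduced above is simply the Pearson correlation between sampled fitness values and their distances to the nearest global optimum. A minimal sketch follows; the toy onemax fitness function and single global optimum are illustrative assumptions, not part of the paper.

```python
# Minimal sketch of fitness distance correlation (FDC): the Pearson
# correlation between fitness and distance to the nearest global optimum.
# The toy fitness function and optimum below are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_bits, n_samples = 12, 500
optima = np.array([[1] * n_bits])  # assumed single global optimum

samples = rng.integers(0, 2, size=(n_samples, n_bits))
fitness = samples.sum(axis=1)      # toy onemax fitness
# Hamming distance of each sample to its nearest global optimum
dists = np.min((samples[:, None, :] != optima[None, :, :]).sum(axis=2), axis=1)

fdc = np.corrcoef(fitness, dists)[0, 1]
print(f"FDC = {fdc:.2f}")  # for onemax this is -1: easy to maximize
```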
It has been observed that the performance of GAs on problems with a locus-based encoding can be improved by rearranging the indices of the genes before running the GA. Static gene rearrangement was first suggested by Bui and Moon [20,21], who rearrange genes within a chromosomal representation to improve the quality of schemata and to help the GA to preserve the better schemata. Many studies on the static rearrangement of gene positions [20][21][22][23][24] have shown performance improvements. However, the improvement in performance achieved in this way has been shown to vary greatly between problem instances. This motivated us to develop a distance metric to improve our ability to estimate how much improvement in the performance of a GA on a particular problem instance can be expected through gene rearrangement.

A Linkage-Based Distance Measure

3.1. Second-Order Distance Measure. The most usual first-order distance measure in discrete space is the Hamming distance, which is also a natural distance in chromosomal space, although there are other first-order distance measures, such as the quotient metric in redundant encoding [11]. We now define a second-order distance measure derived from first-order distance. Given a problem instance P, consider the unweighted undirected graph G_P = (V_P, E_P) representing first-order gene interaction [23], which is the pairwise interaction of genes. For convenience, we will assume that each gene has an interaction with itself, so that {i, i} ∈ E_P for each gene i ∈ V_P. Let A be the adjacency matrix of G_P and consider A as a binary matrix over Z_2 [25][26][27]. The second-order distance between two chromosomes x and y is then defined as d(x, y) := ‖A^{-1}(x ⊕ y)‖, where ⊕ denotes componentwise addition over Z_2 (bitwise XOR) and ‖·‖ denotes the Hamming weight, i.e., the number of nonzero components. This measure forms a metric space. Proof. It is enough to show the following four conditions [1]. (i) Non-negativity: d(x, y) = ‖A^{-1}(x ⊕ y)‖ ≥ 0. (ii) Identity of indiscernibles: consider that, since A is invertible, d(x, y) = 0 if and only if x ⊕ y = 0, i.e., x = y. (iii) Symmetry: consider that x ⊕ y = y ⊕ x, and hence d(x, y) = d(y, x). (iv) Triangle inequality: since A^{-1}(x ⊕ z) = A^{-1}(x ⊕ y) ⊕ A^{-1}(y ⊕ z) and the Hamming weight is subadditive under ⊕, we have d(x, z) ≤ d(x, y) + d(y, z). If the inverse of A does not exist, we can extend the scope of the distance metric using the following well-defined formulation: d(x, y) := min_z ‖(Az) ⊕ (x ⊕ y)‖. We note that if the inverse of A exists, then z := A^{-1}(x ⊕ y), which implies (Az) ⊕ (x ⊕ y) = 0, and hence arg min_z ‖(Az) ⊕ (x ⊕ y)‖ = A^{-1}(x ⊕ y). Our second-order distance and its extension can be computed in O(n^3) by a variant of Gauss-Jordan elimination [28], where n is the number of genes. Figure 1: (a) an example of a first-order gene interaction graph and (b) the first-order (Hamming) and second-order distances between two example chromosomes x and y; the second-order distance in this example is 2.
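The Figure 1 example can be reproduced directly: solving Az = x ⊕ y over Z_2 by Gauss-Jordan elimination recovers the minimal change vector and its Hamming weight. The graph assumed below (self-loops on all six genes plus edges {v1, v2}, {v2, v3}, {v4, v5}, {v5, v6}) is an inference consistent with the worked example in the next subsection, since the figure itself is not reproduced here.

```python
# Second-order distance d(x, y) = ||A^{-1}(x XOR y)|| over Z_2,
# computed by Gauss-Jordan elimination in O(n^3).
import numpy as np

def solve_gf2(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Solve A z = b over GF(2) by Gauss-Jordan elimination."""
    m = np.concatenate([a.copy() % 2, (b.copy() % 2)[:, None]], axis=1)
    n = a.shape[0]
    row = 0
    for col in range(n):
        pivot = next((r for r in range(row, n) if m[r, col]), None)
        if pivot is None:
            continue
        m[[row, pivot]] = m[[pivot, row]]
        for r in range(n):
            if r != row and m[r, col]:
                m[r] ^= m[row]
        row += 1
    return m[:, -1]

# Adjacency matrix (with self-loops) for the inferred interaction graph:
# vertices v1..v6 are indexed 0..5 here.
edges = [(0, 1), (1, 2), (3, 4), (4, 5)]
A = np.eye(6, dtype=np.uint8)
for i, j in edges:
    A[i, j] = A[j, i] = 1

xor_xy = np.array([1, 1, 1, 0, 1, 1], dtype=np.uint8)  # x XOR y from Fig. 1(b)
z = solve_gf2(A, xor_xy)
print(z, "second-order distance =", int(z.sum()))  # (0 1 0 0 0 1), distance 2
```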
An Application. Intuitively, our measure of the distance between two chromosomes can be understood as the minimum number of bits that must be changed to transform one chromosome into the other in the genetic process using optimal gene rearrangement. Given an undirected graph G = (V, E) with edge weights (w_e)_{e∈E}, the max-cut problem is that of finding a subset S ⊂ V which maximizes the sum of the edge weights which traverse the cut (S, V \ S) [29][30][31]. Consider the 6-node max-cut problem instance P, which is to maximize the expression f(x) = (x_1 ⊕ x_2) + (x_2 ⊕ x_3) − (x_4 ⊕ x_5) − (x_5 ⊕ x_6), where a vertex v_i belongs to the partition side x_i ∈ {0, 1} and ⊕ is the Boolean XOR operator. In this problem instance, edges {v_1, v_2} and {v_2, v_3} increase the fitness and edges {v_4, v_5} and {v_5, v_6} reduce the fitness. In the max-cut problem, we can consider that the given graph with edge weights removed shows the first-order gene interaction (see, e.g., Figure 1(a)). Figure 1(b) shows an example in which the Hamming and second-order distances between two chromosomes x and y are obtained by optimal gene arrangement of the gene interaction graph G_P. In this example, x ⊕ y = (1 1 1 0 1 1), A^{-1}(x ⊕ y) = (0 1 0 0 0 1), and hence ‖A^{-1}(x ⊕ y)‖ = 2. If we use the normalized Hamming distance (developed for the 2-grouping problem) [32,33] as the first-order distance measure, the FDC of this problem is −0.50. But when our second-order distance is used, the FDC becomes −0.95. Given a graph G = (V, E) and its adjacency matrix A = (a_{ij}), the graph bipartitioning problem is that of minimizing the cut size, sum over edges of a_{ij}(x_i ⊕ x_j), plus a balance penalty weighted by a positive constant c introduced to penalize unbalanced partitions, where x_i ∈ {0, 1} and a vertex v_i belongs to the partition side x_i. If we ignore the second balancing term altogether, we can regard the given graph as the first-order gene interaction graph of the given problem instance. Bui and Moon [21] tried gene rearrangement in a GA for graph bipartitioning and obtained dramatic improvements in performance for some graphs. We hypothesized that FDCs calculated using our second-order distance would help identify graphs that could benefit most from gene rearrangement, in terms of GA performance. Figure 2 shows the relationship between FDC and the performance improvement of a GA on 16 benchmark graphs (8 random graphs and 8 random geometric graphs) that were used in [34][35][36][37][38][39][40]. Here, the performance improvement means the difference in percentage between the average performances of a GA with and without gene rearrangement (data from [21]). The FDC values were approximated from 10,000 randomly generated local optima. When the first-order (normalized Hamming) distance was used, there was little correlation with the change in performance, but our second-order distance provided a clear correlation (see Figure 2(b) and Table 1).

Concluding Remarks

In most previous work, distances among chromosomes in GAs have usually been first-order distances, in particular Hamming distance. We have proposed a second-order distance measure for GAs, which we consider to be more meaningful. We have shown that this distance measure forms a metric space and that it can be computed efficiently. Using second-order distance allows us to see problem spaces from a different viewpoint. We have demonstrated its value in predicting the effectiveness of gene rearrangement, and we envisage it providing further understanding of the working mechanism of GAs.

Disclosure

A preliminary version of this paper appeared in the Proceedings of the Genetic and Evolutionary Computation Conference, pp. 1393-1399, 2005.
Figure 2: Correlation of gene rearrangement with FDC values computed using first- and second-order distance.

Table 1: Effect of gene rearrangement on FDCs computed using first- and second-order distance.
2018-12-08T17:49:04.939Z
2015-03-16T00:00:00.000
{ "year": 2015, "sha1": "bbf746523decdc98e15804a1ef9dd86edad8d156", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/mpe/2015/680624.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "bbf746523decdc98e15804a1ef9dd86edad8d156", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Mathematics" ] }
261065018
pes2o/s2orc
v3-fos-license
Adagio for Thermal Relics

A larger Planck scale during an early epoch leads to a smaller Hubble rate, which is the measure for the efficiency of primordial processes. The resulting slower cosmic tempo can accommodate alternative cosmological histories. We consider this possibility in the context of extra dimensional theories, which can provide a natural setting for the scenario. If the fundamental scale of the theory is not too far above the weak scale, to alleviate the "hierarchy problem," cosmological constraints imply that thermal relic dark matter would be at the GeV scale, which may be disfavored by cosmic microwave background measurements. Such dark matter becomes viable again in our proposal, due to the smaller requisite annihilation cross section, further motivating ongoing low energy accelerator-based searches. Quantum gravity signatures associated with the extra dimensional setting can be probed at high energy colliders, up to ∼13 TeV at the LHC or ∼100 TeV at FCC-hh. Searches for missing energy signals of dark sector states, with masses ≳ 10 GeV, can be pursued at a future circular lepton collider.

I. INTRODUCTION

Cosmological observations of light element abundances have led us to conclude that our understanding of the cosmos, based on the Standard Model (SM) and general relativity, can provide a quantitative description of the Universe when it was a few seconds old [1]. This corresponds to the era of Big Bang Nucleosynthesis (BBN), dominated by radiation at temperatures of O(MeV). The agreement between theory and observation implies that the rate of the cosmic expansion in this era, given by the Hubble parameter H, is set by a plasma that cannot have a significant contribution from unknown physics. At the same time, the study of cosmology has provided us with some of the starkest clues that our fundamental understanding of the Universe remains incomplete. Key examples are the mystery of what constitutes dark matter (DM) and how visible matter evaded complete annihilation, i.e., what is the source of the cosmic baryon asymmetry. There is broad consensus in particle physics and cosmology that new fundamental ingredients are needed to address these questions. While a great variety of ideas have been proposed to explain either problem, none has been shown to be a definitive resolution, neither empirically nor via inescapable theoretical imperatives. The quantitative understanding of the cosmos back to the BBN era illustrates that the predicted rates for the relevant microscopic processes, when compared to the expansion rate of the Universe at the corresponding epoch, are generally correct within margins of error. Similarly, the efficiency of a new physics mechanism that aims to address a cosmological puzzle is measured against the expansion rate set by H. In general relativity, the expansion rate itself is set by the gravitational response of spacetime to various forms of energy density and (assuming zero curvature) H ∝ M_P^{-1}, where M_P ≈ 1.2 × 10^19 GeV [1] is the Planck mass set by Newton's constant G_N = M_P^{-2} (with the reduced Planck constant ℏ and the speed of light c set to unity: ℏ = c = 1).
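A quick numerical illustration of the H ∝ M_P^{-1} scaling discussed above is given below. It uses the standard radiation-domination relation H ≈ 1.66 √g_* T²/M_P (a textbook formula, not quoted in the text), with illustrative parameter values.

```python
# Hubble rate in radiation domination, H = 1.66 * sqrt(g_*) * T^2 / M_P,
# illustrating that a kappa^{n/2}-times larger Planck mass slows the
# expansion by the same factor. Standard formula; values illustrative.
import math

M_P0 = 1.2e19      # GeV, Planck mass today
g_star = 10.0      # relativistic degrees of freedom (illustrative)
T = 0.05           # GeV, a temperature near freeze-out for GeV-scale DM

def hubble(T_gev: float, m_planck: float, g: float = g_star) -> float:
    return 1.66 * math.sqrt(g) * T_gev**2 / m_planck

kappa, n = 2.0, 6                       # example values from the text
M_P_early = kappa ** (n / 2) * M_P0     # larger early Planck mass
print(hubble(T, M_P0) / hubble(T, M_P_early))  # ratio = kappa^{n/2} = 8
```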
The above account implies that if gravity had a different strength at very early times, expectations about the viability of a cosmological model would change. In particular, if gravity was much weaker well before the BBN era, certain processes that are deemed too inefficient may have been sufficiently fast. A good example is provided by thermal relic DM with parameters that lead to inefficient annihilation, resulting in its overabundance and conflict with precision cosmological data. One may address this problem in various ways, for example by generating additional entropy later on to dilute the DM density [2]. However, we will entertain a less explored possibility here, namely weaker gravity leading to a slower expansion, which allows a longer time for DM to annihilate before its abundance is set by freeze-out. To realize this 'Adagio' scenario, we will assume that during some early cosmological epoch, the value of the Planck mass was larger, reducing the strength of gravitational coupling. The key feature of our scenario is the time variation of the gravitational coupling in the early stages of cosmology, which makes the Planck mass a function of time: M_P = M_P(t). For t < t_*, corresponding to temperatures T > T_*, gravity was weaker, i.e., M_P(t) > M_P(t_*), and we will assume that M_P was constant afterwards. In general, we would like t_* to be early enough that the well-established features of the early Universe are not perturbed significantly. As we will illustrate, this can be achieved as long as t_* is sufficiently small compared to ∼1 s (T_* is sufficiently large compared to ∼1 MeV), corresponding to the onset of BBN, which we will assume to go through according to the standard theory. One may simply postulate that M_P had some temperature dependence, M_P → M_P(T), that led to its variation over cosmic time, such that the value we observe today was set before the BBN epoch. This is not the way we usually think of gravity, but in fact such a behavior for M_P could be realized in theories with n ≥ 1 extra dimensions (an early suggestion for this possibility can be found in Ref. [3]). In an extra-dimensional framework, which is generally deemed necessary for a proper formulation of quantum gravity in string theory, the observed 4-d M_P is a derived quantity. The true fundamental scale M_F in (4 + n)-d may be much smaller than M_P if the extra dimensions are compact, with a typical size R [4,5], according to M_P^2 ∼ M_F^{2+n} R^n. On general grounds, one may expect that the extra dimensions are initially small, i.e., ∼M_F^{-1}, but dynamically grow to become large. This would then translate to a time variation of the 4-d gravitational coupling set by M_P. Given the above, we will adopt (4 + n)-d theories as our basic underlying framework. Such models can in principle alleviate the "hierarchy problem" (i.e., the smallness of the Higgs mass m_H ≈ 125 GeV compared to the implied scale of gravity) by lowering M_F to be not far above m_H.
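As a rough numerical check of the relation M_P^2 ∼ M_F^{2+n} R^n above (the standard large-extra-dimension scaling, with O(1) geometric factors dropped), the sketch below estimates the compactification radius implied by a low fundamental scale.

```python
# Size of n compact extra dimensions from M_P^2 ~ M_F^{2+n} R^n
# (standard scaling; O(1) geometric factors dropped).
M_P = 1.2e19             # GeV
GEV_INV_TO_M = 1.97e-16  # hbar*c in GeV*m

def radius(m_f_gev: float, n: int) -> float:
    """Compactification radius in meters for fundamental scale m_f."""
    r_inv_gev = (M_P**2 / m_f_gev ** (2 + n)) ** (1.0 / n)  # R in GeV^-1
    return r_inv_gev * GEV_INV_TO_M

for n in (2, 6):
    print(f"n = {n}: R ~ {radius(1.0e4, n):.1e} m for M_F = 10 TeV")
```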
[6]).Note that large classes of thermal relic models of DM with a mass ≲ 10 GeV are ruled out by Cosmic Microwave Background (CMB) observations [7,8], making them less motivated as experimental targets.However, if DM annihilation cross sections can be lowered significantly, as in our scenario, those models can become viable again. Next, we will describe the main features of our (4 + n)-d framework and sketch how the early Universe evolution, leading to variable gravity, unfolds in our scenario.We will then consider the implications of our model for DM production and outline some of its phenomenological consequences.A summary and some concluding remarks are given at the end of this work. II. MODELS WITH EXTRA DIMENSIONS In principle, the scale of compactification of n additional dimensions could be very high, which would make the underlying physics inaccessible to low energy measurements.One could assume that the fundamental scale of the (4 + n)-d theory is not very far from the weak scale ∼ O(TeV), potentially addressing its origin.This will be the main scenario we will consider in what follows, for it can be motivated as a resolution of the hierarchy problem and may be probed in future high energy experiments. We will adopt the general picture described in Ref. [9] as the starting point of the cosmological evolution.Initially, all spatial dimensions have a size ≳ M −1 F .The basic idea is that one can construct a model of inflation that satisfies key observational requirements, where the initial inflationary era leads to rapid expansion along the visible 3-brane dimensions, while the compact dimensions remain fixed near the fundamental size ≳ M −1 F .After this main inflationary era ends, the size of the extra dimensions, governed by the radion potential, starts to grow.During this time, the non-compact dimensions shrink and the radiation contained on the visible brane blue-shifts.Once the radiation on the brane and radion potential have equal sizes, the contraction of the visible dimensions would stop. In typical scenarios for which Ref. [9] aims to provide a cosmological framework, the radion eventually reaches the minimum of its potential where the extra dimensions attain their final stabilized size.After this point, the evolution of Universe will resemble the standard 4-d picture, at low energies.A generic problem in this scenario is that the radion ends up as an oscillating modulus which is long-lived on cosmological time scales and can lead to early matter domination and possible conflict with cosmological data.This could be addressed, for example, by a brief period of secondary inflation, diluting the energy density in the radion field [9]. We will consider the framework of Ref. [9], outlined above, but depart from it by adding an interlude to the evolution of the cosmos before it ends up with stable extra dimensions.In our modified scenario, the radion potential initially has a different minimum corresponding to larger compact dimensions than arrived at eventually.This intervening cosmological era would then be governed by a 4-d gravitational interaction that could have been significantly weaker than the one observed today.Below, we will argue that the general demands of our proposed scenario do not fit within the specific model assumed in Ref. [9], where the radion potential acts as the main source of inflation. 
Assuming that the radion potential V_rad(R) is the source of inflation along the 3-brane dimensions, corresponding to the visible world, the primordial density perturbations are given by an expression, Eq. (2), that depends on R_I, the size of the extra dimensions during inflation, and on a parameter S that needs to be O(10^{-3}) for a consistent inflationary scenario [9]. To avoid significant deviations from scale-invariant perturbations, as required by data, R_I should be approximately constant, R_I ∼ M_F^{-1}, during inflation. As we will discuss below, the largest reheat temperature ∼ O(GeV), consistent with cosmological constraints, can be attained for maximal n, and so we will mostly focus on the case with n = 6 extra dimensions and M_F ≳ 10 TeV, for general consistency with experimental bounds that we will discuss later. This implies that in order to have a suppressed Hubble constant during freeze-out, the radion potential may only transition to its "late" Universe minimum, corresponding to the present value of M_P, at T < GeV. This would typically demand that V_rad is governed by scales ≲ O(GeV). Using Eq. (2), the preceding considerations imply that δρ/ρ ≲ 7 × 10^{-8}, which is well below the measured value ∼ O(10^{-5}) [1]. Note that here, the spectral index n_s is also set by S, through Eq. (3) [9]. Since the measured value is n_s ≈ 0.97 [8], choosing S ≪ 10^{-3} is not a viable option for enhancing the density perturbations in Eq. (2). Given the above analysis, we then assume that an appropriate brane-localized potential is present for an inflaton Φ, such that it allows for sufficient inflation of ∼60 e-folds, or more, to address large scale features of the cosmos. The density perturbations produced during the slow roll of Φ are given by the standard expression δρ/ρ ≃ H_I^2/(2π|Φ̇|), where H_I^2 ≈ V(Φ)/(3M_P^2) gives the inflationary Hubble scale and Φ̇ is the time derivative of Φ. During inflation, 3H_I Φ̇ ≈ −dV(Φ)/dΦ, which is subject to the usual slow-roll conditions. We thus have δρ/ρ ≃ 3H_I^3/(2π|dV(Φ)/dΦ|). For M_F R_I ∼ 1, choosing V(Φ)^{1/2} ∼ (100 GeV)^2 can then easily yield the observed level of density perturbations. Based on the above discussion, we may then assume that the radion potential stays at a minimum that yields R > R_0, where R_0 is today's size of the extra dimensions, until after freeze-out at T < GeV. We note that, for an intermediate radius R = κR_0, the value of the Planck mass would be κ^{n/2} times larger and the corresponding Hubble constant would be smaller by that amount. This would allow consideration of thermal relic DM with O(10) times smaller annihilation cross sections than in the standard picture, for κ ∼ 2 and n = 6.

Constraints on Extra Dimensional Cosmology

During the period where the compact extra dimensions are changing size, the (4 + n)-dimensional metric is approximately described by the Kasner solutions [9], with scale factors a(t) = a_i (t/t_i)^k and b(t) = b_i (t/t_i)^l, where a_i and b_i are the initial scale factors for the 3 large and n compact dimensions, respectively, and t_i is the initial time where the contraction of the compact extra dimensions begins. For the case where the compact dimensions are contracting, the values of k and l in the exponents are given by k = (3 + √(3n(n + 2)))/(3(n + 3)) and l = (n − √(3n(n + 2)))/(n(n + 3)). Note that these are the solutions with opposite sign from those considered in Ref.
[9]. For n = 6 extra dimensions, we obtain k = 5/9 and l = −1/9. This implies that if the compact dimensions shrink by a factor of κ, then the large dimensions will increase in size by a factor of κ^5. Since the temperature cools as the 3-dimensional universe expands, we require that the anomalous expansion from the Kasner phase ends before the temperature falls below T_BBN ≈ 2 MeV [10,11], to avoid significant deviation from standard BBN. This means the Kasner phase must begin before the temperature drops below T_min ≡ κ^5 T_BBN. The presence of large extra dimensions allows for the production of light Kaluza-Klein (KK) gravitons in the early Universe, which could cause conflict with observational data. To avoid such problems, one is led to assume that the Universe did not attain a high reheat temperature, which limits the scope of cosmological models considered in this framework. These considerations were revisited in Ref. [12], and the most stringent constraint, based on preserving the products of BBN, was determined to be a bound, Eq. (12), on the maximum reheat temperature T_max in terms of the fundamental scale M_F, where r ≈ 6 is a numerical factor. To adapt the above bound, Eq. (12), to our Adagio scenario, we multiply the left-hand side by a factor κ^{n/2} to account for the fact that we assume a value of M_P ∝ R^{n/2} during the relevant cosmological era that is κ^{n/2} times larger (to avoid excessively complicated results, we have only considered this factor, which gives the main effect for general n). An additional factor of κ should also be included to reflect the growth of the graviton KK mass scale by ∼κ after the extra dimensions shrink to their late Universe size; the more massive the relic that decays, the more stringent the bound from BBN on its abundance [12]. With these modifications, we obtain the relation, Eq. (13), that applies to our scenario. The above bound could be somewhat alleviated if one accounts for the non-hadronic decay channels of KK gravitons [12], but we adopt it to be more conservative. As can be seen from Eq. (13), the dependence of T_max on κ ≲ 10 is not very strong. Requiring T_min < T_max then leads to a condition relating κ and M_F. The temperature where the radion potential readjusts to its late value will be taken to be well below the freeze-out temperature, which implies T_* ≪ T_max. This allows for a simpler and more transparent treatment of the cosmic evolution in our work. Hence, we can assume that the DM relic abundance is set while the value of M_P is larger than today's value, but constant.

III. DEMONSTRATION WITH DARK PHOTON MEDIATED DARK MATTER

For the purpose of demonstration, we present an analysis using a concrete dark matter model with fermionic dark matter coupled to a dark photon, associated with a dark U(1)_D gauge interaction, which kinetically mixes with the SM photon. We will assume that the dark sector is localized on the same brane as the SM content, making it effectively 4-dimensional. For more general treatments of dark photons and kinetic mixing in extra dimensional models see, for example, Refs. [13][14][15], where other possible effects may also allow circumvention of the CMB bounds considered here.
The phenomenologically relevant part of the Lagrangian is L ⊃ χ̄(iγ^µ D_µ − m_χ)χ − (1/4) F′_{µν} F′^{µν} + (1/2) m_{A′}^2 A′_µ A′^µ − (ε/2) F′_{µν} F^{µν}, with dark matter field χ, of unit U(1)_D charge, dark photon field A′_µ, dark photon field strength tensor F′_{µν}, ordinary photon field strength tensor F_{µν}, and kinetic mixing parameter ε. The covariant derivative D_µ = ∂_µ + i e_D A′_µ has gauge coupling e_D, and the "dark fine structure constant" is α_D ≡ e_D^2/(4π). When 2m_χ < m_{A′}, the annihilation of χ proceeds through an off-shell dark photon to SM charged particles. The s-wave thermal cross section for this annihilation to a charged particle with mass m_0, charge Q, and number of colors n_c is given by the standard dark photon exchange expression (see e.g. Ref. [18] for the general formalism, Ref. [19] for the treatment applicable to dark photons, and Ref. [20] for a simplified expression in the limit m_0 ≪ m_χ). In the very early universe, this annihilation cross section, together with the Hubble expansion, sets the relic abundance via thermal freeze-out, and at later times, these same annihilations will affect the CMB. On the other hand, the dark photon can be produced on-shell in collisions of SM particles at colliders, with the dark photon then decaying invisibly to dark matter almost 100% of the time. This leads to the search channel of mono-γ plus missing energy at e+e− colliders. To determine the relic density, the following standard equations from Ref. [21] are used: x_f = ln[0.038(j + 1)(g/g_*^{1/2}) M_P m_χ σ_0] − (j + 1/2) ln{ln[0.038(j + 1)(g/g_*^{1/2}) M_P m_χ σ_0]}, (17) where x_f = m_χ/T_f determines the freeze-out temperature T_f, g is the number of internal degrees of freedom of the relic, g_* counts the relativistic degrees of freedom at freeze-out, and, for s-wave annihilation, j = 0 and σ_0 = ⟨σv⟩. The relic energy density in χ is then given by Ω_χ h^2 ≈ 1.07 × 10^9 GeV^{-1} (j + 1) x_f^{j+1}/(g_*^{1/2} M_P σ_0), (18) where h ≈ 0.67 [1]. For the thermal freeze-out mechanism to work, it is required that T_f < T_max, i.e., m_χ < x_f T_max. (19) The cold dark matter energy density of the Universe is observed to be Ω_CDM h^2 = 0.12 [1]. In our Adagio scenario, M_P at the time of freeze-out is given by M_P = κ^{n/2} M_{P,0}, (20) where κ is as in Eq. (8) and M_{P,0} ≈ 1.2 × 10^19 GeV is the value of the 4-dimensional effective Planck mass today. Since both Eq. (17) and Eq. (18) depend on the product M_P σ_0, a lower cross section can be exactly compensated by a larger Planck mass to produce the same relic abundance at the same freeze-out temperature. Figure 1 shows how the Adagio cosmology modifies the parameter space that leads to thermal relic dark matter, along with the relevant constraints from CMB measurements by Planck [8], mono-γ searches from BABAR [16], and projections for mono-γ searches at Belle II [17], for the choice of model parameters α_D = 0.5 and m_{A′} = 3m_χ. Figure 2 shows the same curves for the choice of model parameters α_D = 0.5 and m_{A′} = 2.5m_χ. Figure 3 similarly shows the case with α_D = 0.2 and m_{A′} = 3m_χ. Based on the allowed temperature range from Eq. (19), m_χ could be as large as x_f T_max, which for typical values of x_f ∼ 20 and T_max ∼ 1 GeV (for M_F = 50 TeV) leads to m_χ ≲ 20 GeV.
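The compensation between σ_0 and M_P noted above can be verified numerically with Eqs. (17) and (18). The sketch below implements those standard freeze-out expressions with illustrative parameter values and shows that scaling M_P up by a factor while scaling σ_0 down by the same factor leaves Ω_χ h² unchanged.

```python
# Freeze-out relic abundance from the standard expressions quoted above;
# demonstrates that Omega*h^2 depends on M_P and sigma_0 only through
# the product M_P * sigma_0. Parameter values are illustrative.
import math

def x_f(m_chi, sigma0, m_planck, g=2.0, g_star=10.0, j=0):
    a = 0.038 * (j + 1) * (g / math.sqrt(g_star)) * m_planck * m_chi * sigma0
    return math.log(a) - (j + 0.5) * math.log(math.log(a))

def omega_h2(m_chi, sigma0, m_planck, g_star=10.0, j=0):
    xf = x_f(m_chi, sigma0, m_planck, g_star=g_star, j=j)
    return 1.07e9 * (j + 1) * xf ** (j + 1) / (math.sqrt(g_star) * m_planck * sigma0)

m_chi = 5.0           # GeV
sigma0 = 1.0e-9       # GeV^-2, illustrative s-wave <sigma v>
M_P0 = 1.2e19         # GeV
kappa_factor = 10.0   # ~ kappa^{n/2} enhancement of the early Planck mass

print(omega_h2(m_chi, sigma0, M_P0))                                 # standard
print(omega_h2(m_chi, sigma0 / kappa_factor, M_P0 * kappa_factor))   # Adagio, same value
```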
IV. POSSIBLE SIGNALS

The main motivation for large extra dimensions is to solve the hierarchy problem by placing the electroweak scale close to the fundamental scale of gravity M_F. In this case, quantum gravity effects can be searched for at colliders. A variety of ATLAS [22,23] and CMS [24,25] searches have constrained the fundamental scale to be well above the TeV scale, with the strongest limits requiring M_F ≳ 9.2 TeV [24]. The ultimate LHC reach for the fundamental scale is M_F = 13 TeV [23]. As the center of mass energy is critical for reaching much larger values of M_F, we expect that a future hadron collider with a center of mass energy of 100 TeV [26] would be able to access M_F ≲ 100 TeV. The large extra dimension framework naturally points to thermal relic DM around the GeV scale, which can be made compatible with the CMB constraints in the Adagio scenario. The dark photon mediator can be searched for experimentally. While there are many collider searches for dark photons, in this scenario the dark photon decays to dark matter instead of to SM final states. Searches for mono-γ at Belle II [17] can, in principle, probe some of the relevant parameter space in our scenario, as seen in Figs. 1, 2, and 3, but that region of parameters is disfavored by the Planck limits. Most of the mass region in our dark photon mediated realization of the Adagio scenario will require higher energies, with an integrated luminosity comparable to the 50 ab^{-1} of Belle II. A future lepton collider with high luminosity, such as FCC-ee or CEPC, could search for dark photons beyond the mass reach of Belle II [27]. We note that the cross section for e+e− → γA′, in the limit m_{A′}^2 ≪ s, scales as 1/s [28], where √s is the center of mass energy. For a lepton collider operating at s = m_Z^2, where m_Z = 91.2 GeV is the Z vector boson mass, integrated luminosities of O(100 ab^{-1}) have been envisioned [27]. Since the cross section for A′ production at such a facility would be ∼100 times smaller compared to that at Belle II, we then expect a sensitivity to ϵ that is ∼10 times worse. Hence, for a future circular lepton collider operating at the Z pole, we expect a sensitivity to ϵ ∼ 5 × 10^{-4}, for m_{A′} ≳ 10 GeV, corresponding to m_χ ≳ 3 GeV in Fig. 1.

V. POTENTIAL ALTERNATIVE UTILITY

Here, we will examine "freeze-in" [29] as a possible alternative mechanism for the production of DM. In this framework, DM and its associated interactions are never in thermal equilibrium. This points to very feeble interactions between the visible components of the cosmic energy density and the dark sector. It is interesting that such a connection between the two sectors could actually be motivated by astrophysical data that seem to favor non-standard cooling mechanisms for stellar objects [30]. This could be realized through the coupling of electrons, for example, to a light boson.
Let us, for simplicity, assume that a light scalar ϕ, in the keV regime, couples to electrons with strength y_e. One can roughly take y_e ∼ 10^{-15} [31] to be in the regime of interest for a possible explanation of the anomalous stellar cooling hints. A rough estimate of the freeze-in abundance Y_χ ≡ n_χ/s produced via the light mediator ϕ, where n_χ is the DM number density and s ∼ g_* T^3 is the entropy density, can be obtained from Y_χ ∼ w y_e^2 y_χ^2 M_P/(g_*^{3/2} m_χ), (21) with w a numerical factor of O(π^{-6}) [29] and y_χ the coupling of ϕ to DM χ. For m_χ ∼ 0.1 GeV, as an example, DM self-interaction limits require y_χ ≲ 10^{-3} [31]. For g_* ∼ 10, using m_χ ∼ 0.1 GeV, for example, we see that Y_χ is much smaller than the ∼10^{-9} required. How about an interaction with muons? Let us assume the coupling y_µ ϕ µ̄µ. One can estimate the rate for µ+µ− → γϕ as ∼ α y_µ^2 T. Requiring this process to be out of equilibrium (for a freeze-in scenario, and to avoid overproducing ϕ, which could act as extra radiation and cause tension with BBN) would yield y_µ ≲ 10^{-9}, and hence we may adopt y_µ ∼ 10^{-10}. Using Eq. (21) with y_e → y_µ, we find Y_χ ∼ 10^{-10}, which is about O(10) too low. However, in an Adagio scenario with M_P → O(10) M_P, one may accommodate a freeze-in mechanism using muon initial states. At the same time, the coupling of ϕ to electrons may provide an explanation of the anomalous stellar cooling mentioned above. Here, we also note that for m_χ ∼ 0.1 GeV we may assume that the reheat temperature is ∼0.1 GeV. In that case, since the mass of the tau lepton is m_τ ≈ 1.8 GeV, there would be a suppressed thermal τ population. Thus, one may assume that its coupling to ϕ is larger than that to muons, g_τ ∼ 10^{-8}, without overproducing ϕ. A 2-loop diagram can then induce [32] δg_e ∼ g_ℓ (α^2/16π^2)(m_e/m_ℓ), where ℓ = µ, τ. Here, a roughly m_ℓ^2 scaling of the lepton couplings to ϕ has been assumed, as a possibility. More detailed calculations are needed for more reliable estimates, and the preceding discussion is only meant to elucidate another potential application of our Adagio cosmological scenario.

VI. CONCLUDING REMARKS

We have shown how extra-dimensional models can realize a changing M_P, slowing the timescales of early universe cosmology. Using this Adagio mechanism to reduce the Hubble expansion in the early universe, GeV-scale thermal relic dark matter, disfavored by CMB constraints, becomes viable again. Naturalness arguments that the electroweak scale should not be too separated from the fundamental scale of gravity also point to the GeV scale for thermal relics in this scenario. In such a case, the LHC or future hadron colliders can directly look for quantum gravity effects. This new avenue for producing thermal relics with the correct abundance provides a new motivation for GeV-scale dark sector searches. Though we focused on low mass thermal relic DM production, an Adagio scenario can, in principle, be implemented in other contexts as well. This could potentially affect what is often called the unitarity bound on the thermal relic DM mass, which requires it to be below ∼100 TeV [33]. If the minimum requisite annihilation cross section can be lowered below the canonical values, one could entertain DM masses above this bound. However, this scenario would require M_F to be much larger than that considered in this work, to allow reheating to much higher temperatures.
Another potential consequence of a smaller Hubble rate, corresponding to larger causally connected volumes, could be in the relation between the temperature at which primordial black holes may form in the early Universe and their typical masses. In the presence of an Adagio interlude, one would generally expect larger masses for such black holes, given the larger collapsing Hubble volume at a given temperature. Of course, one may also entertain the opposite 'Allegro' scenario, where the Hubble rate is larger than the standard value as a function of temperature (or energy density, in general). This could, for example, allow larger cross sections for thermal relics than the typical expectation, providing other alternatives for viable DM models. However, we will not discuss this possibility further here and leave it for future work.

FIG. 1. Plot of dark matter mass m_χ versus kinetic mixing ϵ in a dark photon mediator model with α_D = 0.5 and dark photon mass m_{A′} = 3m_χ. Constraints from Planck [8] (green), BABAR [16] mono-γ searches (blue), and the projected reach of Belle II [17] mono-γ searches (red dotted) are shown. Curves show the parameter space that reproduces the observed relic abundance for standard cosmology (black, dotted) and for our Adagio cosmology with 10 times larger M_P in the early universe (black, solid). The constraint on T_min is shown by the vertical orange solid line, T_max for M_F = 13 TeV (within LHC reach) corresponds to the vertical orange dashed line, and T_max for M_F = 50 TeV (within FCC-hh reach) is the vertical orange dotted line.
2023-08-23T06:45:34.725Z
2023-08-21T00:00:00.000
{ "year": 2023, "sha1": "632284249fcaf968974b581b1dcb0bb9f1162ff5", "oa_license": "CCBY", "oa_url": "http://link.aps.org/pdf/10.1103/PhysRevD.108.123525", "oa_status": "HYBRID", "pdf_src": "ArXiv", "pdf_hash": "e715137cbdc596a4bea7e91e6dcb4730dd37c70c", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
52196348
pes2o/s2orc
v3-fos-license
Chemical kinetics in an atmospheric pressure helium plasma containing humidity

Atmospheric pressure plasmas are sources of biologically active oxygen and nitrogen species, which makes them potentially suitable for use as biomedical devices. Here, experiments and simulations are combined to investigate the formation of the key reactive oxygen species, atomic oxygen (O) and hydroxyl radicals (OH), in a radio-frequency driven atmospheric pressure plasma jet operated in humidified helium. Vacuum ultra-violet high-resolution Fourier-transform absorption spectroscopy and ultra-violet broad-band absorption spectroscopy are used to measure absolute densities of O and OH. These densities increase with increasing H2O content in the feed gas, and approach saturation values at higher admixtures on the order of 3 10 cm−3 for OH and 3 10 cm−3 for O. Experimental results are used to benchmark densities obtained from zero-dimensional plasma chemical kinetics simulations, which reveal the dominant formation pathways. At low humidity content, O is formed from OH+ by proton transfer to H2O, which also initiates the formation of large cluster ions. At higher humidity content, O is created by reactions between OH radicals, and lost by recombination with OH. OH is produced mainly from H2O+ by proton transfer to H2O and by electron impact dissociation of H2O. It is lost by reactions with other OH molecules to form either H2O + O or H2O2. Formation pathways change as a function of humidity content and position in the plasma channel. The understanding of the chemical kinetics of O and OH gained in this work will help in the development of plasma tailoring strategies to optimise their densities in applications.

Introduction

The interaction of non-thermal atmospheric pressure plasmas (APPs) with biological matter and their potential applications as biomedical devices [1][2][3][4] are currently a topic of significant interest. APPs have been shown to be effective in many different areas of biomedicine, such as sterilization [5][6][7], cancer treatment [8][9][10][11][12], and wound healing [13][14][15], and have recently been identified as potential triggers of beneficial immune responses [16]. First trials on patients confirm the effectiveness of APPs [13,17,18]. APPs may offer advantages compared to conventional therapeutics due to their typically small dimensions, offering the possibility of locally confined treatment, low production cost, and the potential to tailor sources for specific applications. A key question in plasma interactions with biological matter is the role played by plasma produced reactive species (RS). RS are known to interact with cells and their membranes, and often serve as signaling agents in cell metabolism [19,20]. They can also cause severe damage to cells at high concentrations [19,21]. For APPs to fulfil their potential in any biomedical application, a full characterization of the sources used to produce them is necessary, including the quantification of the RS produced. Reactive oxygen and nitrogen species (RONS), such as atomic oxygen and nitrogen (O and N), ozone (O3), excited states of molecular oxygen (e.g. O2(a1Δg)), or nitric oxides, have previously been quantified both experimentally and numerically in O2 and N2 containing plasmas [22][23][24][25][26][27][28]. Here, the production of RS in an enclosed APP operating in helium with small contents of humidity is investigated.
Water is typically present in the direct vicinity of biological material, and can easily enter the gas phase via evaporation. Therefore, RS produced from water vapor can be created during the treatment of the material when plasmas are applied. Water is also usually present as a feed gas impurity. 29 Therefore, the investigation of RS directly produced from water vapor, such as O and the hydroxyl radical OH, is of interest for biomedical applications. These species can act as precursors for longer-lived species such as hydrogen peroxide (H2O2), an important signaling agent in cells, 19,30 and O3. At high concentrations, both of these species can have toxic effects on biological material. The quantification of RS in APPs represents a challenge for diagnostics based on optical emission from excited states, since the plasma emission is strongly quenched by the ambient gas due to the high pressure. Laser Induced Fluorescence (LIF) and Two-photon Absorption LIF (TALIF) have previously been used to detect species such as O and OH produced from water vapor. [31][32][33][34] However, in order to accurately predict the effect of quenching using these techniques at atmospheric pressure, the densities of all potential quenching particles are needed. This is increasingly challenging in complex gas mixtures and in regions with gradual gas mixing, like the plasma effluent. These techniques also rely on quenching rate coefficients for the investigated species with all possible quenchers, which in some cases, particularly quenching involving water molecules, are only poorly known. The implementation of faster laser systems such as picosecond or femtosecond lasers [35][36][37] can help to quantify the effect of these quenching processes. In addition to accounting for the effects of quenching to obtain absolute density measurements using TALIF, an additional calibration measurement involving a gas with a known quantity is typically needed. An alternative diagnostic technique, which is independent of collisional quenching, is mass spectrometry. This technique has recently been used to detect RS such as OH and H2O2 produced from water vapor in the plasma effluent. 38 Similar to LIF and TALIF, this technique requires a calibration measurement to obtain absolute species densities. Mass spectrometry has also been used to detect high-order protonated water clusters 39,40 produced in APPs. Species deposited in a liquid by plasma treatment are sometimes investigated by means of absorption spectroscopy in the liquid phase, and electron paramagnetic resonance spectroscopy. 41,42 However, to calculate gas-phase densities from liquid-phase densities, a calibration is usually required. An established optical diagnostic technique for the quantification of OH in the gas phase is ultra-violet (UV) Absorption Spectroscopy (AS), 34,[43][44][45] which is independent of collisional quenching and does not require an additional calibration measurement. However, measuring ground state densities of atomic species produced in water-containing plasmas, such as O, is challenging using AS since the energy gaps between the ground and excited states of the atoms are large. Therefore, the required excitation wavelengths typically lie in the vacuum ultra-violet (VUV) spectral range, which is strongly absorbed by air.
However, atomic species, in particular O and N, have previously been quantified in an APP using synchrotron radiation and a spectrometer with an ultra-high spectral resolution, so-called VUV high-resolution Fourier-Transform Absorption Spectroscopy (VUV-FTAS). 22,23 In this work, we combine VUV-FTAS and UV broad-band AS (UV-BBAS) to determine absolute densities of gas-phase O and OH in a radio-frequency APP jet operated in helium (He) for different values of humidity up to 1.3%. We combine the experimental investigations with zero-dimensional, plug-flow plasma simulations to model the chemical kinetics in the source. These models are commonly used to study properties of atmospheric pressure plasmas. [46][47][48][49][50][51] The role of humidity in the plasma chemistry of APPs has been the subject of numerical investigations in the past, 48,[52][53][54] which have established the baseline understanding of these systems. We build upon these prior works by comparing modeling results to experiments performed for the same conditions. Species densities are measured and simulated mainly in the plasma bulk, to validate the reaction mechanism. The resulting reaction mechanism can then be used to investigate important formation pathways for different RS, and to predict additional species densities that are difficult to measure. In many applications, reactive species exit the plasma source into ambient air, where the chemical kinetics will differ from the active plasma region. This transition is not investigated here, but the validated reaction mechanism constructed in this work will act as a base to be built upon for future studies in this area.

Atmospheric pressure plasma jet

The production of atomic oxygen (O) and hydroxyl radicals (OH) in an atmospheric pressure plasma jet (APPJ) operating in 5 slm helium (either grade 4.6 with 25 ppm N2 and 7 ppm O2 impurities, or grade 5 with 3 ppm H2O and 2 ppm O2 impurities) with water vapor (H2O) admixtures is investigated. The plasma jet used in this work is shown in Fig. 1 and is the same as described by Dedrick et al. 23 The jet has a plane-parallel electrode configuration. One electrode with an area of 2.4 × 0.86 cm^2 is powered by a sinusoidal voltage at a frequency of 13.56 MHz, while the other electrode, which is the housing of the source, is grounded. Powered and grounded electrodes are separated by 0.1 cm, keeping the critical dimensions and operating parameters close to the 'COST Reference Microplasma Jet', 55 but with a smaller surface-to-volume ratio (22 cm^-1 here instead of 40 cm^-1 for the 'COST Reference Microplasma Jet'). An impedance matching network unit (L-configuration) is used to optimise the power coupled into the plasma. The applied voltage across the gap is monitored using a high-voltage probe. A list of the equipment for the power coupling and voltage monitoring can be found in Appendix B. We generally conduct experiments at a fixed generator power. For measurements of OH, we set the generator power close to the arcing point of the plasma in pure helium, which is independent of the generator used and typically occurs around 520 ± 10 Vpp. Starting at this point also maximises the measurement range with respect to water content. The water content is varied while keeping the settings on the generator constant. The implications of this for the power coupled to the plasma will be discussed further in Section 2.4.
At higher voltages, the plasma tends to extend around the powered electrode when operated in pure He, and transitions from a homogeneous glow-like discharge into a constricted 'arc' mode at the electrode edges, which can damage the source. For the O measurements, the source is operated in a vacuum vessel, with limited options for visually identifying the plasma mode through a transparent vacuum flange. In this case, measurements are carried out at a lower voltage of 470 Vpp to avoid the constricted mode and related damage to the electrodes. For the spatially resolved measurement of OH (presented in Section 4.1), a different plasma source, as described elsewhere, 22 has been used. It utilises the same design concept, i.e. the same gap size of 1 mm, but a slightly larger electrode area of (1.1 × 3) cm^2, compared to the source described earlier. Since the surface-to-volume ratios of both designs are very similar, we assume the RS production to be comparable under similar operating conditions (gas flow, power density). Water vapor is admixed into the He flow using two mass flow controllers and a homemade bubbler, which consists of a domed glass adapter (Biallec GmbH) clamped to a KF40 flange. Two stainless steel pipes welded to the flange provide gas in- and outlets. Both mass flow controllers are fed with dry He, while the outlet flow of one controller passes through the bubbler before being mixed with the other. By changing the ratio of the humidified to the dry He flow, the water vapor content of the total gas flow can be regulated. 34,43,45 With the humidity level in the He flow leaving the bubbler being saturated (see below), the vapor pressure p_vap(H2O) can be calculated using the semi-empirical expression of ref. 56, which is a function of the water temperature T_w in °C. The total amount of water in the vapor phase can then be calculated using the vapor pressure of H2O (in bar) and the flow rate of He through the bubbler, F_He^bubbler, as described in ref. 34. To check that the He flow exiting the bubbler is saturated with H2O, the weight loss of the bubbler due to evaporation of the water is measured for different He flow rates as a function of time. The absolute water concentration in the He flow is given by the ratio of the two former quantities. The results are shown in Fig. 2. A small systematic drop of the measured water concentration with increasing He flow is observed. This may reflect a temperature drop of the water inside the bubbler, which is not temperature controlled, either because of an increased evaporation rate, or because of fluctuations in the laboratory room temperature, since measurements were taken over several days. The averaged result for the water concentration of (16.9 ± 2.0) g m^-3 corresponds to a water temperature of (20 ± 2.0) °C assuming full saturation. Since this temperature represents the typical 'room temperature' at which the measurements were taken, we can confirm that the He flow out of the bubbler is saturated with water vapor. The uncertainty of 2 °C would lead to an uncertainty of approximately 14% in the calculation of the water vapor content in the gas.
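The exact semi-empirical vapour-pressure expression of ref. 56 is not reproduced above, so the short sketch below uses the Magnus approximation as a hedged stand-in to illustrate the humidity calculation; the function names, coefficients, and flow split are illustrative assumptions rather than values taken from this work.

```python
import math

def p_vap_water(T_c):
    """Saturation vapour pressure of water in Pa (Magnus approximation,
    used here as a stand-in for the expression of ref. 56)."""
    return 611.2 * math.exp(17.62 * T_c / (243.12 + T_c))

def humidity_ppm(f_bubbler_slm, f_total_slm, T_c=20.0, p_tot=101325.0):
    """Approximate water content (ppm) of the mixed He flow, assuming the
    bubbler outlet is fully saturated and the added vapour flow is small."""
    x_sat = p_vap_water(T_c) / p_tot              # mole fraction at bubbler outlet
    return 1e6 * x_sat * f_bubbler_slm / f_total_slm

# Absolute water concentration of the saturated flow via the ideal gas law
T_c = 20.0
rho = p_vap_water(T_c) * 18.015e-3 / (8.314 * (T_c + 273.15))   # kg m^-3
print(f"{rho * 1e3:.1f} g/m^3")              # ~17.2, within the measured 16.9 +/- 2.0
print(f"{humidity_ppm(1.0, 5.0):.0f} ppm")   # e.g. 1 slm of the 5 slm total humidified
```

At 20 °C this stand-in reproduces the measured water concentration within its stated uncertainty, consistent with the saturation check described above.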
VUV high-resolution Fourier-transform absorption spectroscopy

Absolute line-averaged O atom ground state densities are measured at the DESIRS beamline of the synchrotron SOLEIL, 57 with its unique ultra-high resolution VUV Fourier-transform spectrometer, 58 able to cover the complete VUV spectral range down to 40 nm with a resolving power (λ/Δλ) of up to 10^6. The atomic oxygen transition O(2p^4 3P_J=2 - 3s 3S_1) is investigated in this work. The measurement and analysis procedure is described in detail elsewhere. 22 The spectrometer yields a transmission spectrum S_T, which includes the convolution of the plasma transmission T (accounting for Doppler and pressure broadening of the corresponding spectral line profile) with the sinc-shaped instrumental function F(σ') = sin(π(σ' − σ))/(π(σ' − σ)) of the FT spectrometer, where σ is the wavenumber of the transition and S_0 is the reference spectrum without absorber. Absolute densities are obtained from the transmission spectra using Beer-Lambert's law, A(σ) = −ln T(σ) = k(σ) l, (4) where A(σ) is the absorbance, l the length of the absorbing medium (here defined by the width of the electrode as 8.6 mm), and k the absorption coefficient, which includes the ground state density, the spectral line Voigt profile (taking into account the Doppler and pressure broadening of the spectral line), the statistical weights g_J for the different states, and the transition probabilities. For the evaluation of these transmission spectra, the different broadening mechanisms are taken into account as fixed values during the fitting process. The instrumental broadening is set to Δσ_I = 0.87 cm^-1 as described elsewhere. 22 The Doppler width Δσ_D = 0.24 cm^-1 is calculated for a gas temperature T_g = 304 K. This value is determined as the OH rotational temperature from the absorption measurements by fitting the OH(X 2Π_i, v = 0) - OH(A 2Σ+, v = 0) rotational transitions using a spectral simulation (see results discussed in Section 4.2 and shown in Fig. 9). The detailed working principle of the simulation is described in the next section. Rotational temperatures were obtained for different water contents ranging from 0.1 to 1.3%. A standard deviation of 2.2 K shows that the gas temperature stays fairly constant within the investigated range of H2O admixtures. Finally, the pressure broadening is determined as Δσ_L = 0.37 cm^-1 from an average of several automated fits to the data using the previously specified values for Δσ_D and Δσ_I. This value is in reasonable agreement with Δσ_L = (0.46 ± 0.03) cm^-1 for He measured by Marinov et al. 59 in the 'COST Reference Microplasma Jet' using Doppler-free TALIF, albeit for a different optical transition. A typical transmission spectrum is presented in Fig. 3. We only evaluate the strongest J = 2 transition of the O(2p^4 3P_J) triplet from the fine-structure split ground state to the first electronically excited state, because of the low signal-to-noise ratio of the weaker J = 0, 1 transitions. In order to estimate the total ground state density n_O = Σ_{J=0-2} n_J, the Boltzmann factor g_J exp(−E_J / k_B T_g) is applied, (5) where E_J is the energy of the state and k_B is the Boltzmann constant. The main uncertainties in this technique lie in the estimated absorption length (uncertainty of 5%) and the accuracy of the transition probability A_ik (≤3% 60), which are included in the expression for the absorption coefficient in eqn (4). A change of the gas temperature within 10 K influences the Boltzmann factor calculated using eqn (5) by less than 2%. We therefore estimate the systematic error in all VUV-FTAS measurements presented here to be within 10%.
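As a concrete illustration of the Boltzmann correction in eqn (5), the sketch below scales a measured J = 2 population to the total 3P ground-state density; the fine-structure energies are tabulated (NIST) values, and the input density is for illustration only.

```python
import math

# O(2p^4 3P_J) fine-structure levels (tabulated NIST values, cm^-1) and
# statistical weights g_J = 2J + 1
E_J = {2: 0.0, 1: 158.265, 0: 226.977}
g_J = {2: 5, 1: 3, 0: 1}

def total_O_density(n_J2, T_gas=304.0):
    """Scale the measured J = 2 population to the full 3P ground state,
    assuming a Boltzmann distribution over the fine-structure levels (eqn (5))."""
    kT = 0.695035 * T_gas                      # k_B in cm^-1 per K
    Z = sum(g * math.exp(-E_J[J] / kT) for J, g in g_J.items())
    return n_J2 * Z / g_J[2]                   # the J = 2 level carries g_2/Z of n_O

print(total_O_density(1.0))         # ~1.35: n_O is ~35% above n_(J=2) at 304 K
print(total_O_density(1.0, 314.0))  # <2% change for a 10 K shift, as stated above
```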
UV broad-band absorption spectroscopy

Absolute OH densities are measured in the same plasma source by UV-BBAS, using two different experimental setups to ensure reproducibility. The first setup, UV-BBAS I, is presented in Fig. 4(a). Light from an ultra-stable broad-band plasma lamp (Energetiq EQ-99) is guided through the middle of the plasma channel and focused onto the entrance slit of a 320 mm spectrograph (Isoplane SCT320) with a 2400 grooves per mm grating. Spectra are recorded using a photodiode array detector (Hamamatsu S-3904). The setup is described in detail elsewhere. 61 The second setup (UV-BBAS II), which is shown in Fig. 4(b), comprises several different components, mainly a UV LED (UVTOP-305-FW-TO18, Roithner Lasertechnik GmbH) as light source and a CCD camera (Andor Newton 940) in combination with a spectrometer (Andor SR-500i) as detector. For the UV-BBAS II setup, the plasma is mounted on an automated x-z stage, allowing for spatially resolved measurements in the plasma channel. The experimental setup is described in detail elsewhere. 24 To calculate the absorbance in eqn (4), four signals are required: plasma on and light source on (I_P,L), plasma on only (I_P), light source on only (I_L), and a background with both plasma and light source off (I_0). Each signal is integrated over a time period of 50 ms, with a plasma stabilisation time of 4 s beforehand. A schematic showing this sequence is shown in Fig. 5. The plasma transmission T in eqn (4) is calculated as T(σ) = (I_P,L(σ) − I_P(σ)) / (I_L(σ) − I_0(σ)). An example spectrum of the OH absorbance A is shown in Fig. 6. Using the two setups, an admixture range of 200-13 000 ppm humidity content is investigated. Measured OH rotational absorbance spectra of the transition OH(X 2Π_i, v = 0) - OH(A 2Σ+, v = 0) are fitted using a spectral simulation in order to obtain absolute OH(X 2Π_i, v = 0) densities. The fitting programme is based on a calculation of the Einstein coefficients and wavelengths for the individual transitions within the investigated rotational band, as described by Dieke and Crosswhite. 62 Based on the selection rules for the total angular momentum J = Λ ± S and the angular momentum Λ (excluding the electron spin S = 1/2 of the OH radical), relative intensities are calculated for the 12 possible branches, using expressions derived by Earls. 63 An experimental value for the radiative lifetime of a rotationless upper state F_1(J = 0.5) has been determined as 0.688 μs 64 (here, F_1 denotes the doublet component of the upper state with J = Λ + 1/2, in accordance with Dieke and Crosswhite 62). Therefore, all calculated relative Einstein coefficients can be normalised to this value. Our calculated values are in good agreement with those of Goldman and Gillis. 65 As in Dilecce et al., 45 the spectral fitting includes an instrumental function, whose width represents the spectral resolution of the spectrometer, which depends on the pixel size of the detector array, the optical grating, and the width of the spectrograph's entrance slit. We assume the instrumental function to be Gaussian. Examples of measured and simulated absorbance spectra are shown in Fig. 6. Here, the instrumental width is 56 pm (UV-BBAS I) or 34 pm (UV-BBAS II), which is much larger than the Doppler broadening (Δλ_D(304 K) = 0.098 cm^-1 = 0.93 pm) and the pressure broadening (estimated as Δλ_P(1 atm) = 0.07 cm^-1 = 0.66 pm, as in ref. 66). The fitting programme is also used to calculate OH rotational temperatures. The main systematic uncertainties of UV-BBAS lie in the estimation of the absorption length (5%) and the accuracy of the calculated Einstein coefficients, which we estimate here to be within 10%.
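A minimal sketch of the four-signal absorbance evaluation just described, with hypothetical flat spectra standing in for real detector data:

```python
import numpy as np

def absorbance(I_PL, I_P, I_L, I_0):
    """Absorbance from the four recorded signals: plasma emission and detector
    background are subtracted before applying Beer-Lambert's law (eqn (4))."""
    T = (I_PL - I_P) / (I_L - I_0)
    return -np.log(T)

# Hypothetical 512-pixel spectra, for illustration only
I_0  = np.full(512, 1e-3)                 # plasma off, light source off
I_P  = np.full(512, 5e-3)                 # plasma on only (plasma emission)
I_L  = np.full(512, 1.0) + I_0            # light source on only
I_PL = I_P + 0.999 * (I_L - I_0)          # plasma + lamp, 0.1% absorption assumed
print(absorbance(I_PL, I_P, I_L, I_0).mean())   # ~1.0e-3
```

An absorbance of this size would sit just above the ~3 × 10^-4 noise floor quoted below for the UV-BBAS II setup.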
For the absorbance measured with the UV-BBAS II setup (featuring the LED), the standard deviation of the noise is on the order of 3 × 10^-4, which places a lower limit on the measurable OH density of 3.6 × 10^13 cm^-3. For the UV-BBAS I setup (featuring the ultra-stable light source), the noise level of the measured absorbance is typically an order of magnitude lower, and is therefore disregarded in the uncertainty estimation. The combination of the systematic error and a statistical error of 7% is shown as error bars in the results that follow.

Determination of plasma power

For an accurate comparison between simulation and experiment, the rf power dissipated in the plasma is particularly important. Experimentally, this so-called plasma power is measured by determining current, voltage, and phase shift using current (Ion Physics Corp. CM-100-L 1 V/A) and voltage probes (PMK-14KVAC). The probes are installed between the impedance matching unit and the plasma source. The time-averaged power P is given by P = (1/2) U I cos(φ), where U and I are the voltage and current amplitudes, respectively, and φ is the phase shift between the two. Parasitic power losses, e.g. into the plasma source or the rf cables, are accounted for by measuring the power deposited in the system without a gas flow, so that the ignition of the plasma is inhibited. The subtraction method 67,68 is then used for a given current to determine the plasma power, P_d(I^2) = P_on(I^2) − P_off(I^2). The net power P_d is the difference between the powers measured with and without plasma, P_on and P_off, respectively. For a given plasma volume V_plasma, the corresponding plasma power per unit volume is p_d = P_d / V_plasma. The instrumental phase shift of the measurement system (probes, BNC signaling cables, and digital oscilloscope) is determined using a variable air capacitor with known phase shift (MFJ 282-2018-1). For the calibration measurement, the plasma source with its rf cable to the matching box is replaced by this capacitor. Current and voltage waveforms are recorded by a fast oscilloscope (LeCroy WaveSurfer 10, 10 GS per s sample rate). The voltage and current amplitudes as well as the corresponding phase shift are determined by a Fourier analysis of these data. P_d was found to be approximately constant (within 15%) as a function of feed gas water content at constant generator power and matching settings. The average of P_d over several different water contents is used as an input for the simulations over the whole range of water content. The average value of P_d was determined as 2.8 W (≈14 W cm^-3) for the UV-BBAS measurements of OH (at approximately 510 Vpp), and 2.1 W (≈10 W cm^-3) for the measurement of O using VUV-FTAS (at approximately 470 Vpp). These values are used as the input for the simulations, unless otherwise stated. Power measurements are carried out separately from the density measurements using two power generators: a Coaxial RFG-150-13 (150 W maximum output power, the same model as used for the OH measurements with the UV-BBAS II setup), and a Coaxial RFG-50-13 (50 W maximum output power, with a smaller power range for better stability). We find a similar average power using these two setups, and the power stays constant as a function of water content within one measurement set, with a standard deviation of all points below 5%. We estimate a total uncertainty of 15% from repetitive measurements.
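A short sketch of this subtraction method under assumed, purely illustrative calibration data (parasitic losses taken as ohmic, i.e. linear in I^2) and an invented operating point:

```python
import numpy as np

def avg_power(U_amp, I_amp, phi):
    """Time-averaged rf power, P = (1/2) U I cos(phi)."""
    return 0.5 * U_amp * I_amp * np.cos(phi)

# Hypothetical plasma-off calibration of parasitic losses vs I^2
I_sq_off = np.array([2e-3, 4e-3, 8e-3, 16e-3])             # A^2
P_off    = np.array([0.2, 0.4, 0.8, 1.6])                  # W
loss_fit = np.polyfit(I_sq_off, P_off, 1)                  # linear fit

# Hypothetical plasma-on operating point (V_pp = 510 V -> amplitude 255 V)
U, I, phi = 255.0, 0.08, np.deg2rad(72.0)
P_on = avg_power(U, I, phi)
P_d  = P_on - np.polyval(loss_fit, I**2)                   # subtraction method
print(P_on, P_d)   # net power dissipated in the plasma, ~2.5 W here
```

With these invented numbers the net power lands in the same few-watt range as the values quoted above, but the figures carry no experimental meaning.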
These variations are small enough not to significantly influence the measured species densities, particularly for OH, which we found to be only weakly dependent on the applied voltage, and therefore on power (an increase of about 40% when the voltage is increased from 490 to 850 Vpp, not shown here).

Model description and reaction mechanism

To better understand the dynamics of reactive species in cold atmospheric pressure plasmas, zero-dimensional plasma chemical kinetics simulations (global models) are often used. 51 In this work, experimental results are compared to those obtained using the GlobalKin code as described elsewhere. 50 GlobalKin solves the continuity equation for mass conservation for both charged and neutral species, taking into account particle production and loss through gas-phase reactions and interactions with surfaces. In this equation, N denotes the number density of heavy particles i, S_V the surface-to-volume ratio, L_D the diffusion length, D the diffusion coefficient, γ the surface sticking coefficient, f the return fraction of species from walls, and S_i the source term, which accounts for gas-phase production and losses. In addition, the electron energy conservation equation is solved to calculate the electron temperature in the plasma by taking into account the balance of power input and loss of electron energy due to elastic and inelastic collisions with heavy particles. Here, n_e is the number density of electrons, T_e the electron temperature, m_e and M_i the electron and heavy particle masses, respectively, ν_mi the electron collision frequency, k the reaction rate coefficient, and Δε_l the electron energy gain/loss through inelastic collisions. GlobalKin also incorporates a two-term approximation Boltzmann solver, which updates the electron energy distribution function during the simulation and calculates electron impact rate coefficients, using electron impact cross sections as an input. From the electron energy distribution function, electron transport coefficients are also determined for use in the continuity equation. In this work we apply a temporally constant power deposition corresponding to the time-averaged power measured in the experiment. For rf APPs the electron heating is strongly modulated in time, leading to a power and electron impact rate coefficients that vary during the rf cycle. 69,70 This effect is not captured in our model. However, Lazzaroni et al. 70 investigated the differences between a conventional global model, using a time-averaged power deposition, and one that takes into account the time-varying power deposition within the rf period. For their case, using a He/O2 reaction mechanism, the densities of neutral species calculated by the modified model (O, O3, O*, O2*) were typically within a factor of 2 of those from the conventional model. The trends in the results of the two models were similar. Therefore, we expect that neglecting the time-varying power deposition in our model will only lead to a quantitative difference in the results, while the trends should remain valid. Gas temperatures are calculated self-consistently by GlobalKin from an energy balance. 50 Here, N_g is the gas density, c_p the specific heat of the gas, T_g the gas temperature, ΔH_i the change of enthalpy for reactions with rate R_i, κ the thermal conductivity of the gas, and T_s the surface temperature of the reactor wall.
Therefore, GlobalKin balances gas heating via electron collisions (first term on the right hand side), chemical reactions (second term), and heat exchange with the surrounding walls (third term). Here, we assume T_g^0 = 295 K (room temperature) as the initial temperature of the gas before entering the plasma channel. Coupled powers are typically small in this work; therefore it is assumed that the reactor wall is not significantly heated, and T_s is set to 295 K. This is in good agreement with previous observations, 24 where the electrode temperature was measured using an infrared thermometer in a very similar plasma configuration under a variation of plasma power. The model incorporates 43 species and 390 reactions. Table 1 contains the species in the mechanism. The plasma reaction mechanism is given in Appendix A (Tables 5-8). At the surfaces, it is assumed that most neutral and negatively charged species (except electrons) do not react, while positive ions are neutralised with a probability of 1. The species assumed to react differently are listed in Table 4 in Appendix A. A detailed discussion of the role of surface interactions in a similar simulation system is given elsewhere. 71 In this work, the model is solved for a channel length of 2.4 cm, with a gas flow rate of 5 slm, which corresponds to a gas velocity of about 11 m s^-1. Using a pseudo-1D plug flow, temporally computed densities are converted into spatially dependent quantities. Where species densities are presented as a function of humidity content, densities are extracted from the simulation at the axial centre of the source (at 1.2 cm), which is the position where measurements were made. From the plasma dimensions, the diffusion length L_D, a necessary parameter for determining the diffusion losses of particles, is calculated as 72 L_D^-2 = (π/x)^2 + (π/y)^2 + (π/l)^2 (13) for a plasma with rectangular cross section (x × y) and length l. For the plasma source used here, L_D = 0.0316 cm. (This is larger than for the 'COST Reference Microplasma Jet', with L_D = 0.0225 cm.)

Pathway analysis

The PumpKin software 73 is used to identify the production and destruction pathways for selected neutral species. The reaction pathway of a species of interest results from (a) analysing the elementary reactions that contribute directly or in subsequence to the formation of this species and selecting only the significant ones, (b) algebraically summing up the formal notations of these reactions, and (c) eliminating shorter-lived species, here the electrons and some ions, to end up with a simplified net reaction. The short-lived species are defined as those with a lifetime shorter than a lifetime set by the user, which we denote as t_p. Note that this so-called net reaction should not be mistaken for an elementary reaction, since it is specific to the choice of eliminated species. This approach is particularly useful for understanding the production and destruction of species which are formed via complex reaction pathways involving a chain of elementary reactions, as opposed to simply one or two. Among the neutral species considered in this work, the He* and He2* metastables have the shortest effective lifetime. For the pathway analysis, we therefore choose t_p to be slightly shorter than the lifetime of these species, in accordance with previous work. 52 Table 2 shows the strong dependence of the simulated lifetime on the humidity admixture for specific plasma conditions.
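As a quick numerical cross-check of the diffusion lengths quoted above, assuming eqn (13) is the standard lowest-order expression for a rectangular channel (the COST jet dimensions of 1 × 1 mm^2 cross section and 30 mm length are an assumption here):

```python
import math

def diffusion_length(x_cm, y_cm, l_cm):
    """Lowest-order diffusion length of a rectangular channel (eqn (13))."""
    inv_sq = (math.pi / x_cm)**2 + (math.pi / y_cm)**2 + (math.pi / l_cm)**2
    return inv_sq ** -0.5

print(diffusion_length(0.1, 0.86, 2.4))  # ~0.0316 cm, the source used here
print(diffusion_length(0.1, 0.1, 3.0))   # ~0.0225 cm, COST reference jet
```

Both printed values match the quoted 0.0316 cm and 0.0225 cm, which supports this reading of eqn (13).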
These findings support the conclusion of Niemi et al. 46 that the metastable character of these helium species at high pressure is significantly reduced in the presence of small admixtures, or even impurity levels, of molecular gases through Penning ionization under atmospheric pressure conditions.

OH densities along the plasma channel

The density of OH along the plasma channel for an intermediate humidity content of 5400 ppm and a plasma power density of 18 W cm^-3 is shown in Fig. 7. Experimental results show a rapid increase of the OH density over the first 2 mm of the channel. With increasing distance from the gas inlet, the density stays approximately constant between 3.5 × 10^14 cm^-3 and 4.0 × 10^14 cm^-3 up to the end of the channel. A similar trend is also observed in the simulation. Absolute simulated and experimental densities agree within around 25%, which is likely within the combined uncertainties of the experimental data (shown as error bars in Fig. 7) and the simulations. Uncertainties in the simulation results would most likely arise from uncertainties in the reaction rate coefficients used and in the considered reaction pathways, and were shown to be within a factor of 10 for a He/O2 reaction mechanism under similar plasma conditions. 74 To gain insight into the dynamics of OH formation, a pathway analysis is performed for the three regions highlighted in Fig. 7, which correspond to the fast build-up of OH at the entrance of the plasma channel (0-0.2 cm), a steady-state region (2-2.5 cm), and the decay of OH in the plasma effluent (3.3-3.5 cm). The dominant production and consumption pathways for OH, averaged over each region, are shown in Fig. 8. At the entrance of the discharge channel (0-0.2 cm), the gas consists mainly of the initial feed gas mixture plus some rapidly forming species such as ions and electrons. Therefore, the main production reactions for OH are electron impact with water vapor, either via dissociation or dissociative attachment:

e + H2O → OH + H + e (60%) (14)
e + H2O → OH(A) + H + e (9%) (15)
e + H2O → OH + H− (16)

The products of eqn (15) are ground state atomic hydrogen and OH in its excited OH(A) state, while the products of eqn (14) are both in their ground states. The percentage contribution of each reaction to the total production of OH is shown in brackets. Another production mechanism for OH is through the formation, and subsequent destruction, of charged water clusters, as was previously identified by Ding and Lieberman. 52 The formation of these clusters is typically a multi-step process. For positive clusters, this process usually begins with the ionisation of H2O, either through electron impact or Penning ionisation with He*. These water ions then collide with water molecules to form OH and the cluster ion H+(H2O), which accumulates additional water molecules through a series of reactions that can be summed into a single net reaction (17). Similar processes occur for the negative ion clusters OH−(H2O)n, which are included in the reaction mechanism for n ≤ 3. The main consumption pathways for OH in the first 0.2 cm of the jet are the formation of H2O2 and recombination to water through

OH + OH + M → H2O2 + M (18)
OH + OH → H2O + O (19)

The rapid increase of the OH density within the first two millimeters raises the question of whether averaging the pathways over this region is a valid analysis. For both the production and consumption pathways, the contribution of each reaction does not change significantly if evaluated at separate points within the first 0.2 cm instead of averaging over this region.
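The build-up-and-plateau behaviour described here follows from a roughly constant source term combined with a quadratic OH loss. The toy plug-flow sketch below reproduces the shape of Fig. 7 with invented rate values; the source S and coefficient k_loss are illustrative assumptions, not the mechanism's values.

```python
import numpy as np

# Illustrative 0-D plug-flow sketch of the OH balance (hedged, toy values):
# production S ~ k_diss * n_e * n_H2O (treated as constant in the plasma),
# loss dominated by OH + OH (eqn (18) and (19)), i.e. quadratic in [OH].
S      = 3.7e17        # cm^-3 s^-1, assumed constant source term
k_loss = 1.5e-12       # cm^3 s^-1, representative OH + OH coefficient
v      = 1.1e3         # cm s^-1 gas velocity (5 slm in this channel)

t  = np.linspace(0.0, 2.4 / v, 2000)    # residence time over the 2.4 cm channel
dt = t[1] - t[0]
OH = np.zeros_like(t)
for i in range(1, len(t)):              # explicit Euler integration
    OH[i] = OH[i - 1] + dt * (S - 2.0 * k_loss * OH[i - 1]**2)

print(f"steady state ~ {np.sqrt(S / (2 * k_loss)):.2e} cm^-3")  # ~3.5e14
print(f"exit density ~ {OH[-1]:.2e} cm^-3 at x = 2.4 cm")
```

With these numbers the density saturates near 3.5 × 10^14 cm^-3 within the channel, mirroring the measured plateau.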
However, the ratio of the total rates of production and consumption changes significantly, leading to the increase in OH density over this region. Further from the gas inlet, the rates of production and consumption equalise, leading to an equilibrium OH density. In the quasi steady-state region (2-2.5 cm), the previous pathways still dominate. However, additional species with intermediate lifetimes build up along the channel and begin to play a role in the formation of OH. For example, hydroperoxy radicals (HO2) promote the production of OH through reactions with H (H + HO2 → OH + OH). In the afterglow region (3.3-3.5 cm), a rapid decay of OH occurs both in experiment and simulation, as shown in Fig. 7. Short-lived species such as ions and electrons recombine rapidly, while metastable species like He* and He2* are consumed through Penning ionization with water before reaching this region. Therefore, the chemistry in the plasma effluent is dominated by intermediate and long-lived neutral species, and OH is produced mainly through reactions between H and longer-lived neutral species such as H2O2 and HO2. In this region, consumption occurs at a higher rate than production, leading to a decrease in the OH density, with reactions (22) and (18) (collisions with H2O2 and OH) dominating.

OH densities under varying humidity content

The density of OH measured by UV-BBAS in the centre of the plasma channel (at position 1.2 cm) as a function of the H2O content in the feed gas is shown in Fig. 9(a). The OH density increases sub-linearly with increasing H2O content, as previously observed. 34,43 Absolute densities obtained in this work also agree well with results obtained by others. 34,43 Absolute OH densities measured with the two different experimental setups agree well within the uncertainties of each measurement. The simulated OH densities at different feed gas humidity contents are also shown in Fig. 9(a). In general, good agreement in the trends of the experimental and simulation results is observed. Absolute OH densities agree particularly well at low H2O contents <2000 ppm. Towards higher H2O contents, simulated densities are higher than those measured experimentally. The largest difference is a factor of 1.8 at the highest H2O content, which is reasonable agreement given the previously mentioned uncertainties. OH(X) rotational temperatures and gas temperatures calculated using GlobalKin are shown in Fig. 9(b), and are found to be in good quantitative agreement with each other. In both experiment and simulations, temperatures stay fairly constant with increasing water content. While in the simulation a very small decrease of the gas temperature is observed, the experimental data are more scattered, and a clear trend cannot be discerned taking into account the uncertainties of the measurement. The main production and consumption pathways for OH at different H2O admixtures are shown in Fig. 10. At low H2O contents, OH is mainly produced via the water cluster pathway (eqn (17)). At any stage of the clustering process, the clusters can be destroyed by dissociative recombination with electrons. Towards higher H2O contents, this pathway is gradually replaced by direct electron impact dissociation or dissociative electron attachment of H2O (eqn (14) and (16)). OH is mainly consumed by reactions with other OH radicals (eqn (18) and (19)) and with O (eqn (27)). Towards higher water admixtures, the contributions of these reactions to the consumption of OH decrease slightly, and reactions of OH with H, and with more slowly forming species such as H2O2 and HO2, become more important.
In both experiment and simulation, OH densities increase rapidly with H2O at low H2O content, and less rapidly at high H2O content. The transition between these two regimes occurs at a lower H2O content (around 2000 ppm) in the experiment compared to the simulation (around 3000 ppm). This leads to the increasing discrepancy between simulation and experiment at higher H2O contents, where the experimental OH densities saturate while the simulated OH densities continue to slowly increase. The reason for this transition is investigated by looking at the most important formation pathways for OH, which are production by electron impact dissociation and dissociative attachment of H2O (eqn (14)-(16)), and consumption via reactions with OH to form H2O2 and H2O (eqn (18) and (19)). As the gas temperature remains relatively constant with changing water content, the rate coefficients for the consumption of OH also stay approximately constant. The reaction rate coefficient for the production of OH depends on the electron temperature T_e and the electron density n_e. Fig. 11(a) shows these two quantities as a function of humidity content. T_e and n_e show opposite trends with increasing H2O admixture. T_e, which is calculated from balancing the electron energy sources and losses (see eqn (11)), increases with increasing H2O content due to increasing electron energy losses in inelastic collisions with water molecules. For constant power input, the increased electron energy losses and T_e are balanced by a decrease in n_e with increasing H2O content. The effect of these changes on the total rate coefficient for dissociation, k_diss = k14 + k15 + k16, and on the dissociation frequency R = k_diss n_e is shown in Fig. 11(b). Due to the variation in T_e, k_diss increases with increasing humidity content, exhibiting a similar trend to the OH densities shown in Fig. 9. Here, the transition from a fast to a slow increase also occurs around 3000 ppm. The dissociation frequency R exhibits a peak at this humidity content, which represents the optimum between the increasing T_e and the decreasing n_e. Thus, the transition between the fast and slow increase in OH density with increasing humidity content results from the transition between an increasing dissociation frequency below 3000 ppm H2O and a decreasing dissociation frequency above 3000 ppm H2O. Overall, the dissociation rate R × n_H2O, and therefore the OH density, increases over the whole range due to the increasing value of n_H2O. Based on this discussion, the differing H2O contents at which the transition occurs in the experiment and simulation may indicate that the rate of electron energy loss with increasing H2O content is misrepresented in the simulation. Another reason for the discrepancy between the simulated and measured trends in OH densities at higher water contents might be an additional consumption mechanism for OH which is not taken into account in this work, such as the population of vibrationally excited states, which would also scale with T_e.

O densities as a function of humidity content

The absolute O density measured by VUV-FTAS in the centre of the discharge (at x = 1.2 cm, triangles) as a function of the H2O content in the feed gas is shown in Fig. 12. Simulated O densities are around a factor of 2 lower than those obtained experimentally. The measured densities agree well with previous measurements in a similar source using two-photon absorption laser induced fluorescence. 34
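The competition between a rising k_diss(T_e) and a falling n_e can be sketched with invented trend curves shaped like Fig. 11(a); the functional forms, threshold, and prefactor below are assumptions chosen only to reproduce the qualitative optimum, not the simulated values.

```python
import numpy as np

# Hypothetical T_e / n_e trends vs humidity, shaped like Fig. 11(a):
# T_e rises and n_e falls as the H2O admixture increases at constant power.
h2o = np.linspace(200.0, 13000.0, 200)                 # ppm
T_e = 2.0 + 1.0 * h2o / (h2o + 3000.0)                 # eV, assumed saturating rise
n_e = 1.0e11 * 3000.0 / (h2o + 3000.0)                 # cm^-3, assumed decay

E_thr  = 9.0                                           # eV, assumed effective threshold
k_diss = 1e-9 * np.exp(-E_thr / T_e)                   # cm^3 s^-1, Arrhenius-like form
R      = k_diss * n_e                                  # dissociation frequency

print(f"R peaks near {h2o[np.argmax(R)]:.0f} ppm H2O") # optimum of rising k, falling n_e
```

Even with these crude stand-in curves, R passes through a maximum at an intermediate admixture, qualitatively reproducing the peak the simulations place near 3000 ppm.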
A possible explanation for the difference in absolute O densities and trends between the experiment and simulation may be limitations of the global model, particularly the accuracy of the rate coefficients used, as discussed earlier. O is not directly produced from H2O by electron or heavy particle impact dissociation in significant amounts at the electron temperatures of interest. As a result, O must be formed in a process taking at least two steps, meaning that the uncertainties in multiple rate coefficients will play a role in determining the uncertainty in the simulated O density. Consequently, the simulated O density is likely to have a larger uncertainty than the simulated OH density, whose dominant formation occurs directly from electron collisions with H2O. As shown in Fig. 13, the dominant production mechanism of O is the recombination of two OH molecules to form H2O and O. At lower H2O contents, O is also formed through processes involving positive ion water clusters, e.g. OH+ + H2O → O + H+(H2O). With increasing H2O admixture, the formation of O2 also increases. As a result, electron impact dissociation of O2 becomes a more important production pathway for O (eqn (29) and (30)). O is mainly consumed by reactions with OH, forming O2 and H (eqn (27)). The measured and simulated O densities show an increased discrepancy towards smaller H2O admixtures <1000 ppm. A possible explanation for this might lie in the presence of unintentional air impurities in the experiment, which have previously been found to influence the chemical kinetics in atmospheric pressure plasmas. [75][76][77] For the measurement of O, we use helium with a purity level of 99.999%, for which the main impurities are H2O (3 ppm) and O2 (2 ppm). Additional small impurities could arise from residual gases in the feed gas line. Simulations for two different non-zero O2 impurity concentrations, of the order of typical O2 impurities originating from the feed gas supply, are shown in Fig. 12.

Fig. 11: (a) Electron density n_e and temperature T_e, and (b) combined rate coefficient k_diss for electron impact dissociation and dissociative attachment, and dissociation frequency R = k_diss n_e. Conditions are 5 slm total He flow and 14 W cm^-3 plasma power.

Numerical investigation of the production of longer-lived species

The OH density reaches a steady-state value well before the end of the plasma channel in both simulation and experiment. Particularly at higher H2O content, this is a result of OH being primarily produced by direct electron impact dissociation of H2O in a one-step process (eqn (14)) and consumed in interactions with other OH molecules. Atomic hydrogen behaves similarly, being produced mainly by electron impact dissociation and consumed via interactions with surfaces, 71 both of which occur relatively quickly. However, other species do not reach a steady state within the length of the plasma channel, and instead continuously increase in density up to the outlet of the plasma source. This is particularly true for slowly forming, long-lived species such as O2, H2O2, and H2, as shown in Fig. 14. This finding suggests that the length of the plasma source, or the gas flow rate, and therefore the residence time of the gas, can be used to control the ratio of different species densities by taking advantage of the different timescales required for them to reach steady state.
First, we will discuss the formation of O in more detail. The density of O does not reach a steady-state value in the simulation within the plasma channel for most investigated conditions using a He-H2O gas mixture. Long timescales for simulations of atmospheric pressure He-H2O plasmas to reach steady state have also been found by others. 78 This is in contrast to the case where similar sources are operated in He-O2 mixtures. 79 In the work described in ref. 79, O densities approach steady state towards the end of the plasma channel of the APPJ. In Fig. 14, O densities increase sharply within the first few millimetres of the channel, and then at a lower rate up to the end of the channel. Therefore, the O densities follow a similar dependence as the OH densities also shown in Fig. 14. This is not surprising when considering that both the dominant production and consumption pathways are related to OH, i.e. production by reactions of two OH molecules to form H2O and O (eqn (19)), and consumption by reactions with OH (eqn (27)). The fact that O is still building up within the channel while OH approaches a steady-state value is due to the continuous build-up of O2 in the channel, also shown in Fig. 14. Electron impact dissociation of O2 (eqn (29) and (30)) provides an additional formation mechanism for O further into the channel, although eqn (19) and (27) remain the dominant production and consumption pathways for O. Overall, this leads to a slow increase of the O density while the O2 density continues to increase. Species that reach steady state on timescales longer than the residence time in the discharge channel are usually formed via complex multi-step processes. As an example, we demonstrate the dominant pathways for the formation of O2, which is an important precursor for the formation of excited states of O2, such as O2(a1Δ). Since O2 is a slowly forming species, we look at the dominant production and consumption pathways for a longer timescale t_p than those previously given in Table 2. The timescale of interest in the simulation is chosen so that only He, H2O, O2, O2(a1Δ), H2, and H2O2 are treated as long-lived species, in accordance with previous studies. 52 The computational lifetimes of the shortest-lived of these six species are listed in Table 3 for different H2O contents. The two main net production reactions for the formation of molecular oxygen are given by eqn (31) and (32). Many different pathways are possible in order to obtain these net reactions; a few examples are given by eqn (33)-(35). Note that eqn (33) and (34) proceed via the formation of cluster ions: the cluster ions produced are consumed by dissociative recombination with electrons, and H, which is formed in that process, is lost by surface recombination. Other pathways exist that involve the formation of clusters, but they are not explicitly discussed here. Eqn (35) has a different net production reaction than the others. In this case, OH produced from electron impact dissociation reacts with H2O2 to form reactive HO2, which then forms O2 in reactions with OH.

Conclusions

In this work, the chemical kinetics in an rf atmospheric pressure plasma with humidity are investigated using experimental and numerical techniques. By using this combination, computed species densities are benchmarked against experimental densities. The simulations are then used to reveal the dominant formation pathways of species of interest, here OH and O, and of longer-lived species such as O2, which is an important precursor for the formation of its excited states, such as O2(a1Δ).
This work provides a detailed understanding of the chemical kinetics in the active plasma. In many applications, reactive species will exit the plasma source and transit into an effluent region where they mix with ambient air and where their chemical kinetics will differ. While this is not considered in this work, the results presented here provide a basis to be built on in future work to understand reactive species kinetics in this transition region. Absolute number densities of O and OH are determined experimentally using VUV high-resolution Fourier-transform absorption spectroscopy and UV broad-band absorption spectroscopy, and numerically by using the 0-D plasma chemical kinetics code GlobalKin. Absolute OH densities and formation pathways are investigated as a function of position in the discharge. Three different regions can be identified, i.e. (a) a strong increase of the OH density in the first few millimeters of the plasma channel, (b) a quasi steady-state region, and (c) a rapid drop of the OH density in the plasma effluent region. During the fast increase and in the steady-state region, OH is mainly produced via fast processes such as electron impact dissociation of H2O, and consumed predominantly via reactions with other OH molecules to form H2O2 or H2O. These relatively simple chemical kinetics make it possible for OH to reach an equilibrium value within the plasma channel. Other species, whose densities have not been measured, are investigated numerically as a function of position in the plasma channel. Simulation results show that the H density approaches a steady-state value within the plasma channel, similarly to OH as discussed previously, as it is mostly formed directly via electron impact dissociation of water and consumed at surfaces to form stable H2. However, most other species generated in the He-H2O plasma studied in this work do not reach a steady-state value within the length of the plasma channel due to more complex formation mechanisms. This has been shown using O2 as an example. Therefore, the length of the plasma source could be used as a control parameter to tune the chemical composition of the gas at the end of the plasma jet for applications. Both OH and O densities are also investigated as a function of the humidity content in the He feed gas. It is found, both in experiments and simulations, that O and OH densities increase non-linearly with increasing feed gas humidity, offering the possibility of tailoring reactive species densities by changing the feed gas composition. The maximum OH density is on the order of 3-4 × 10^14 cm^-3 (13-17 ppm). It is found that at very low water content, OH is mainly produced via reactions between H2O+ and water molecules to form OH and protonated water clusters of the form H+(H2O)n, while electron impact dissociation of H2O becomes an increasingly important production pathway with increasing water content. The main loss channel for OH at all H2O contents is recombination to form H2O2. The maximum O density, on the other hand, is found to be on the order of 3 × 10^13 cm^-3 (1.3 ppm). Recombination of two OH molecules is the most important production process for O at all H2O contents, while at very low water content, O is also strongly produced via reactions between OH+ and water molecules to form O and protonated water clusters. Since the dominant destruction pathway of O is recombination with OH to form O2 and H, the formation of O is strongly coupled to the OH density in the gas flow.
At higher H2O concentrations, electron impact dissociation of accumulated O2 can also contribute to the production of O. It is also found that towards low H2O content, production of O from air impurities in the ppm range originating from the feed gas can increase the O density via direct electron impact dissociation of O2. Towards higher H2O admixtures, this effect becomes less significant due to increased production via collisions involving OH. Therefore, larger amounts of purposely admixed molecules lead to better control of the plasma properties and reactive species than operating the source with small or no intentional admixtures.

Conflicts of interest

There are no conflicts to declare.

A. Reaction mechanism

Table 4 shows wall recombination coefficients and return species for the simulations used in this work. Table 5 shows the electron impact reactions used in this work. Reaction rate coefficients are either taken from the literature or calculated by the GlobalKin two-term Boltzmann equation solver. For the latter, reaction rate coefficients are indicated as f(E), and the collisional cross sections are taken from the indicated literature. Electron impact cross sections are taken from several databases. 84 Although not all of the reactions from these databases are included in the plasma-chemical reaction mechanism shown here, they are still accounted for in the Boltzmann solver calculation of the electron energy distribution function and the electron transport coefficients. Any other approach to obtaining reaction rate coefficients is denoted by footnotes. Table 6 shows reaction rate coefficients for ion-ion recombination processes. It is generally known that ion-ion recombination can occur both as a two- and as a three-body process, depending on the gas pressure. Two-body reaction rate coefficients for several different gases have been obtained at low pressure in ref. 85, and found to be of the order of 10^-13 m^3 s^-1 or lower. Taking into account the He density at atmospheric pressure and 315 K, and the rate coefficient for three-body ion-ion recombination proposed by Kossyi, 86 the effective two-body reaction rate amounts to a value of the order of 10^-12 m^3 s^-1. Due to this higher effective rate coefficient under our conditions, we only include three-body ion-ion recombination rate coefficients in this work. These reactions are found to be particularly important for the destruction of the higher-mass water clusters, which are abundant at higher H2O admixtures. Similar observations have been made by Liu et al., 48 who, after an analysis of the robustness of their chemistry model, only included a few ion-ion recombination reactions in their simplified models, a large fraction of which were three-body recombination processes for collisions of higher-mass cluster ions. We also found that under our conditions, ion-ion recombination between positive He ions and negative ions is negligible due to the rapidly decreasing He ion density with increasing water content, which again is in accordance with the findings of Liu et al., 48 and with the fact that He ions undergo charge exchange reactions with most neutral species due to their high ionisation potential. Table 7 shows reaction rate coefficients for collisions between ions and neutrals. In this table a number of three-body processes are included. Three-body processes are typically characterised by a pressure dependence.
The nature of these reactions means that this pressure dependence normally takes the form of a curve exhibiting low- and high-pressure limits. In the low-pressure limit, the effective rate coefficient (i.e. the three-body rate coefficient multiplied by the third-body density) is linear in the third-body density. In the high-pressure limit, the effective rate coefficient is independent of the density of the third body. In the region between the two limits, the effective rate coefficient is non-linear in the third-body density. For a number of reactions this transition region occurs around atmospheric pressure; therefore, effective rate coefficients must be calculated using the available knowledge of the high- and low-pressure limits. The coefficients which have been explicitly calculated for atmospheric pressure are marked as 'effective' in Tables 7 and 8 (footnotes to Table 7: f, value is listed as a lower limit in the reference; g, estimated branching ratio; h, third body is H2O in the reference). Among these reactions are the formation and destruction of the protonated water clusters H+(H2O)n, for which the rate coefficients are given by Sieck et al. 87 Here, the expressions given in ref. 87 are used to calculate the effective rate coefficients for these reactions under our plasma operating conditions (atmospheric pressure, T_g = 280-350 K). The results are fitted with an Arrhenius expression, where possible, in order to keep the temperature dependence of these reactions, since the formation of cluster ions is highly temperature dependent. 87 The rate coefficients for the formation of the two highest-order clusters taken into account in this work are estimated by extrapolating the coefficients k_300^0 and A given by Sieck et al. using an exponential fit, and using constant values n = 16, B = 5000, and k_L = 10^-24 (see Sieck et al. 87 for further description of these coefficients). Table 8 shows reaction rate coefficients for collisions between neutral species. Similar to the description of the ion-neutral reactions in Table 7, a number of reaction rates in Table 8, including neutral reactions of several O- and H-containing species, are specified as 'effective'. Data to calculate the rate coefficients for these reactions have generally been taken from the IUPAC chemical kinetics database. 88 For the calculation of 'effective' decay rates, and generally for three-body processes, we have multiplied the three-body rate coefficient from the respective source by a background-gas-dependent efficiency factor, where available, if the collider gas in the reference is different from He. This accounts for the fact that He is a less effective quencher compared to other background gases. Under all investigated conditions, the protonated water clusters are the more abundant ion species, and the focus of this work lies on the investigation of the neutral particle dynamics.

Footnotes to Table 8: a In m^3 s^-1 and m^6 s^-1 for two- and three-body collisions, respectively. b Value is an upper limit in the reference. c Estimated value in the reference. d Estimated branching ratio. e Branching ratios taken from Sanders. 168 f Third body is Ar instead of He in the reference; the gas efficiency factor is assumed to be 1. g Third body is Ar instead of He in the reference; the gas efficiency factor is assumed to be 0.65.
This factor is calculated by dividing the reaction rate coefficients for He and Ar as background gases for the same reaction, measured by Zellner et al. 169 h Effective rate coefficients calculated from pressure-dependent rates at 1 atm and fitted by an Arrhenius expression in the temperature range 280-350 K. i Third body is N2 instead of He in the reference; the gas efficiency factor is assumed to be 0.43. This factor is calculated by dividing the reaction rate coefficients for He and N2 as background gases for the same reaction, measured by Hsu et al. 170 j The recommended rate coefficient in the reference is for N2 background gas instead of He; we apply a gas efficiency factor of 0.41 to the low-pressure-limit reaction rate coefficient to account for this. This factor is calculated by dividing the room temperature rate coefficient from the given reference for He background gas (measured by Forster et al. 171) by the recommended value (measured by Fulle et al. 172). k Third body is Ar instead of He in the reference; the gas efficiency factor is assumed to be 0.77. This factor is calculated by dividing the reaction rate coefficients for He and Ar as background gases for the same reaction, measured by Campbell and Thrush. 169 l Third body is N2 instead of He in the reference; the gas efficiency factor is assumed to be 0.61. This factor is calculated by dividing the reaction rate coefficients for He and N2 as background gases for the same reaction, measured by Lin and Leu. 173

B. List of equipment

The equipment used for the investigations in this work is listed in Table 9.
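As a worked illustration of the 'effective' two-body coefficients described in Appendix A, the sketch below multiplies an order-of-magnitude three-body coefficient (of the scale attributed to Kossyi above) by the He density at 1 atm and 315 K and applies a gas-efficiency factor; all numbers are illustrative.

```python
# Hedged sketch of the "effective" two-body rate construction used in
# Tables 6-8: a three-body coefficient is multiplied by the He density
# (ideal gas at 1 atm, 315 K), with a gas-efficiency factor applied when
# the reference used a collider other than He. Values are illustrative.
k_B   = 1.380649e-23                      # J/K
n_He  = 101325.0 / (k_B * 315.0)          # m^-3, ~2.3e25
k3    = 2e-37                             # m^6 s^-1, order of Kossyi's ion-ion value
f_eff = 0.65                              # e.g. an Ar -> He efficiency factor

k_eff = f_eff * k3 * n_He                 # effective two-body coefficient
print(f"n_He = {n_He:.2e} m^-3, k_eff = {k_eff:.1e} m^3 s^-1")
```

The result is of the order of 10^-12 m^3 s^-1, consistent with the estimate quoted in Appendix A.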
2018-09-16T08:12:25.310Z
2018-09-13T00:00:00.000
{ "year": 2018, "sha1": "3204348dad4a95e95119f8a8180905cd7e533154", "oa_license": "CCBY", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2018/cp/c8cp02473a", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "21b82c32736c907fa03b786823bb2538ca1beeb8", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
233595774
pes2o/s2orc
v3-fos-license
Navigating Religion Online: Jewish and Muslim Responses to Social Media : Although social media use among religious communities is proliferating, significant gaps remain in our understanding of how religious minorities perceive social media in relation to their faith and community. Thus, we ask how individuals use religion to frame moral attitudes around social media for Jews and Muslims. Specifically, how does social media shape understandings of community? We analyze 52 interviews with Jews and Muslims sampled from Houston and Chicago. We find that Jews and Muslims view social media as a “double-edged sword”—providing opportunities to expand intracommunal ties and access to religious resources, while also diluting the quality of ties and increasing exposure to religious distractions. These findings help us understand what it is about being a religious minority in the US that might shape how individuals engage with social media. Moreover, they suggest that social media may be transforming faith communities in less embodied ways, a topic that is of particular relevance in our pandemic times. Introduction The advent of social media technologies has made it more possible than ever for people to connect across different regions, demographics, and worldviews, in times of crisis and normalcy. The widespread impact of the global COVID-19 pandemic, in particular, has compelled many individuals and organizations to increase their online presence, forming alternative, less embodied ways of gathering. In the United States, despite the increased controversy over privacy concerns on social media platforms such as Facebook, media use remains constant (Perrin and Anderson 2019). Although some research has explored the interplay between religion and social media use generally (Campbell 2012b;Cheong 2017;Hjarvard 2016;McClure 2017), less research has examined patterns across religious traditions, especially traditions that have a significant presence within the United States but are still a numeric minority. About 20 percent of US adults share their faith identity online, and nearly half of US adults see religion shared regularly by others online (Pew Research Center 2014). Significant gaps remain, however, in our understanding of the interplay between religion and social media. Specifically, we have little understanding of how religion may shape individuals' moral attitudes about use of digital technology and gathering online or how social media may be transforming a sense of religious community for those religious minority groups who are often marginalized in society. For some religious communities, social media may present alternative ways to build or maintain community in the face of widespread societal marginalization. For religious minorities such as Jews and Muslims, in particular, social media may also be used as a safe haven distanced from the societal pressures of anti-Semitism and Islamophobia (Marizan 2016). Social media may also aggravate marginalization, as chat rooms and other platforms provide space for greater interreligious conflict and polarization (Alvstad 2010;Neumaier 2020). Yet, limited work interrogates the ways in which members of these minority religious traditions use digital media technologies and how they perceive social media in relation to their faith. Here, we address this gap by asking the following questions: To what extent do Jews and Muslims use religion to frame moral attitudes around social media? Moreover, how does social media shape understandings of community? 
We draw on data from a broader study, focusing particularly on interviews with 52 religious minorities (29 Jews and 23 Muslims), comprising both leaders and congregants, and sampled from religious organizations in Houston and Chicago. We find that Jews and Muslims perceive social media as neither exclusively providing social benefits nor exclusively evoking moral concerns. Instead, community perceptions of social media are complex. While social media allows for connectedness and greater access to religious resources, our respondents explain that it also presents new distractions and temptations that may test one's ability to stay on a moral path. Religion and Social Media In recent years, public discourse around the ways in which digital technology and social media is changing religion has amplified. Scholars have examined mediatization, or the long-term processes between media and social and cultural change (Hjarvard 2016;Lövheim 2011;Lövheim and Lynch 2011;Mishol-Shauli and Golan 2019;Rota and Kruger 2019). Mediatization not only examines media in relationship to long-term social and cultural processes, but it also provides theoretical explanations for the resurgence of religiously based social and cultural change through new digital media and the increased visibility of religion across many public institutions (Hjarvard 2016;Rota and Kruger 2019). As public religious figures, such as Pope Francis, join popular social media platforms such as Instagram (Stack 2016), there are greater opportunities for scholars to examine the effects social media has on both encouraging the adaptation of religion and challenging religious communities within an increasingly global society. McClure (2017, p. 481) draws on the idea of "tinkering" to describe the way digital media changes how people relate to religion. Previous studies used the concept of tinkering to refer to the way younger generations think about religion and how the advent of internet technologies causes people to recreate the way they view themselves (Berger et al. 1974;Turkle 1997;Wuthnow 2010). McClure (2017), however, extends this concept to examine the ways in which internet usage can potentially shift religious identities. In other words, the internet can be an apparatus through which religious individuals and groups may "tinker" with long-held conceptions of religion, providing the digital space for religious individuals to be more religiously inclusive (McClure 2017). The increase in social media use among religious people of diverse religious, racial, and ethnic backgrounds may foster increased interaction across religious and racial-ethnic boundaries and, in turn, decrease the likelihood of religious exclusivity (McClure 2017). There is also a push within scholarship on digital media and religion to distinguish between religion online and online religion. Frost and Youngblood (2014), for example, state that religion online describes the online functions of traditional religious communities. This includes the way that religious communities use digital media platforms to reiterate religious community, faith identity, and community outreach on behalf of communities that primarily exist within physical religious spaces (Campbell 2012c;Frost and Youngblood 2014). Online religion, however, refers to religious and spiritual practice that is done using the conduit of the internet (Frost and Youngblood 2014). 
These distinctions in the relationship between religion and digital media point to how online community spaces can serve as both a supplement and substitute to physical religious communities (Campbell 2012c). It is important to understand the relationship religious individuals and communities have to digital media because it may provide more insight about the benefits and challenges of maintaining an embodied religious community. The Virtual Religious Community According to Durkheim, the collectiveness of religious practice is what drives individual belief. The expression of religion on social media may shift the boundaries of the "collective", as internet technologies engage social networks in a way that usurps physical community and geographic boundaries (DiMaggio et al. 2001). Recent work on megachurches and virtual religious communities has already begun to complicate the notion of physical presence as a requirement to achieve a sense of community (Campbell 2012a;Campbell and Vitullo 2016;Hutchings 2017). Juergensmeyer et al. (2015, p. 43) argue that virtual mediums have begun to assist, if not replace, traditional "brick and mortar" religious institutions. The challenges of modernity and globalization have created new opportunities to expand the way religion is conceived and practiced within society (Juergensmeyer et al. 2015). This can be particularly true for some religious minorities because of the way the internet allows for underrepresented groups to create meaning from their marginalized societal position in the process of reinforcing religious identities. Warren (2018) describes the way that British Muslim women, in particular, use digital media to create content for other Muslim women through lifestyle media that specifically fosters a sense of "Muslim woman" identity formation. For these women, religious marginalization within British society leads them to respond with specifically curated platforms that emphasize their identity and sense of belonging. In this way Muslim women use social and digital media to forge new communities cross-nationally. Social media then can expand the definition of the religious collective and shift the way individuals within this collective are connected to each other, as well as how they relate to one another when they are meeting face to face. These examples also demonstrate the intersections between digital media, religion, and gender. For example, some scholars have written about how social media allows for greater communication internationally and therefore can redefine what it means to "do" feminism (Baer 2016;Salime 2014;Tsuria 2020;Zebracki and Luger 2018). This not only has implications for the shared experiences of individuals within a religious community but could also potentially undergird moral concerns about technologies that have facilitated these changes. While social media can expand connections and create religious identities within religious communities, social media can also expand interreligious contact between religious communities (Neumaier 2020). Social media can potentially connect individuals of diverse religious communities and increase interreligious dialogue. Scholars argue, however, about whether this potential for contact actually increases productive interreligious engagement or if it increases conflict between religious groups (Alvstad 2010;Neumaier 2020;Tsuria 2013). 
Despite the potential for internet technologies and social media to expand conceptions of community, some scholars caution against the conclusion that this potential for greater connectedness results in a more engaged public. Although social media provides individual users the opportunity to come together as a group in a "public" space, Kruse et al. (2018) argue that Millennial and Generation X users do not use social media in a way that engages the "public sphere". The public sphere is defined as the place where individuals meet openly to exchange ideas and ultimately fuel political change (Habermas 1991;Kruse et al. 2018). These generations of social media users avoid political and religious discourse on internet platforms (Kruse et al. 2018). This is due to factors such as increased online privacy concerns, fear of consequences from employers, increased engagement with others of similar political backgrounds, and the perception that social media is a space for positive interaction (Jenkins 2006;Kruse et al. 2018). These findings also allude to the idea that the internet as a public space restricts information access almost as much as it shares it. Internet algorithms on social media sites often promote certain clusters of social media users to see select sets of information based on their browser history (Jenkins 2006;Kruse et al. 2018;Loader and Mercea 2011;Trottier 2011). This suggests that the potential for a more expanded and engaged virtual religious community could be limited due to the constraints of being online. While religious communities may be able to virtually connect with individuals who are regularly involved in physical communities, they may struggle to connect with newer members or those who do not want to disclose their religion online. Additionally, individuals who lack the technological resources to connect with virtual religious platforms will be even further isolated. Social media has the potential to both promote and stifle engagement within religious communities. While social media can expand the broad reach of religious communities, the extent of its reach may depend on technology itself. Jews, Muslims, and Morality Online While the internet does not guarantee the expansion of the public sphere, the internet and social media provide the potential to build diverse social networks. This might create, however, ramifications for religious ethics and moral concerns. For religious communities, the internet's potential to steer believers away from the straight and narrow presents a challenge in fully embracing the platform. Although scholars have recently started looking at the effects of the internet on conceptions of community, religiosity, and adolescent spirituality (Bobkowski and Pearce 2011;Campbell 2012b;Juergensmeyer et al. 2015;Kruse et al. 2018), little research has specifically examined the effects of social media and internet technologies on religious morality, and even less research has explored such effects on minority religious communities. The minority religious communities we focus on-Jews and Muslims-may approach social media in different ways; namely how they engage with social media, how it complements their religion, or their moral attitudes toward it. Studies regarding Islamophobia on Facebook have found a general trend toward more hateful social media content over time (Auxier 2021;Awan 2016;Törnberg and Törnberg 2016). Similarly, Finkelstein et al. 
(2018) have documented the rise of anti-Semitism online, especially following high-profile political events. Although some research has explored the role of social media platforms as safe spaces for sexually marginalized youth (Lucero 2017), no such research documents online media as a similar platform for religious communities in the face of marginalization. Hitlin and Vaisey (2013) argue that sociology should begin to engage more in the study of morality, as religious communities each have their own moralities. Religious communities online create conceptions of morality aligned with their religious beliefs. For religious minorities, this challenge can be particularly salient. In her study of Muslim women's political activism in Indonesia, for example, Rinaldo (2013) argues that Muslim women approach the challenge of pornographic online content from their respective approaches to understanding Islam, as well as their differing historical and sociopolitical contexts. Muslim and Jewish moralities are not monolithic but are dependent on environment and religious frameworks. In a more recent study, Al-Rawi (2015) outlines the online responses of Arabic speakers on YouTube to controversial Danish cartoons of the Prophet Muhammed. Although Al-Rawi (2015) does not directly address the concept of religious morality online, his analysis of the responses demonstrates the distinct challenges that religious minority communities face as a marginalized group online. The Islamic religious moral injunction against depicting prophets presented a unique challenge for Muslim communities online and offline (Al-Rawi 2015). Muslim internet users utilized specific religious frameworks to virtually retaliate against what they considered to be immoral, while offline Muslim communities struggled to combat extreme interpretations and responses to this moral injunction by online and offline Muslims (Al-Rawi 2015). The presence of social media may provide specific challenges to religious individuals within individual communities, especially as they interact with competing moralities online. Research Questions The present study contributes to two core empirical gaps: first, it fills an enduring gap in the extant religion-science literature on religious minorities (see Vaidyanathan et al. 2016); and second, it addresses the gap in our understanding of the interplay between religion and social media in shaping understandings of community more broadly. This study asks two questions to fill current gaps in the religion-science literature. First, to understand the complicated relationship religious people may have with social media technologies, we ask the following: How does religion frame moral attitudes around social media for Jews and Muslims? Second, considering the pivotal and unique role the internet must play in connecting religious people: How does social media shape understandings of community? Coupled with more recent online data regarding how these communities responded to COVID-19, we find that social media is laden with both opportunities and risks. Jewish and Muslim communities use social media to build greater connectedness and bridge generational gaps, while also acknowledging potential moral distractions and temptations. Nonetheless, in the current era of social distancing, these communities find social media technologies essential for maintaining community. Results After examining the interviews with Muslim and Jewish respondents, we found a general overlap in our results. 
Responses tended to fit within one of three categories: perceived social benefits and drawbacks of social media, moral concerns, and shifting the boundaries of communities. Often, a respondent would touch upon all three throughout the interview, capturing the complicated nature of social media and religion. These themes surfaced for both Muslim and Jewish respondents, although at times they were expressed in ways unique to the dynamics of the particular community. Respondents saw potential benefits of social media through the way it could expand one's knowledge of religion. Both communities expressed concerns about social media's impact on the quality of community relationships and the influence on religious morality. Lastly, both Jewish and Muslim respondents believed that social media could expand the conception of community by increasing access to those who may not visit physical religious spaces. Perceived Benefits and Drawbacks of Social Media Respondents expressed that social media could lead to spiritual growth, if used in the right manner. This was the case both personally and organizationally. For individuals, technology and online media could be used to learn more about religion, as was the case with this Muslim respondent 1 : Actually, technology is helping [me] understand my faith better . . . As long as you are using technology for humane purposes, it's great. If you try to use technology to create harm to others, then there's a conflict. For this respondent, social media is a double-edged sword. So long as it is used for "humane purposes," such as education, technology and media can be helpful tools for growing deeper in one's faith. This respondent asserts that digital media can aid his understanding of his faith because of the ability he has to search religious texts online, listen to religious lectures, and connect with people within his faith community through different social media platforms. Outside of the individual religious experience, social media can also inform others. An illustration of this comes from another Muslim respondent 2 who noted that online media could be a tool to inform others about his religion: . . . [Y]ou can always go on YouTube and find . . . videos from Islamic speakers, kind of like invite people to Islam, talk a little bit about-even the scientific stuff you'll find on YouTube. This sentiment was echoed by numerous respondents, including both Jews and Muslims. One Jewish respondent 3 said that social media led people to be "connected in all kind of places . . . getting their Judaism . . . their insights, their questions answered, everywhere and anywhere." For these respondents, social media is viewed as a multifaceted tool, which allows for personal edification, proselytization, and advertisement. Social media allowed those interested in faith to learn more about religion through both resources available online and through social networks. As often as respondents praised social media for creating community, they also denounced it for diluting the quality of relationships. Both Muslim and Jewish respondents acknowledged that social media has completely changed interpersonal relationships and communication. People within religious communities may opt to communicate information via social media platforms, which may alienate some members. Not everyone within a single "community" may have a social media presence online. 
Additionally, respondents were critical of the way that community members may not always personally interact with each other and instead rely on virtual communication to build community. One Orthodox Jew in his 30s 4 said that social media both accelerates the rate of and lowers the quality of communication: . . . the irony of the lack of connectivity that we have, because of all of our lack of human interactions and relying on social media and texting, you know, I think has lowered the quality of relationships and general happiness in the world, in my opinion. This respondent does not disagree with the premise that social media allows for more communication; rather, he is arguing that that communication is simply worse than it once was. Social media, in his view, has replaced face-to-face interaction. His response signifies broader anxieties among some respondents who identified social media communication as an overall societal trend that devalued the significance of face-to-face interaction. A Muslim respondent 5 echoed that same argument when he said the following: . . . the time which people need to spend with each other, they spend on technology . . . it connects people, but it really does not create solid relationships. Because solid relationships require human interaction and human conversation and human dealings and that decreases and only technological connections, they increase. Although these responses are not specifically related to religion, the respondents are voicing disappointment in social media for its effects on human relationships more generally. This conveys tensions surrounding the importance physical interaction should and does play within religious communities. The benefits and drawbacks of social media are closely related. Social media creates the possibility for broadening the scope of community to become more inclusive to those demographics who may not have had the same access to physical religious spaces. The elderly, disabled, and frequent travelers may all gain some benefit from the adoption of a virtual community. In comparison, millennials may generally connect more easily and gain greater access to religious materials and resources online. Muslim and Jewish respondents alike saw social media platforms as a way of directly connecting congregants to online resources that could answer questions about their faith. In this way, Jewish and Muslim respondents "tinkered" with the idea of religious authority. Whereas traditionally faith leaders may be directly approached to answer questions, comment on polemical issues, or advise congregants, Muslim and Jewish congregants could seek answers via social media platforms such as YouTube or Facebook. For our respondents, this was mostly in addition to the leadership given within their communities. On a broader level, however, religious authorities online could have the potential to replace in-person religious authority in lieu of access to physical religious resources. Social media also increases the amount of choice one has in the types of religious leadership and perspectives they seek. Depending on the tradition, this has the potential to reform established mechanisms of obtaining religious knowledge and to reinterpret the meaning of religious authority. Moral Concerns about Social Media Other respondents talked about moral concerns with social media. More specifically, social media was said to expose individuals to morally objectionable content and distract them from religious experiences and expressions. 
For two Orthodox Jewish respondents, social media challenged conceptions of modesty. One Orthodox Jew 6 argued that social media exposed children to "immodesty" and "garbage": . . . [Children] don't need to surf the Internet . . . The Internet . . . uses tons of immodesty, and you never know what's going to pop up, and there's violence and things that they don't need to be exposed to. They shouldn't be. For this respondent, social media may pose problems when it exposes individuals to objectionable content. In a similar vein, another Orthodox Jew 7 said that social media makes people become egocentric and less modest: You know, we teach modesty. So, people normally think of modesty in terms of . . . dress, but modesty is also like, don't talk about yourself so much, and don't-there are things you share with people. But people get on Facebook now and they have two-hundred followers. ... I mean, there's nothing wrong with that, but how egocentric could you be? Do you really believe that two-hundred people want to hear about it? But this is our society. This respondent alludes to the broader issue of morality online. Because social media has the potential to expand one's social reach, it can also contradict the emphasis placed on humility or modesty within the Orthodox Jewish tradition. Social media becomes a virtual space in which everyone can call attention to themselves. This not only raises concerns about religious morality but may also complicate social relationships and issues of mental health. We of course are not arguing that these distinctions are based on the faith tradition per se, but from this excerpt and others in the interview, the respondent seems to be drawing on pieces of his faith tradition (in addition to other potential factors) to explain the morality of social media use. Three Muslim respondents explained that social media can be a distraction in direct competition with religious practices. One Muslim respondent 8 provided an example of social media-or technology more generally-disrupting otherwise meditative religious experiences. In some ways, I would say . . . [smart phones] limit our religious experience . . . because they're a constant source of distraction. In the mosque, always someone's phone is ringing despite, you know, many announcements. So, and if it rings or if it's in vibration mode, then still you are focused 'who is calling' you know? So, they do bring distraction, in terms of . . . devotion which religion requires in worships and meditations. Another Muslim respondent 9 echoed this sentiment while challenging the notion that social media can bring religious people closer to their religion: I think social media can distract from your religious practice . . . some people say that social media is great because it keeps me in touch with all this religious stuff and all these religious people that I would normally not have access to. And that can be great, but I think it's cheap. I think real religious enlightenment takes effort. It takes work. You have to engage in the [religious] texts. You have to challenge yourself mentally to do it. You can't just-I think social media can often reduce stuff like learning. These responses point to the way media technologies might impede religious worship for Muslims. Although social media itself may not restrict prayer, receiving multiple notifications from a social media app may quite literally distract one's focus from ritual prayer. 
The acknowledgement that multiple announcements are given to silence phones before communal prayer signifies the way that social media has the potential to change the dynamics of religious practice. Religious rituals that were traditionally practiced in particular ways may slowly shift to accommodate these changing dynamics. Finally, a few respondents expressed more unique concerns about social media. One respondent, a Muslim male in his 50s, 10 claimed that technology "can be misused and it can create the problem of authenticity of information." Another, this time a young Muslim woman, 11 expressed her fear of social media as a platform for bullying and hate, claiming that social media technologies do not foster dialogue but, instead, bigotry and one-sided rants. These respondents represent a spectrum of level of concern, and a wide variety of type of concern, related to social media technologies. Although some concerns-such as the fear that social media would expose children to objectionable content or distract from religious experiences-were explicitly religious in nature, others-such as the fear that social media can spread inauthentic information-were far from religious concerns. Changing Dynamics and Boundaries of Communities According to many of our respondents, social media had the power to shift community boundaries. By eliminating geographic barriers, social media provided a pathway toward transnational community building and connectedness. These global connections are important for religious minorities, who may be distanced geographically from their religious counterparts. As one Reform Jewish respondent 12 put it: [O]ne of the things that's great about social media is that it breaks down a lot of traditional boundaries, like geography, for example. People are able to connect with other people who live thousands of miles away in a way that they didn't used to before . . . For this respondent, social media allows connection with religious peers across the globe. In that way, social media has the power to form new relationships and build global religious ties. Social media has also changed the dynamics within existing religious communities. One Rabbi 13 made note of his recent decision to put his congregation's worship services on the internet: [W]e now stream all of our services, our worship services, which in some ways is great because we can reach more people. There are people who are in the hospital who have a chance to feel like they're part of it, or in the military, or at home incapacitated-lots of things. Members who go away to college or move out of town, I mean there's lots of ways that people can access it that they couldn't before. Such changes may come at a cost. Although online services, and social media more generally, allow for more connection, this connection may be less meaningful than it once was. For the Rabbi at hand, the "obvious flipside" is that "there is no sense of 'this is our community.'" Because physical boundaries have been nearly eliminated, he asks, "what does it mean to be 'us?'" Another Rabbi 14 agreed: In an age where we're always connected, studies show that we're more disconnected now than we've ever been. It's kind of a double-edged sword. We've talked a lot about "What does that mean?" Do we have an online community? . . . [H]ow do we approach this new online community? What does that mean for our congregation? An apparent tension arises between embodied and virtual community. 
Although a virtual community allows for a greater number of community members, some religious leaders believe that digital communication hinders the quality of community and of relationships. As religious rituals have historically occupied physical spaces, the gradual transition toward digital mediums may pose a unique challenge for the future of religion. In response to this tension, some religious communities construct new boundaries that make being a committed "member" more important to congregants. An example of this would be a Reform Jewish community 15 setting limits and password protections for certain online content, which helps answer the question, "What does it mean to be 'us?'" Another issue that arose with respect to shifts in community was the generational divide between "digital natives" and "digital immigrants". Several respondents, mostly older respondents, expressed anxiety that social media was a "young" phenomenon that they did not understand. Others questioned whether their congregation would be able to adapt to and "survive" social media. A final group of respondents expressed their willingness to adapt to social media in order to cater to younger populations. Of all those sentiments, the most candid were ones of confusion. One Muslim man in his late 50s 16 captured this quite succinctly: Right now . . . all the young people are very proficient with social media. The older generation have no clue what social media is and that is creating a big communication gap between the younger people and the older people. Another man in his late 50s, this time a Reform Jew 17 , said the following: You know, I'm an old guy . . . I don't know how all that stuff-I mean I know how it works; I understand the concepts-I don't use it . . . I understand on an intellectual level that . . . this is the preferred communication platform of today. Will I be adept at it? No. Responses such as these two often differed in their confidence to "understand" social media technologies. While some religious people view themselves as capable of adopting social media but unwilling to do so, others were shut out of social media use because of a lack of proficiency, creating what some congregants called a "big communication gap" between generations, with few opportunities to bridge that gap. Others painted social media technologies as just another generational shift. One Rabbi 18 told us that his Reform Jewish congregation was asking "Where are the millennials? Where is everybody going?" to which he responded as follows: I'm as worried as anybody else, but this isn't the first time we've had institutional shifts or sociological changes . . . we will have to listen and engage technology and social media and the debates of our times in a living thriving Reform Judaism. Social media, then, creates a generational divide that evokes different responses from different religious people. The questions of whether to engage, and whether it is possible to engage with a younger generation and bridge the digital divide, endure in the minds of many of our respondents. Discussion Our findings suggest social media provides neither benefits nor concerns alone but is instead individually negotiated by both Jewish and Muslim respondents in ways consonant with their traditions, with their understandings of community, and with their moral framing of social media.
At the individual level, desires to "keep up with the times" are negotiated in tandem with concerns of distractions and temptations on social media that respondents believe may test one's ability to stay on a righteous, moral path. These opportunities and challenges also arise at the community level, such that religious institutions actively seek ways to bridge the perceived generational divide and integrate digital technologies to become more accessible to an even wider community. Social media technologies have demonstrated the resilience of community. The virtual moral community has the capacity to bring together individuals from broader social circles, who all share similar understandings of faith, sacredness, and meaning. The human affinity for socialization and solidarity still exists without physical space. In many ways, the COVID-19 pandemic illustrates how the mediatization of religious communities is a necessary process to exist in an era of immense social and cultural change. Our respondents' congregations had already begun the shift towards establishing religious community online prior to the pandemic. Social media not only gives religious communities access to a global audience, but, in many ways, it validates religious communities' presence in an increasingly widespread, diverse, and digital society. This inclusivity, however, comes at the potential expense of traditional boundaries and identity of the religious community. Digital technologies and social media have afforded a new means to expand in-group ties, but not necessarily strengthen them. While there are new ways to reach community members over new mediums (such as live-streamed services or online small groups), those mediums carry the risk of diluting a sense of strong community. Our respondents engage in religion online (Frost and Youngblood 2014;Helland 2000). Even with-and perhaps because of-expanded virtual community, religious people still emphasize the need for embodied presence. Our respondents use digital media as an extension of their physical communities. Even for some subgroups, such as the elderly, who rely on online functions as a substitute for being in physical congregations, there is still a desire to think of online functionality within the context of the physical religious community. Additionally, larger communities have the potential to alienate local community members, as religious leaders struggle to meet the online demands of a broader group. Individual niche needs may become overlooked or not as readily addressed. Yet, the greater access provided by digital technologies such as social media affords religious communities the possibility of increasing participation in local communities while also creating a community for some in the absence of a physical space. Finally, although many Jews and Muslims identify moral concerns regarding social media, these concerns varied in the degree to which they were directly linked to their faith. For instance, while some expressed concerns about immoral content that may expose children to indecency or immodesty, others expressed more general concerns about the role social media plays in spreading false information or in facilitating cyberbullying. American opinions more broadly are consonant with religious minorities specifically, in showing that social media platforms do not adequately address bullying (Vogels 2021). These patterns demonstrate how, for our respondents, moral tensions were not always centered explicitly within the context of religion. 
Jewish and Muslim respondents may not always view the challenges of social media in terms of their faith but may situate their moral concerns in the context of broader social ethics. It is important, however, to note that there were also moral concerns that were unique to each community. While Orthodox Jewish respondents discussed concerns surrounding the way social media resulted in an aggrandizement of oneself that went against notions of modesty, Muslim respondents noted the ways social media notifications disrupted the ritual of prayer. Despite the myriad moral concerns expressed, embracing social media is often posited not only as an inevitability, but a necessity-both to sustain intergenerational connections within the synagogue or mosque and to remain a relevant feature of the evolving socio-cultural US landscape, more broadly. Our study provides a much-needed empirical addition to the sociological scholarship on religion and social media technologies. Our findings build on existing scholarship on religion and social media by demonstrating the way that religious people use social media and illustrating how social media technology can change the relationship religious individuals have with ritual aspects of religious communities. Social media may remove some barriers of access between religious leaders and congregants. This has implications for current research that examines the connections between religious practice online and in person (Campbell and Evolvi 2020;Campbell and Vitullo 2016). Our research also builds on literature regarding moral attitudes online. While religious minorities do share specific moral concerns based on their beliefs, most moral concerns are not based solely within religion but often with regards to family and children. This may point to the importance of more research that examines the connections between family, religion, and social media. Future work would benefit from greater analysis regarding the ways in which spatial context may also shape perspectives on religion and social media (e.g., urban center vs. rural area). Social media may lend well to some contexts over others, depending on a variety of factors including digital infrastructure, population density, and, for religious minorities, ethnic and religious diversity. This study demonstrates additionally that there are unique differences across generations that may have an impact on religious perceptions of social media. While social media may assist the elderly, who may be unable to physically access religious spaces, they may be unable to utilize certain social media platforms because of a lack of digital literacy. Data for this study were collected from 2011 to 2014. Social media has evolved rapidly since then. With the global COVID-19 pandemic, many of the communities we studied were compelled to increase and diversify digital and social media use. Future research should examine new social media technology and the change in use and functionality for religious communities during the global pandemic. Likewise, while this study focuses on comparing the perspectives of religious minorities across two distinct traditions (Judaism and Islam), we acknowledge that additional variation may arise across denominations. While some differences between Orthodox and Reform Jews emerged in this analysis, more work investigating Muslim Americans and other religious minorities would benefit from unearthing potential heterogeneity within these groups. 
Nonetheless, this study contributes to the burgeoning, yet still limited, literature on religion and social media by illuminating some of the ways religious minorities-Jews and Muslims specifically-understand and use social media. While extant literature on the topic tends to overlook minority religious traditions, this research brings to light both the distinct and similar ways that religious minorities view social media as a tool to practice religion, and their moral attitudes toward the platforms as well. Materials and Methods In order to address these research questions, we analyzed 52 in-depth interviews from a previous study focusing on how religious Americans perceive science and science-related technologies. This project involved mixed-methods data collected between 2011 and 2014. Study participants for the interview study were sampled through several non-random sampling strategies, including key-informant referrals, participant observations, and snowball sampling. As part of the full study, a total of 319 interviews were conducted with Evangelical Protestants, Mainline Protestants, Black Protestants, Jews, and Muslims. The larger data collection included: 40 interviews from three Reform Jewish synagogues, 24 interviews from two Orthodox Jewish synagogues, and 28 interviews from three Sunni Muslim mosques. These respondents included a range of perspectives, including Orthodox Jews, Reform Jews, and Sunni Muslims. Interviews were conducted in Houston and Chicago between 2011 and 2014. For the scope of this project, our analysis centered on the responses of 52 religious minorities (29 Jews and 23 Muslims), comprising both leaders and congregants, and sampled from religious organizations in Houston and Chicago. Respondents were specifically asked questions about the connection between religion and social media. Most Muslim communities in the United States are Sunni, and many of these communities are Arab and South Asian (Schmidt 2004). Our sample reflects this, with two Arab and 26 South Asian respondents. We understand that the category "Jewish" can refer to both a religious and ethnic category; therefore, for the purposes of this study, we specifically sought out the responses of practicing religious Jews. Interview participants were either recruited by congregational leaders or directly by researchers involved in the study. Other participants were recruited through snowball sampling. Researchers strove to have diversity with regards to gender, age, and socioeconomic status. Interviews were semi-structured and centered on questions related to the relationship between religion and science. For this article, we focused on the responses from questions related to perceptions of religion and social media. Out of the total number of Jewish and Muslim respondents, 52 respondents were asked the following: "Another topic we are interested in is social media. How do you see social media technologies in relation to your faith? For example, what kind of role do these technologies have on the sense of community within your congregation?" The interviews, on average, lasted for 66 min. Interviews were recorded, transcribed, and then coded for relevant themes using the software ATLAS.ti. After coding for overall themes, we looked for overlap and distinctions in the way Jews and Muslims talked about social media in relation to their specific community dynamics. 
Although themes mostly overlapped for both communities, we identified the way that certain themes were interpreted differently by some given the faith context. Author Contributions: E.H.E. acquired funding for the study, conceptualized the research study and developed the methodology for the study, contributed to analysis writing, review and editing. J.F. analyzed data and contributed to writing. C.R. contributed to writing of the paper. All authors have read and agreed to the published version of the manuscript. Funding: Data collection for this paper is part of the "Religious Understandings of Science" study, funded by the John Templeton Foundation, Grant #38817, Elaine Howard Ecklund, PI.
2021-05-04T22:05:20.036Z
2021-04-07T00:00:00.000
{ "year": 2021, "sha1": "eebac4c4e6184fc6531cbf1069e720ceae26d3e8", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2077-1444/12/4/258/pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "870e84982926d9027bc3c35a10cd2dbedc6b7a2e", "s2fieldsofstudy": [ "Sociology" ], "extfieldsofstudy": [ "Sociology" ] }
67760051
pes2o/s2orc
v3-fos-license
Determinants of Stock Market Co-Movements between Pakistan and Asian Emerging Economies This study analyzes the determinants of stock market co-movement between Pakistan and Asian emerging economies for the period 2001 to 2015. Augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) tests are applied to check co-integration between their stock markets. Results of this study reveal that there is long-term integration between the stock market of Pakistan and the stock markets of China, India, Indonesia, Korea, Malaysia and Thailand. This study reports the driving forces of the co-movement between the Pakistan and Asian emerging markets where co-integration is found. Results of the panel data reveal that there are significant underlying forces of integration between Pakistan and each Asian emerging stock market. The findings of this study have significant implications for policy makers in Pakistan who are designing strategies for macroeconomic harmonization and stability of the country's economy against financial shocks.
Introduction The consequences of the financial crisis (2007-2010) resulted in an unexpected and immediate deterioration of wealth. The after-effects of the global financial storm are still evident, and nearly all countries continue to suffer as a result. The World Bank has recently warned the G20 nations that an extremely critical and damaging economic meltdown could occur in the near future. As the latest literature proposes, examining the tendency of one country to be affected by a global financial storm helps prevent future crises. This has attracted the attention of academics and practitioners to the identification of integration and the determination of fundamentals that might describe how the stock markets of different countries are correlated with each other (Pretorius 2002). Such a rise in interest can be explained by several motives, the most important of which is the quest for the likely benefits of risk management, especially portfolio diversification (Forbes and Chinn 2004). There has been a substantial increase in economic and financial linkages among economies. The main causes of these strong linkages between global economies are technological advances, the removal of statutory controls, market liberalization and the growth of several emerging markets. These factors have contributed to more interlinked economies, which in turn are said to have given rise to a higher degree of stock market co-movement (Johnson and Soenen 2003). The integration of emerging stock markets remains an open question that has not been adequately addressed: modern research in stock market integration has not focused enough on the determinants of stock market co-movement, and very few studies have attempted to uncover them.
Stock Market Co-Movement Stock market co-movement refers to a tendency of two or more stock markets to move simultaneously, so that their price movements are positively correlated. National stock markets are considered to be integrated if securities with similar risk features are priced the same, even if the securities are traded in different stock markets (Marashdeh and Shrestha 2010). Put differently, stock market integration is a situation where financial securities have similar return patterns.
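As a rough illustration of how the integration notion above is typically tested with the ADF-based machinery mentioned in the abstract, here is a minimal Python sketch using statsmodels on synthetic index series. The series names and data are stand-ins constructed for the example, not the study's actual indices.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, coint

rng = np.random.default_rng(0)

# Two synthetic log-price series sharing a random-walk trend, so they are
# cointegrated by construction.
trend = np.cumsum(rng.normal(size=1000))
kse = trend + rng.normal(scale=0.5, size=1000)      # stand-in for the KSE-100
partner = trend + rng.normal(scale=0.5, size=1000)  # stand-in for a partner index

# Step 1: ADF unit-root tests on the levels (expected: non-stationary).
for name, series in [("kse", kse), ("partner", partner)]:
    stat, pvalue, *_ = adfuller(series)
    print(f"ADF {name}: stat={stat:.2f}, p-value={pvalue:.3f}")

# Step 2: Engle-Granger test on the pair; a small p-value rejects
# "no cointegration", i.e. it suggests long-run co-movement.
stat, pvalue, _ = coint(kse, partner)
print(f"Engle-Granger: stat={stat:.2f}, p-value={pvalue:.3f}")
```

In this framing, non-stationary price levels combined with a stationary linear combination of the two series indicate the long-run integration the paper refers to.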
Theoretical Framework The overarching theory of this research study is the theory of stock market co-movement. This theory primarily comprises two leading approaches (Forbes and Rigobon 2002): one is called 'theories of non-contingent crisis' (the fundamental approach) and the second is termed 'theories of contingent crisis' (the behavioral approach).
Theories of Non-Contingent Crisis This theory assumes that transmission mechanisms after a crisis are not significantly different from those before the crisis. According to the theory of non-contingent crisis, excessive co-movement of two different markets is due to the continuation of existing cross-market linkages. This is often termed the fundamental approach. Excessive co-movement, in this case, is the repercussion of strong bilateral trade, financial links and economic interdependence (Forbes and Rigobon 2002).
According to the fundamental approach, the latest literature classifies the fundamental driving forces of co-movement between stock markets as macroeconomic (bilateral trade, interest rate, inflation rate, industrial production growth, absolute changes in the bilateral exchange rate, volatility in the bilateral exchange rate) and financial (national equity market size, volatility across the world stock market).
International Capital Goods Trade Hypothesis The hypothesis states that the economies, as well as the stock markets, of two countries are anticipated to be highly integrated due to a strong bilateral trade relationship. The stronger the bilateral trade relationship, the higher the level of co-movement between stock markets will be. Thus, the extent of bilateral trade between two different countries is expected to explain co-movement between the stock markets of these countries.
Discounted Cash Flow Model (Convergence of Macroeconomic Variables) The discounted cash flow model states that similarities in the macroeconomic variables of two different countries will lead to similarities in the performance of their stock markets. Put differently, convergence of macroeconomic variables will lead to convergence in stock market performance. On the contrary, divergence of macroeconomic variables, in the form of larger differentials, will lead to divergence in stock market performance. For example, larger differentials in interest rates, growth rates and inflation rates will cause a lower level of co-movement.
Flow Oriented Hypothesis of Exchange Rate Determination The greater the volatility in the exchange rate, the greater the uncertainty in the economy and the weaker the integration of stock markets. Consequently, volatility in the exchange rate should show a negative association with the co-movement of stock markets. Similarly, a larger exchange rate change will bring more benefit to the country with the depreciating currency than to its partner, driving the performance of the two markets apart; therefore, the rate of change in the exchange rate should also show a negative association with stock market co-movement.
Research Gap The following research gaps are dealt with in this study:
(1) The integration of Asian emerging markets remains an open question that has not been adequately addressed (Dhanaraj et al. 2017).
(2) Modern research in stock market integration has not adequately studied the driving forces of stock market co-movement, especially in Asia and in Islamic emerging economies (Karim and Majid 2017).
(3) Existing research on stock market co-movement has not adequately focused on whether the extent of bilateral trade between two different countries can explain co-movement between the stock markets of these countries (Mobarek et al. 2016).
(4) Existing research on stock market co-movement has not adequately focused on whether similarity in macroeconomic variables between emerging economies results in co-movement between these countries (Mobarek et al. 2016).
(5) Current research on stock market co-movement has not adequately focused on whether volatility in the bilateral exchange rate and absolute changes in the bilateral exchange rate between emerging economies result in stock market co-movement between these countries (Mobarek et al. 2016).
Research Questions The following research questions are dealt with in this study:
(1) Are the financial markets co-integrated?
(2) Does similarity in macroeconomic variables between two different countries result in a higher level of stock market co-movement?
(3) Does a stronger bilateral trade relationship between two different countries result in a higher level of stock market co-movement?
(4) Do greater absolute changes in the bilateral exchange rate result in a higher level of stock market co-movement?
(5) Does greater volatility in the bilateral exchange rate result in a higher level of stock market co-movement?
(6) Does volatility in the world stock market result in a higher level of stock market co-movement?
Literature Review The existing literature presents numerous studies showing the existence of stock market interdependence, with the idea that stock markets have been showing tighter co-movements with each other. The academic literature attributes this increased level of stock market integration to the development of closer economic and financial linkages. However, it is obvious that very few studies have been conducted on the determinants of stock market co-movement, which makes it an interesting research area. Consequently, attention is directed towards research on the nature of the links that drive the interdependence of international stock markets. Can the extent of integration be explained by fundamental determinants? In other words, is the co-movement of stock markets really contagion, or can it be explained by economic or financial fundamentals?
Empirical Studies Showing the Cross Market Co-Movement Recently, results of different studies on stock market interdependence reveal a substantial degree of integration (Mobarek et al. 2016). Al Nasser and Hajilee (2016) studied stock market integration among emerging economies (Brazil, China, Mexico, Russia, and Turkey) and developed economies (U.S., U.K. and Germany). Results of the ARDL model revealed that short-term integration is found between the stock markets of emerging and developed countries. It was further reported that only the German stock market is integrated with Brazil, China, Mexico, Russia and Turkey. Bashiri and Zadeh (2014) examined the interdependence between the stock markets of Malaysia, Indonesia, the Philippines, Japan, Turkey and the U.S. by using monthly data for the period of 1995 to 2010. Results revealed that integration is found between U.S.
and Asian stock markets. It was further reported that the extent of integration between Japan and other Asian markets is low. In another study, conducted by Deltuvait (2015), the integration of the Baltic stock markets was examined by applying cross-correlation analysis and the Granger causality test. Results of the different techniques showed higher integration between the Lithuanian and Estonian stock markets. Rua and Luis (2009) examined co-movement in the time-frequency space by applying wavelet analysis. Results of the study emphasize the significance of taking into account the time- and frequency-varying properties of stock return co-movement when designing portfolios at the international level.
Empirical Evidences of Determinants of Stock Market Co-Movement Karim and Majid (2017) examined the fundamental driving forces of integration among 10 Islamic stock markets and found that, under pooled OLS, all variables are insignificant in describing the integration, while results of the panel data estimation found that only the GDP growth differential and the inflation differential are significant in explaining the co-movement between the stock markets of Islamic countries. Another study related to the driving forces of stock market co-movement was conducted by Mobarek et al. (2016), which reported that import dependence as well as the size differential of stock markets are significant in explaining the co-movement between the returns of stock markets. In addition to these determinants, the GDP growth rate differential and a time trend also have a significant relationship with the co-movement of stock markets. Guesmi and Teulon (2014) examined the underlying forces of stock market integration in Middle East countries (Turkey, Israel, Jordan and Egypt). The results revealed that domestic factors (inflation, rate of spread variation and exchange rate volatility) and global factors (global interest rate, world market returns, and world market dividend yields) are significant in explaining the integration between the stock markets of Middle East countries. Narayan et al. (2014) examined the integration of stock markets among emerging Asian economies and developed markets by applying EGARCH dynamic conditional correlations (DCC). Results of the study revealed strong correlations during the period of the financial crisis. Results further reported that price differentials, exchange rate risk, the global financial crisis, bilateral trade relations, an openness variable and domestic market characteristics are underlying forces of stock market integration. Contrary to the above, Kose et al. (2003) found that the results do not support the hypothesis that bilateral trade and stock market co-movement have a positive relationship. A research work by Bracker et al.
(1999) analyzed the driving forces of stock market co-movement and reported that numerous factors, such as bilateral trade and the size differential of two markets, are notably linked with the degree of co-movement. In addition, a time trend and a regional dummy variable were also significant in explaining the co-movement between the returns of two stock markets. Pretorius (2002) studied the driving forces of co-movement between stock markets and found significant results between bilateral trade and stock market co-movement. In addition, the industrial production growth differential was also significant in describing the co-movement between the stock markets of two countries. Another study relating bilateral trade and stock market co-movement was conducted by Johnson and Soenen (2003), who found that trade is significantly correlated with the degree of stock market co-movement over time. Bracker and Koch (1999) suggested that the extent of stock market co-movement depends on the extent of economic integration between two countries. Put differently, if two countries have strong economic integration, then they should have greater co-movement in their stock markets. Their results point out that the extent of stock market co-movement (measured as the magnitude of the correlation structure) is positively associated with the trend and volatility in the world market. In addition, the extent of stock market integration has a negative association with volatility in the bilateral exchange rate, real interest rate differentials, term structure differentials and the return on a world market index. Lin and Cheng (2008) analyzed the driving forces of stock market co-movement and reported that volatility in the stock market, interest rate differentials, and the rate of change in the exchange rate are significant in explaining the co-movement between the returns of stock markets.

Hypotheses

Hypothesis 1. There is co-movement between the stock markets of two countries.

Hypothesis 2. The greater (lesser) the divergence in interest rate differentials, the lower (higher) the co-movement between the stock markets will be.

Hypothesis 3. The greater (lesser) the divergence in inflation rate differentials, the lower (higher) the co-movement between the stock markets will be.

Hypothesis 4. The greater (lesser) the divergence in industrial production growth rate differentials, the lower (higher) the co-movement between the stock markets will be.

Hypothesis 5. The greater (lesser) the divergence in GDP growth rate differentials, the lower (higher) the co-movement between the stock markets will be.

Hypothesis 6. The greater (lesser) the absolute changes in the bilateral exchange rate, the lower (higher) the co-movement between the stock markets will be.

Hypothesis 7. The greater (lesser) the volatility in the bilateral exchange rate, the lower (higher) the co-movement between the stock markets will be.

Hypothesis 8. The greater the volatility in the world equity market, the greater the co-movement between the stock markets will be.

Hypothesis 9. The stronger the bilateral trade ties between the two countries, the higher the co-movement between the stock markets will be.

Sample

The purposive sample consists of Asian emerging economies according to the MSCI Global Investable Market Indices. Table 1 shows the indices used for the Asian stock markets.

Sample Period

The study period runs from 1 January 2001 to 31 December 2015 and includes the global financial crisis that started suddenly in U.S.
financial institutions in 2007 and escalated to other developed countries in the first six months of 2008.

Data Collection

Daily data for the different emerging stock indices were collected from DataStream. Daily data were selected to avoid the problem of inaccurate correlation estimates (Patra and Poshakwale 2006). To manage missing data, an Occam's razor technique was used in this study, filling in the last available daily price of the stock market (Majid et al. 2009; Hirayama and Tsutsui 1998). Secondary data for the determinants of stock market co-movement were collected from DataStream, the State Bank of Pakistan, the World Bank, the KSE website, and the Trading Economics website.

Correlation between Country i and j (Cor_ij)

The correlation between the daily rates of return of countries i and j during quarter t.

Bilateral Trade

The sum of the value of bilateral trade as a proportion of each country's total trade is used:

Trade_ijt = (X_ij + M_ij) / (X_i + M_i),

where X_ij and M_ij are the exports and imports from country i to country j, and X_i and M_i are the total exports and total imports of country i.

Convergence of Macroeconomic Variables

As the direction of causality is not involved in the case of correlation, it is important to use the absolute values of the interest rate differential, inflation rate differential, GDP growth rate differential and industrial production growth rate differential.

Absolute Change in the Bilateral Exchange Rate

The percentage change in the bilateral exchange rate during quarter t is calculated, suggesting a possible indirect negative relationship between absolute exchange rate changes and the co-movement of the two stock markets.

Volatility in the Bilateral Exchange Rate

The standard deviation of the daily bilateral exchange rate during quarter t is calculated as the measure of volatility in the bilateral exchange rate, suggesting that the greater the exchange rate volatility, the lower the co-movement between the stock markets will be.

World Market Volatility

The standard deviation of the daily world stock market index return in quarter t is calculated as the measure of volatility in the world market, suggesting a positive association between volatility in the world market and the co-movement of the stock markets.

Econometric Model for Fundamental Determinants of Stock Market Co-Movement

The final regression model incorporates all of the determinants mentioned earlier:

Cor_ijt = β0 + β1 |Int_it − Int_jt| + β2 |Inf_it − Inf_jt| + β3 |Ind_it − Ind_jt| + β4 |Gdp_it − Gdp_jt| + β5 Trade_ijt + β6 XRCH_ijt + β7 XRSD_ijt + β8 WMV_t + ε_ijt,

where Cor_ijt is the estimated correlation between daily returns in countries i and j during quarter t, and ε_ijt is the disturbance term, assumed to be iid N(0, σ²).

Results of Stock Market Integration (Long-Term Integration)

Descriptive statistics for stock returns, unit root tests and the co-integration tests for the 2001-2015 data period are given in the tables below.

Descriptive Statistics

Table 2 presents descriptive statistics for daily stock market returns. We note that all emerging stock markets posted positive average performance during the time period. We also examine the volatility of all countries' market returns. It has been reported in the literature that volatility is usually found in emerging stock markets. The stock index for Korea shows the maximum volatility among all the stock returns, while the stock index for Malaysia shows the minimum volatility.
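The quarterly variables defined above lend themselves to a compact construction with pandas. The following is a minimal sketch, not the authors' code: the price and exchange-rate series are simulated stand-ins, and the column names ('i', 'j', 'world') are illustrative.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
dates = pd.bdate_range('2001-01-01', '2015-12-31')      # business days, study period
# Simulated price levels for markets i and j and a world index
prices = pd.DataFrame(
    100 * np.exp(np.cumsum(rng.normal(0, 0.01, (len(dates), 3)), axis=0)),
    index=dates, columns=['i', 'j', 'world'])
# Simulated daily bilateral exchange rate
fx = pd.Series(np.exp(np.cumsum(rng.normal(0, 0.005, len(dates)))), index=dates)

returns = prices.pct_change().dropna()
q = returns.index.to_period('Q')

# Cor_ijt: correlation of daily returns of i and j within each quarter
cor_ij = returns.groupby(q).apply(lambda r: r['i'].corr(r['j']))
# WMV_t: standard deviation of daily world index returns within each quarter
wmv = returns['world'].groupby(q).std()

fx_q = fx.groupby(fx.index.to_period('Q'))
xrsd = fx_q.std()                                        # XRSD_ijt
xrch = fx_q.apply(lambda s: 100 * (s.iloc[-1] / s.iloc[0] - 1))  # XRCH_ijt (%)
```

The macroeconomic differentials would be built analogously by taking absolute differences of the two countries' quarterly series.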
Unit Root Test

Table 3 shows that all emerging stock market indices are integrated of order one, I(1). The results of the ADF test are confirmed by the Phillips-Perron test. As a result, we can move towards co-integration analysis, examining whether there is a long-run relationship between Pakistan and the selected emerging stock markets. It can be seen in Table 3 that all the variables are integrated of the same order, i.e., I(1), stationary in first differences.

Co-Integration Tests

Results of pairwise Johansen and Juselius co-integration tests are reported in Table 4. They reveal that there is long-term integration between the stock market of Pakistan and the stock markets of China, India, Indonesia, Korea, Malaysia, Thailand and Turkey, as the trace statistics exceed the critical values at the 5% level of significance.

Pakistan and China

Determinants of stock market co-movement between Pakistan and China are reported in Table 5. The coefficient estimate for volatility in the bilateral exchange rate between Pakistan and China is significant and negative. Such results are in line with Bracker et al. (1999) and Lin and Cheng (2008), who report that volatility in the bilateral exchange rate is significant and negatively associated with stock market co-movement. In other words, greater volatility in the bilateral exchange rate between Pakistan and China results in lower co-movement between their stock markets. The coefficient estimate for world market volatility between Pakistan and China is significant and positive. Such results are in line with the earlier research work of Bracker and Koch (1999). Put differently, world market volatility is positive and significant in explaining the correlation between the two stock markets. The coefficient estimates for the GDP growth rate, industrial production growth rate, inflation rate and interest rate differentials between Pakistan and China are insignificant and therefore have no effect on the returns of the two stock markets. The R-square of 0.308 indicates that 30.8% of the variation in the correlation coefficients is explained by the variables under study, which is an indication of a reasonably good fit.

Pakistan and India

Determinants of stock market co-movement between Pakistan and India are reported in Table 6. The coefficient estimate for bilateral trade between Pakistan and India is significant and positive, showing that bilateral trade is significant in explaining the correlation between Pakistan and India. Such results are in line with Bekaert and Harvey (1997), Bracker et al. (1999), Johnson and Soenen (2003), Pretorius (2002), and Walti (2005), who report that trade is significantly and positively associated with stock market co-movement. In addition, our results are in line with Forbes and Chinn (2004) and Lucey and Zhang (2010), who report that the extent of stock market co-movement depends on the strength of bilateral trade relationships. Put differently, stronger bilateral trade ties between Pakistan and India result in higher co-movement between their stock markets. The coefficient estimate for the interest rate differential between Pakistan and India is significant and negative. The results of this study are also in line with Bracker and Koch (1999), Bracker et al.
(1999), and Lin and Cheng (2008), who report that the interest rate differential of two countries is significantly and negatively associated with the co-movement of stock markets. Put differently, a smaller interest rate differential promotes the integration of the stock markets of two countries. The coefficient estimates for the GDP growth rate, industrial production growth rate, inflation rate, world market volatility, absolute change in the bilateral exchange rate and volatility in the bilateral exchange rate are insignificant and have no influence on the prices and returns of the stock markets of Pakistan and India. The R-square of 0.223 indicates that 22.3% of the variation in the correlation coefficients is explained by the variables under study, and also indicates the presence of other variables affecting the correlation.

Pakistan and Indonesia

Determinants of stock market co-movement between Pakistan and Indonesia are reported in Table 7. The coefficient estimate for world market volatility between Pakistan and Indonesia is significant and positive. Such results are consistent with previous research work such as Bracker and Koch (1999). In other words, world market volatility is positive and significant in explaining the correlation between the two stock markets. The R-square of 0.248 indicates that 24.8% of the variation in the correlation coefficients is explained by the variables under study, and also indicates the presence of other variables affecting the correlation. The coefficient estimates for bilateral trade, GDP growth rate, industrial production growth rate, inflation rate, interest rate, absolute change in the bilateral exchange rate and volatility in the bilateral exchange rate are insignificant and have no influence on the prices and returns of the stock markets of Pakistan and Indonesia.

Pakistan and Korea

Determinants of stock market co-movement between Pakistan and Korea are reported in Table 8. The coefficient estimate for the GDP growth rate differential between Pakistan and Korea is significant and negative, showing that a lower GDP growth rate difference between Pakistan and Korea results in higher co-movement. Such results are in line with earlier studies such as Johnson and Soenen (2003) and Mobarek et al. (2016). These researchers also report that the higher the GDP growth rate difference between market pairs, the lower the co-movement between the two stock markets will be. World market volatility is the second important determinant of stock market co-movement between Pakistan and Korea. The coefficient estimate for world market volatility between Pakistan and Korea is significant and positive. Such results are consistent with previous research work such as Bracker and Koch (1999). The other determinant of stock market integration between Pakistan and Korea is volatility in the bilateral exchange rate. The coefficient estimate for this volatility is significant and negative. Such results are in line with Bracker et al. (1999) and Lin and Cheng (2008), who report that volatility in the bilateral exchange rate between two countries is significant and negatively associated with stock market co-movement. Absolute change in the bilateral exchange rate is another determinant of co-movement between Pakistan and Korea. The coefficient estimate for the absolute change in the bilateral exchange rate between Pakistan and Korea is significant and negative. Such results are in line with Bracker et al.
(1999) and Lin and Cheng (2008). These researchers report that a larger change in the bilateral exchange rate offers more benefit to the country with the depreciating currency by influencing bilateral trade conditions and the returns of the two stock markets. The R-square of 0.316 indicates that 31.6% of the variation in the correlation coefficients is explained by the variables under study, which is evidence of a reasonably good fit.

Pakistan and Malaysia

Determinants of stock market co-movement between Pakistan and Malaysia are reported in Table 9. The coefficient estimate for volatility in the bilateral exchange rate between Pakistan and Malaysia is significant and negative. Such results are in line with Bracker et al. (1999) and Lin and Cheng (2008), who report that volatility in the bilateral exchange rate between two countries is significant and negatively associated with stock market co-movement. In addition, absolute change in the bilateral exchange rate between Pakistan and Malaysia is another determinant of stock market co-movement. The coefficient estimate for absolute change in the bilateral exchange rate between Pakistan and Malaysia is significant and negative. Such results are in line with Bracker et al. (1999) and Lin and Cheng (2008), who also report that a large change in the bilateral exchange rate benefits the country with the depreciating currency by influencing bilateral trade conditions and the stock prices of the two markets. The coefficient estimates for bilateral trade, GDP growth rate, industrial production growth rate, inflation rate, interest rate and world market volatility are insignificant and have no influence on the prices of the stock markets of Pakistan and Malaysia. The R-square of 0.432 indicates that 43.2% of the variation in the correlation coefficients is explained by the variables under study, which is an indication of a reasonably good fit.

Pakistan and Thailand

Determinants of stock market co-movement between Pakistan and Thailand are reported in Table 10. The coefficient estimate for the GDP growth rate differential between Pakistan and Thailand is significant and negative, showing that a lower GDP growth rate difference between Pakistan and Thailand results in higher co-movement. Such results are in line with the results of earlier studies.

Conclusions

Portfolio investors or speculators should diversify and pursue arbitrage opportunities. They need to consider this as a priority and learn about the different mechanisms that enable co-movements between two countries, so they can make appropriate investment decisions. Policy makers also need to understand the mechanisms and changes of co-movements in order to make appropriate policy decisions; not taking these differences into account could result in policies that make the situation worse. The authors conclude that if policy makers want to enhance the international integration of equity markets, they need to focus on eliminating the fundamental causes of obstacles to cross-border settlements.

An important practical implication of this research is that it has developed a proper understanding of how far the two Asian powers (China and India) are economically integrated with Pakistan. This study is very important in the Asian context because of the shifting of global economic power towards China and India.
Policy makers need to improve their fundamentals to ensure financial stability. Investigating the dynamics of international stock market integration gives policymakers important insight for designing strategies that sustain the stability of the country's economy against global shocks. For example, by studying the Asian financial crisis (1997-1998), Von Hagen and Ho (2007) emphasized that any systematic shock (e.g., a financial crisis) can spread from one economic system to another if two markets are integrated. Therefore, it is very important for policy makers to develop a proper understanding of the extent and strength of stock market integration in order to remain vigilant and undertake pre-emptive measures to prevent systematic shocks.

Table 1. Description of Indices.

Variable definitions for the regression model: Int_it = interest rate in country i during quarter t; Inf_it = inflation rate in country i during quarter t; Ind_it = industrial production growth rate in country i during quarter t; Gdp_it = GDP growth rate in country i during quarter t; XRCH_ijt = percent change in the bilateral exchange rate during quarter t; XRSD_ijt = standard deviation of the daily bilateral exchange rate during quarter t; WMV_t = standard deviation of the daily world stock market index return during quarter t.

Table 2. Descriptive Statistics for Asian Stock Market Returns.

Table 3. Results of Augmented Dickey-Fuller Test and Phillips-Perron Test. Note: * indicates that ADF statistics and PP statistics are significant for the first difference of all stock indices at the 5% level of significance.

Table 4. Results of Co-integration Tests.

Table 5. Determinants of Stock Market Co-movement between Pakistan and China.

Table 6. Determinants of Stock Market Co-movement between Pakistan and India.

Table 7. Determinants of Stock Market Co-movement between Pakistan and Indonesia.

Table 8. Determinants of Stock Market Co-movement between Pakistan and Korea.

Table 9. Determinants of Stock Market Co-movement between Pakistan and Malaysia.
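As an illustration of the estimation steps reported above (ADF unit-root tests, pairwise Johansen trace tests, and the quarterly OLS regression), the following hedged sketch uses statsmodels with simulated placeholder data; the variable names follow the table notes, but nothing here reproduces the study's actual series or estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(1)

# Unit-root check: ADF on levels and first differences of a simulated I(1) index
index_i = pd.Series(np.cumsum(rng.normal(size=500)))
print('ADF p-value, levels:', adfuller(index_i)[1])
print('ADF p-value, first differences:', adfuller(index_i.diff().dropna())[1])

# Pairwise Johansen trace test for two I(1) index series
index_j = index_i + rng.normal(size=500)   # cointegrated partner by construction
jres = coint_johansen(pd.concat([index_i, index_j], axis=1), det_order=0, k_ar_diff=1)
print('trace statistics:', jres.lr1)
print('5% critical values:', jres.cvt[:, 1])   # reject no-cointegration if trace > cv

# Quarterly OLS of correlations on the determinants (simulated quarterly data)
n = 60  # quarters, 2001-2015
X = pd.DataFrame({'Trade': rng.random(n),
                  'Int_diff': rng.random(n), 'Inf_diff': rng.random(n),
                  'Ind_diff': rng.random(n), 'Gdp_diff': rng.random(n),
                  'XRCH': rng.normal(size=n), 'XRSD': rng.random(n),
                  'WMV': rng.random(n)})
cor_ij = rng.uniform(-1, 1, n)
model = sm.OLS(cor_ij, sm.add_constant(X)).fit()
print(model.params)
```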
UK adults' implicit and explicit attitudes towards obesity: a cross-sectional study

Background: Anti-fat attitudes may lead to stigmatisation of, and lowered self-esteem in, obese people. Examining anti-fat attitudes is warranted given their association with anti-fat behaviours. Previous studies, mainly outside the UK, have demonstrated that anti-fat attitudes are increasing over time.

Methods: The study was cross-sectional with a sample of 2380 participants (74.2% female; aged 18-65 years). In an online survey, participants reported demographic characteristics and completed a range of implicit and explicit measures of obesity-related attitudes.

Results: Perceptions of obesity were more negative than reported previously. Main effects indicated more negative perceptions in males, younger respondents and more frequent exercisers. Attitudes about obesity differed in relation to weight category and, in general, were more positive in obese than non-obese respondents.

Conclusions: This is the first study to demonstrate anti-fat attitudes across different sections of the UK population. As such, it provides the first indication of the prevalence of anti-fat attitudes in UK adults. Interventions to modify these attitudes could target the specific groups of individuals with more negative perceptions identified here. Future work that increases understanding of both implicit and explicit attitudes towards obesity would be useful.

Introduction

Over the past 20 years the number of people classified as overweight and obese has increased [1]. Alongside the more obvious health and economic implications is a less obvious and potentially significant societal impact: the stigmatisation of obese people and the development of anti-fat attitudes. Indeed, stigmatisation and discrimination of obese people have increased in parallel with obesity prevalence [2,3]. As might be expected, those who report anti-fat attitudes have a greater likelihood of stigmatising obese people, which may occur in various settings [4-6]. It is suggested, for instance, that obese people are discriminated against in recruitment and promotion at work [5]. The increasing evidence for anti-fat attitudes presents considerable cause for concern, as stigmatisation can result in elevated depression, general psychiatric symptoms, body image disturbance and lower self-esteem in obese people [7]. Research evidence for the prevalence of anti-fat attitudes comes mainly from the US [3], which might be expected as 68.8% of adults there are classed as overweight or obese, 35.7% are obese and 6.3% are morbidly obese [8]. However, obesity prevalence in the UK has increased and closely matches that observed in the US: in 2010, 42% of males and 32% of females were overweight, and 26% of all adults were classified as obese in England [9]. To date, studies of anti-fat attitudes in the UK have drawn small samples from narrow sections of the population, for instance exercise professionals [6]. Furthermore, the increase in overweight and obesity prevalence may have led to a normalisation process where overweight and obesity are viewed as the norm, resulting in fewer anti-fat attitudes over time. Alternatively, greater exposure to overweight and obese people due to the increased prevalence may have led to greater anti-fat attitudes in the current UK population compared with previous years.
Current UK Government policy relating to obesity fails to acknowledge the impact of obesity stigma and discrimination [10], yet research has identified that obesity stigma might hinder efforts to reduce obesity. Thus a more comprehensive investigation of anti-fat attitudes within the UK population that examines the impact of specific demographic factors is both timely and relevant. Research examining anti-fat attitudes in the UK population could provide pivotal information for policy makers and practitioners by directing anti-fat attitude interventions. Research has identified that anti-fat attitudes differ in relation to individual characteristics including gender, age, exercise frequency and body mass index (BMI). In adult populations, respondents who are male, younger, exercise frequently and have a lower BMI are likely to report higher anti-fat attitudes [6,11-13]. Internalisation occurs largely at an implicit level. Thus, in addition to employing explicit measures of obesity attitudes, implicit measures may prove informative in this line of research and may negate limitations associated with explicit measures [14,15]. Contemporary media reports depicting anti-fat attitudes, obesity stigmatisation and discrimination in the UK have increased over time; however, there is a paucity of empirical evidence to support these suggestions. This lack of evidence, alongside previous research reporting detrimental links between anti-fat attitudes and behaviour and poorer body image and lowered self-esteem [7], suggests that examining obesity attitudes in the UK population is warranted. Thus, the present study aimed to examine anti-fat attitudes in a sample of UK adults (England, Ireland, Northern Ireland, Scotland, and Wales) and to compare attitudes in relation to gender, age, BMI and exercise frequency. UK adults were expected to report both implicit and explicit anti-fat attitudes (hypothesis 1). Higher levels of anti-fat attitudes were expected in males, younger participants, and more frequent exercisers (hypothesis 2).

Design and measures

This cross-sectional study was conducted online, with data collection carried out over the course of a year. Participants reported their gender, age, height, weight, exercise frequency (hours per week) and perceptions of the words 'fat' (Q1: How insulting do you believe the word "fat" is?) and 'obese' (Q2: How insulting do you believe the word "obese" is?). To respond to Q1 and Q2 they used a 0-10 response scale, anchored by 0 = not at all and 10 = extremely insulting. BMI was calculated as weight (kg)/height (m)², and individuals were assigned to the categories underweight (<18.5), normal weight (18.5-24.9), overweight (25-29.9) and obese (≥30; see Tables 1 and 2). Participants completed online versions of the Attitudes Towards Obese Persons and Beliefs About Obese Persons scales (ATOP, BAOP) [16], which measure positive and negative attitudes towards obese persons and the perceived controllability of obesity, respectively. Previous research [17] has suggested that those who perceive obesity to be controllable are more likely to hold anti-fat attitudes. ATOP scores range from 0-120 across 20 items, where low scores represent more negative attitudes. BAOP scores range from 0-48 across 8 items, where low scores represent a stronger belief that obesity is controllable.
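For concreteness, the BMI computation and category cut-offs described above can be expressed as a small function; the function name and example values below are illustrative only, not taken from the study.

```python
def bmi_category(weight_kg: float, height_m: float) -> tuple[float, str]:
    """Compute BMI = weight (kg) / height (m)^2 and assign the WHO category."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        cat = 'underweight'
    elif bmi < 25:
        cat = 'normal weight'
    elif bmi < 30:
        cat = 'overweight'
    else:
        cat = 'obese'
    return round(bmi, 1), cat

print(bmi_category(70, 1.65))   # -> (25.7, 'overweight')
```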
Participants also completed the Anti-Fat Attitudes Scale (AFAS) [18], which measures the magnitude of anti-fat attitudes via 5 items (scores range from 0 to 25, where higher scores represent stronger anti-fat attitudes); the 14-item F-Scale (Fat Phobia Scale short form) [19], which measures the degree to which individuals associate stereotypical characteristics with being fat (responses range from 0 to 5, where higher scores represent a perception that the characteristics are associated with being fat); and the Implicit Association Test (IAT) [20], which was the only implicit measure used. The stimuli for this computer-based measure of implicit attitudes towards fatness and thinness were previously used by Vartanian et al. [21]. The IAT does not directly measure attitudes but provides an indication of an implicit preference for fatness or thinness. Participants are presented with weight-related words and associate these as quickly as possible with different grouping categories, as detailed below. In line with Lane et al. [22], the seven-step procedure was employed, where participants respond to each of the following grouping categories: (1) pleasant or unpleasant; (2) fat or thin; (3) fat/pleasant or thin/unpleasant; (4) fat/pleasant or thin/unpleasant (stage 3 repeated); (5) thin or fat; (6) fat/unpleasant or thin/pleasant; and (7) fat/unpleasant or thin/pleasant (stage 6 repeated). Only steps 3, 4, 6 and 7 are used to measure implicit attitudes; the remaining steps were practice stimuli to engage participants with the process. Participants associated the words that appeared in the middle of the screen with the grouping category in the top left or top right of the screen using the E or I keys, respectively (e.g. for 'happy', pleasant is located in the top left and unpleasant in the top right). Response latency to different pairs of grouping categories is measured in milliseconds (msec). Positive scores represent a stronger anti-fat or pro-thin bias.

Procedures

Ethical approval was obtained from Aberystwyth University Research Ethics Committee, UK, and potential participants were approached via three means of recruitment: (i) letters and emails distributed to UK businesses, councils, universities and higher education institutions; (ii) social networking websites; and (iii) conferences. Recruitment attempts were strategic, to sample participants from as many counties across the UK as possible. Participants were asked to complete an online survey on attitudes and beliefs about obesity (as described above). Prior to completing all measures, participants were provided with information about the study and consented to participate. Measures were presented in counterbalanced order across participants to minimise order effects. No incentive was offered for participating in the study.

Analysis

Total or mean scores were calculated for all measures and used in the analyses, except for the IAT, where IAT D scores were calculated representing the difference between total response latency for the pairings fat/pleasant and thin/unpleasant versus fat/unpleasant and thin/pleasant. IAT D scores were calculated as recommended by Greenwald et al.
[23]: (1) delete responses greater than 10,000 msec; (2) delete participants' data where more than 10% of responses have a response latency of less than 300 msec; (3) compute the inclusive standard deviation for all responses in steps 3 and 6, and similarly in steps 4 and 7; (4) compute the mean latency for responses in steps 3, 4, 6 and 7; (5) compute the mean differences (mean step 6 - mean step 3, and mean step 7 - mean step 4); (6) divide each difference score by its associated inclusive standard deviation; and (7) calculate the D score as the equal-weight mean of the two resulting ratios (a code sketch of this procedure is given below). D-scores range from -1000 to 1000 msec, with positive scores indicative of anti-fat attitudes or a pro-thin bias. Mean scores reported in previous research that employed the explicit anti-fat attitude measures of this study were used to determine whether the current data are indicative of anti-fat attitudes, as no criteria exist for interpreting these scores. Thus, the mean scores reported previously that were claimed to demonstrate anti-fat attitudes were used for comparison as follows: 59.7 and 17.9 for the ATOP and BAOP, respectively [16]; 3.03 for the AFAS [18]; and 3.6 for the F-Scale [19]. Study hypotheses were examined by a series of Multivariate Analyses of Variance (MANOVA) conducted on the data for each independent variable (gender, age, BMI, exercise frequency) with all attitude measures as dependent variables (see Tables 1 and 2). Gender had two levels; age had four levels, as did BMI, in line with the World Health Organisation BMI categories [24]; exercise frequency had three levels in line with recommended UK physical activity guidelines, representing below recommended (0-3 hours per week), recommended (4-7 hours per week) and above recommended levels (8+ hours per week; see Tables 1 and 2). Follow-up one-way ANOVAs with Welch correction were employed for each independent variable to examine multivariate effects (except for gender, where an independent t-test was used). Post-hoc tests with Scheffé correction were used to follow up significant ANOVA effects. One-way ANOVAs were used to compare IAT D scores across the different levels of the independent variables. For all analyses α was set at .05.

Results

Tables 1 and 2 present the descriptive statistics for all variables in relation to demographic and behavioural groups. Cronbach's α was satisfactory for all scales: ATOP (.8), BAOP (.7), AFAS (.8), and F-Scale (.8). Table 3 reports significant overall univariate effects, with the results of follow-up tests to explore these discussed below. The IAT D score (D = 147.8) indicated that, as anticipated, there was an overall anti-fat or pro-thin bias in the sample. Similarly, based on the criteria identified above, mean scores on the explicit measures indicate negative attitudes towards obesity (see Table 1). The MANOVA demonstrated main effects in relation to gender (F(6, 2373) = 38.22, P < .01), age (F(18, 6707) = 6.59, P < .01), exercise frequency (F(12, 4.07) = 4.19, P < .01) and BMI (F(18, 6707) = 11.07, P < .01). All dependent variables contributed significantly (P < .05) to these main effects, with the exception of Q2 for exercise frequency, and ATOP, Q1 and Q2 for BMI. The results of the follow-up ANOVAs are detailed in Table 3, indicating significant age differences for all dependent variables.
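Returning to the Analysis section, the seven-step D-score procedure listed above can be sketched as follows. This is a minimal illustration with simulated latencies, not the study's scoring script; the function name and example inputs are assumptions, and the steps as listed yield an equal-weight mean of two standardised difference ratios.

```python
import numpy as np

def iat_d_score(step3, step4, step6, step7):
    """Greenwald et al. D score for one participant's latencies (msec) per IAT step."""
    steps = {k: np.asarray(v, dtype=float)
             for k, v in zip(('s3', 's4', 's6', 's7'), (step3, step4, step6, step7))}
    steps = {k: v[v <= 10000] for k, v in steps.items()}            # step (1)
    latencies = np.concatenate(list(steps.values()))
    if np.mean(latencies < 300) > 0.10:                             # step (2)
        return None                                                 # participant excluded
    sd36 = np.concatenate([steps['s3'], steps['s6']]).std(ddof=1)   # step (3)
    sd47 = np.concatenate([steps['s4'], steps['s7']]).std(ddof=1)
    ratio1 = (steps['s6'].mean() - steps['s3'].mean()) / sd36       # steps (4)-(6)
    ratio2 = (steps['s7'].mean() - steps['s4'].mean()) / sd47
    return (ratio1 + ratio2) / 2                                    # step (7)

rng = np.random.default_rng(3)
d = iat_d_score(rng.normal(700, 150, 40), rng.normal(720, 150, 40),
                rng.normal(900, 180, 40), rng.normal(880, 180, 40))
print('D =', round(d, 3))   # positive D -> anti-fat / pro-thin bias
```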
All variables except Q1 and Q2 differed in relation to exercise frequency, and significant differences were observed for all variables except ATOP, Q1 and Q2 in relation to BMI. Post-hoc test results are discussed below. The follow-up tests on the gender main effect indicated significant differences on all variables (see below).

Exercise frequency

Participants who exercise 8 or more hours a week reported more negative attitudes towards obese people (ATOP; P < .01) and greater anti-fat attitudes (AFAS; P < .01) than those who exercise 0-3 hours a week. They also reported greater anti-fat attitudes (AFAS) than those who exercise 4-7 hours a week (P < .01), who in turn reported greater anti-fat attitudes (AFAS; P < .01) and fat phobia (F-Scale; P < .01) than those who exercise 0-3 hours a week. Overall, the explicit results demonstrate that males, younger respondents and more frequent exercisers reported more negative perceptions of obesity.

BMI

Anti-fat attitudes (AFAS) were greater in underweight and overweight than in obese participants (P < .01), and in normal weight compared with overweight and obese participants (P < .01). Fat phobia (F-Scale) was lower in obese than in underweight, normal weight and overweight participants (P < .01), and in overweight compared with normal weight participants (P < .01). Normal weight participants believed that obesity is more controllable (BAOP) than underweight and obese participants (P < .01), as did overweight compared with obese participants (P < .01).

Correlations between explicit measures

A number of correlations were evident between the explicit measures (see Table 4). A positive correlation between ATOP and BAOP scores was observed, where more negative attitudes towards obese persons were associated with a stronger belief that obesity is controllable. A positive correlation between AFAS and F-Scale scores was also evident, where more anti-fat attitudes were associated with greater fat phobia. Other positive correlations were evident between BAOP and Q2, Q1 and Q2, and Q2 and F-Scale scores. This suggests that perceptions that the word obese is more insulting were associated with stronger beliefs that obesity is controllable, perceptions that the word fat is more insulting, and greater fat phobia. A negative correlation was evident between ATOP and AFAS scores, where more negative attitudes towards obese persons were associated with higher levels of anti-fat attitudes. A negative correlation was also observed between BAOP and AFAS scores, where stronger beliefs that obesity is controllable were associated with more anti-fat attitudes. BAOP and F-Scale scores were negatively correlated, indicating that stronger beliefs that obesity is controllable are associated with greater fat phobia. Finally, negative correlations were also found between scores on the ATOP and Q2, ATOP and F-Scale, and BAOP and Q2. This suggests that more negative attitudes towards obese persons are associated with perceptions that the word obese is more insulting and with greater fat phobia, and that stronger beliefs that obesity is controllable are associated with perceptions that the word obese is more insulting.

Discussion

The study examined anti-fat attitudes in a cross-section of UK adults (England, Ireland, Northern Ireland, Scotland, and Wales) and compared attitudes in relation to gender, age, BMI and exercise frequency. Implicit and explicit anti-fat attitudes were evident in our sample of UK adults, in line with hypothesis 1.
Anti-fat attitudes were higher in males, younger participants and more frequent exercisers, in support of hypothesis 2. Our findings illustrate that anti-fat attitudes appear to be widespread in UK adults. Given the stigmatisation that can result from pervasive anti-fat attitudes, interventions to modify them are required. Anti-fat attitudes appear to be robust and have proven difficult to modify [25]; however, some promise has been reported in altering beliefs about the causes of obesity [26]. The current study findings suggest that particular groups could be targeted with attitude modification interventions: males, younger individuals, and frequent exercisers. There are plausible explanations for greater anti-fat attitudes in all these groups: males tend to be less empathetic than females [27]; younger individuals have a heightened awareness of body appearance; and weight-related criticism occurs, and is perhaps accepted, in exercise environments [28-30]. These are all modifiable factors, suggesting that interventions targeting them may well be successful. Our descriptive data do not offer support for the explanations we propose; thus they require confirmation in future work before being used to underpin interventions to address negative perceptions of obesity in these groups. Nevertheless, given that anti-fat attitudes can lead to the stigmatisation of obese people [31], our findings highlight the need for anti-fat attitude interventions with UK adults. Our data reveal some interesting, although possibly contradictory, findings regarding perceptions of the controllability of obesity and of the descriptors fat and obese. Females and younger respondents tended to perceive obesity as more controllable, and the labels fat and obese as more insulting, than males and older respondents. For younger respondents this appears logical, as they reported more anti-fat attitudes and thus perceive labels associated with the condition as insulting. In addition, correlations from the current study, which support previous research [32], suggest that these anti-fat attitudes are likely to derive partially from the belief that obesity is controllable and that obese people are responsible, indeed to blame, for their condition. This interpretation does not explain the same pattern seen in females, as they did not report particularly strong anti-fat attitudes. Thus it may be that participants' perceptions of the labels used to describe obese people are not directly related to, or derived from, their evaluative perceptions of obese people themselves. The differences observed in the perceived controllability of obesity in relation to BMI are unclear. Obese respondents reported lower perceived controllability than normal weight and overweight respondents. This may serve as a self-protective mechanism in obese people to maintain self-esteem, as they apportion less self-blame for their obesity [17]. Or it may reflect their lived experience of being obese, as substantial evidence suggests a role for uncontrollable factors such as genetics in becoming obese [33], and obese people are aware of their own exercise and nutrition habits, unlike external others. Less clear is the finding that perceived controllability was lower in underweight compared with normal weight respondents.
Possibly underweight people recognise that weight at both extremes of the continuum is not always within the individual's control, if they themselves suffer from an eating disorder or are not underweight through choice. These explanations are of course highly speculative, given that our study did not seek to identify explanations for different obesity attitudes. While they make intuitive sense, future research is clearly warranted to examine these suggestions. Interestingly, despite the differences observed in the explicit measures, as discussed above, there was a null effect in relation to implicit attitudes when compared across the demographic factors. The current study findings demonstrate that UK adults have an implicit anti-fat or pro-thin bias, but no differences were observed for almost all of the demographic factors. It has previously been suggested that implicit measures counter some of the limitations of explicit measures, such as response bias and demand characteristics [14,15]. Thus, the differences observed in explicit responses may have arisen because participants moderated the reported extent of their anti-fat attitudes, whereas this was not observed via the implicit measure. The current study findings therefore highlight the need to examine both implicit and explicit attitudes towards obesity. Regardless, our findings underscore the importance, noted previously, of recognising the terms used to describe overweight and obesity [34]. Although medical professionals may use the term obese in an objective sense to describe a clinical condition, for our sample, and in particular younger, female respondents, it was perceived as an insulting label. This finding reinforces previous suggestions that the term obese should be avoided [35]. Moreover, the findings go beyond previous suggestions that the term 'obese' should be avoided with obese patients, as our study demonstrates that the term is perceived as insulting by participants across BMI categories. Recently, guidelines have been developed for using language more sensitively to avoid objectification of the individual and placing the condition before the person; for instance, the term 'diabetic' has been replaced by 'people with diabetes' [36]. Similar adjustments would seem appropriate when discussing obese people. Studies that compare perceptions of obese people when different labels are used to describe them would be simple to conduct but may produce illuminating findings to guide the somewhat complex issue of terminology use. Both fat phobia and anti-fat attitudes tended to be lower in overweight and obese respondents, in line with previous research [7]. We might therefore suggest that obesity stigmatisation comes from non-obese people, which may serve to further alienate obese people. Interestingly, though, regardless of BMI, when measured implicitly all respondents showed an anti-fat or pro-thin bias. Even if not expressed explicitly, it appears that obese people in our sample have internalised the same anti-fat or pro-thin attitudes as non-obese people. These findings present less apparent contradiction when we consider that self-reported attitudes are open to manipulation by the respondent, whether consciously or not [15]. In this instance, this manipulation could have occurred because obese people felt uncomfortable publicly denigrating themselves when explicitly reporting their attitudes towards obese people.
Similarly, females' implicit attitudes did not differ from males' in their anti-fat or pro-thin bias, but they explicitly reported less negative perceptions of obesity. This may reflect the greater social desirability tendency in females [37], or, as suggested above, greater empathy in females. Clearly, future studies are needed that replicate the implicit measure used here to tease out these individuals' 'true' responses. While the sampling strategy has limitations, the sample was successful in other ways. For example, it included respondents from every country across the UK, and this is the first study to obtain perceptions from a large group of participants from the UK. This was made possible by the online sampling method, which offers additional benefits; for example, internet-based studies provide an opportunity to achieve greater diversity in their samples [38]. These authors also argue that preconceptions about internet-based research are incorrect, for instance that the resultant sample will be younger, when the sample is often similar to that observed in traditional university-based samples. They also note that there is no evidence that the results of internet-based research are confounded by false data or repeat responders, nor do internet-based questionnaires diminish the psychometric properties reported for pen-and-paper versions, both common preconceptions. Furthermore, while the sampling method means the researcher is not present during data collection, some respondents did contact the researcher to address queries. We do, however, acknowledge that there are inherent biases to this approach, which may have resulted in the greater proportion of respondents who were white, middle class, more highly educated and of higher socio-economic status. The majority of respondents were female (74.2%), aged 18-25 years (57.7%) and students (47.2%). As might be expected with a volunteer, opportunistic sample, our sample composition does not exactly match that of the UK population [39]. Despite attempts to sample a varied population, a more strategic sampling approach ensuring that sub-groups were more equally represented might have strengthened the conclusions drawn from these data. Our sample composition does not match the demographic profile of the UK population [39], which limits the generalisability of the data. Nevertheless, our findings reflect those obtained with similar population subgroups, such as more anti-fat attitudes in males [10]. Thus it is likely that if a 'representative' sample were examined, findings would be similar to those obtained here. The reader should be aware of these limitations when considering our findings, but given the paucity of current evidence from UK samples, we offer an initial contribution to stimulate further study. It is also important to highlight that the implicit measure we employed represents both a strength and a limitation of our study. Its strength lies in offering a measure of what some authors have described as 'true' attitudes [15], but given the format of Implicit Association Tests, responses can only indicate an anti-fat or pro-thin bias and not an absolute level of anti-fat attitude. The current study is the first to comprehensively examine obesity attitudes in the UK population, demonstrating that UK adults report both implicit and explicit anti-fat attitudes. To date, obesity stigmatisation and discrimination are not included in UK health policy such as the Department of Health's Obesity and Healthy Eating policy [10].
Based on the current study findings, we suggest that obesity stigmatisation and discrimination be incorporated into the policy as an action. This appears particularly relevant given previous research suggesting that obesity stigmatisation and discrimination may be a barrier to engaging in some of the actions already present in the policy, such as physical activity [28,40].

Conclusions

The current study is the first to examine obesity attitudes across different sections of the UK population and, in doing so, to highlight population groups with higher anti-fat attitudes. The present results extend the growing body of literature indicating that rising levels of obesity present challenges not only at an individual but also at a societal level, as anti-fat attitudes appear pervasive, albeit not to the same degree, across the different groups we sampled. A novel contribution of this study is that it is the first large-scale examination of UK adults' perceptions of obesity and how these differ between population groups. This study is also the first to demonstrate that these perceptions of obesity are similar to those reported in other countries, predominantly the US. Consequently, our findings call for anti-fat attitude interventions in the UK. Education about the uncontrollable causes of obesity can reduce anti-fat attitudes [25], and given that our study demonstrates strong beliefs among UK adults that obesity is controllable, future research should consider this when designing interventions for certain population groups. Building on the present study findings, future research could examine the efficacy of interventions to modify both implicit and explicit anti-fat attitudes and identify explanations for differences in obesity perceptions in subgroups of the population.
CORRECTION OF POSTURAL DISORDERS OF MATURE AGE WOMEN IN THE PROCESS OF AQUA FITNESS TAKING INTO ACCOUNT THE BODY TYPE

The purpose of the article is to develop and test the effectiveness of an aqua fitness exercise program for posture improvement in women of the first period of mature age with different body types.

Materials and methods. The pedagogical experiment involved 46 women of the first period of mature age who had previously consented to participate in the study. The methods used include the analysis and generalization of scientific and methodological literature and Internet data, and methods of mathematical statistics, including Fisher's angular criterion, which allows samples to be compared by the distribution of a trait (Byshevets et al., 2019). The research included the assessment of the posture of the women based on the method of visual screening of posture with determination of a total score (Kashuba et al., 2016). A surgeon was also involved in assessing the posture of the women. In the factor analysis, the data of anthropometric studies, physical fitness assessments, and motor activity levels were analyzed.

Results. The distribution of women of the first period of mature age by types of postural disorders, and its changes under the influence of aqua fitness classes taking into account body type, were established. The study involved 46 women of the mentioned category: 73.9% of them were women of normosthenic body type, 15.2% of asthenic type, and 10.9% of hypersthenic type. The research established that women who were engaged in aqua fitness with regard to body type showed positive changes in posture. The proportion of women of asthenic body type with a normal posture increased by 28.6%, of normosthenic type by 20.6%, and of hypersthenic type by 20.0%. Changes also occurred in the level of the bio-geometric profile of the posture.

Conclusions. The study confirms the effectiveness of the use of aqua fitness in health-promoting classes to prevent and correct postural disorders. At the same time, researchers note the deterioration of the physical condition and the decline in the level of health of women of reproductive age caused by a sedentary lifestyle.

Study participants

The research involved 46 women of the first period of mature age, who gave their written consent to participate. The distribution of participants by age showed that the majority of women involved in the pedagogical experiment were aged 26 to 30 years, amounting to 58.7% (n = 27). There were 23.9% (n = 11) participants up to 25 years of age and 17.4% (n = 8) older than 30. Of the participants, 15.2% had an asthenic (ectomorphic), 73.9% a regular (mesomorphic), and 10.9% a hypersthenic (endomorphic) body type. At the beginning of the study, 23.9% of participants had normal posture, 23.9% had a round back, 19.6% had a round-concave back, and 32.6% had a scoliotic posture. Considering the mathematical rules of setting up and conducting a pedagogical experiment (Borisova, 2019), the sample of women of the first period of mature age engaged in aqua fitness was drawn in a simple random way, i.e. each of the subjects had the same probability of participating in the experiment. This selection of subjects ensured the representativeness of the sample, taking into account the nature of social objects (Tulebaeva, 2010).
On the other hand, the further distribution of women by body type, taking into account the natural variability that characterizes this contingent of women (Adler, 2016), confirmed that the proportions of women by body type generally correspond to scientists' views on the body types of women of the first period of mature age. All women had previously consented to participate in the study. The pedagogical experiment was conducted in the fitness club "Yunist" in Kyiv.

Research methodology

To achieve the goal of the research, the following methods were applied: analysis and generalization of data from scientific and methodological literature and the Internet, anthropometry, pedagogical methods of research, the method of visual screening of posture with determination of a total score (Kashuba, Bondar, Goncharova, & Nosova, 2016), and estimation of the level of motor activity by the method of the Framingham study (Krutsevich, Vorobyov, & Bezverkhnya, 2011). The research involved three stages of implementation. At the first stage, we analyzed the scientific and methodological literature in the field of physical fitness of women of the first period of mature age; we also studied the level of physical development, physical fitness, and motor activity of 46 women of the first period of mature age. Anthropometric studies included the determination of body weight (kg), longitudinal body sizes (cm), circumferential body sizes (cm), body diameters (cm), and the size of skin and fat folds (mm) by conventional methods, with subsequent determination of body type (Martirosov, Nikolaev, & Rudnev, 2006). The posture of the women was assessed using an improved map of express control of the bio-geometric profile of posture (Kashuba et al., 2016). The distribution of the subjects by the levels of the bio-geometric profile of posture was carried out taking into account 11 indicators in the frontal (5) and sagittal (6) planes. The evaluation of each indicator was performed on a three-point scale by comparing the individual posture in the photo with standard graphical variants. A score of "1" corresponded to the grade "bad", "2" to "satisfactory", and "3" to "good". The maximum sum of points, corresponding to a normal posture, is 33. The study of the level of motor activity of the women involved recording the time spent performing motor activity of different levels (high, medium, low, sedentary, basic) (Krutsevich et al., 2011). The assessment of physical fitness of the women included determination of the static force endurance of muscles while maintaining static postures, and of spine flexibility and hamstring elasticity (Earl & Baechle, 2012). Also at the second stage of the research, we performed the analysis of the obtained data using the methods of mathematical statistics with the determination of the most informative indicators. The analysis revealed the significant influence of indicators of body composition and the condition of the musculoskeletal system in the factor structure of indicators of physical development, physical activity and physical fitness of women of the first period of mature age. These were defined as the criteria for the effectiveness of aqua fitness activities. At this stage, a program of aqua fitness for women of the first period of mature age was developed.
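A small illustrative sketch of the express posture screening score described above: 11 indicators (5 frontal-plane, 6 sagittal-plane), each rated 1-3 and summed to a maximum of 33 points. The ratings below are hypothetical values, not study data.

```python
# Hypothetical ratings for one participant (1 = bad, 2 = satisfactory, 3 = good)
frontal = [3, 2, 3, 3, 2]        # 5 frontal-plane indicators
sagittal = [3, 3, 2, 3, 2, 3]    # 6 sagittal-plane indicators

assert len(frontal) == 5 and len(sagittal) == 6
assert all(1 <= s <= 3 for s in frontal + sagittal)

total = sum(frontal) + sum(sagittal)   # 33 corresponds to normal posture
print(f'bio-geometric profile score: {total}/33')
```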
At the third stage of the research, the influence of the developed training program on the posture of women of the first period of mature age with different body types was determined. The study involved 46 women of this age. The duration of the training program was nine months.

Statistical analysis

In the course of the research, methods of mathematical statistics were applied, including Fisher's angular criterion, which allows samples to be compared by the distribution of a trait (Byshevets, Denysova, Shynkaruk, Serhiyenko, Usychenko, Stepanenko, & Syvash, 2019). If the conditions of application of Fisher's angular criterion were not met when comparing the shares of women by level of the bio-geometric profile of posture before and after the pedagogical experiment, Fisher's exact test was used. To compare the overall assessment of the bio-geometric profile of posture of the subjects before and after the experiment, Student's t-test for dependent samples was used when, according to the Shapiro-Wilk test, the initial data followed the normal distribution law, and the Wilcoxon test was used otherwise. In the factor analysis, carried out by the method of principal components with varimax rotation, the data of anthropometric studies, physical fitness assessments, and motor activity levels were analyzed. The total number of indicators for factor analysis was 56 variables, which allowed six factors (60.03% of the total variance) to be identified in the structure of the studied indicators. Data processing was performed using MS Excel (Microsoft, USA) and Statistica 8.0 (StatSoft, USA).

Results

The results of the factor analysis of indicators of physical development, physical fitness and physical activity allowed a factor structure of six factors to be defined. The first factor is "Physical fitness" with 20.32% of the total variance; the second factor is "Fat component of body composition" with 12.56%; the third factor is "Chest development" with 8.34%; the fourth factor is "Posture condition" with 7.80%; and the fifth and sixth factors are, respectively, "Posture type" (5.56%) and "Body type" (5.46%). These factors include the indicators that are most informative with respect to the physical development, physical activity, and physical fitness of the women. The factor analysis was further refined by differentiating the factor structure depending on body type (Table 1). Analyzing the results of the factor analysis, the factor structure of women of the first period of mature age with regular body type was detailed as the most representative of the studied contingent. It was established that the structure of physical development, motor activity and motor abilities of women of the first period of mature age of regular type includes 5 factors, which describe 58.77% of the total variance.
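The factor-analysis pipeline described above (principal components with varimax rotation over 56 standardised indicators) can be sketched with the factor_analyzer package; the data here are simulated placeholders, so the loadings and variance shares will not match the reported values.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(2)
X = pd.DataFrame(rng.normal(size=(46, 56)))   # 46 women x 56 indicators (simulated)
X = (X - X.mean()) / X.std(ddof=0)            # standardise each indicator

fa = FactorAnalyzer(n_factors=6, rotation='varimax', method='principal')
fa.fit(X)

variance, proportion, cumulative = fa.get_factor_variance()
print('share of total variance per factor:', np.round(proportion, 3))
print('cumulative variance explained:', np.round(cumulative, 3))

# Loadings with |r| >= 0.7 correspond to the indicators flagged in the text
loadings = pd.DataFrame(fa.loadings_, index=X.columns)
print((loadings.abs() >= 0.7).sum())
```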
The general factor "Physical fitness and basic motor activity" with a share in the total variance of 23.37% was made up of flexibility indicators, such as the magnitude of maximum bending (r = 0.86; p < 0.05) and the value of forward bending with folded leg (r = 0.78; p < 0.05), which have an inverse correlation with the duration of holding the shoulders up lying on the abdomen (r = -0.81; p < 0.05), holding the legs up lying on the abdomen (r = -0.83; p < 0.05), holding the torso lying on the back (r = -0.81; p < 0.05), holding the torso lying on the back with knees bent at an angle of 90° (r = -0.85; p < 0.05), lifting the shoulders lying on the abdomen with arms near the chest (r = -0.81; p < 0.05), equilibrium (r = -0.89; p < 0.05) and the basic physical activity index (r = -0.73; p < 0.05), as well as sitting body length (r = -0.80; p < 0.05) and shoulder width (r = -0.78; p < 0.05). The unipolar factor two "Skin and fat folds" with 14.47% of the total variance includes the size of skin and fat folds on the shoulders' back (r = 0.78; p < 0.05), on the thighs (r = 0.77; p < 0.05) and on the back under the shoulder blade (r = 0.77; p < 0.05), and indicates an increase in the size of the folds on the thighs and on the back as they increase on the shoulders. The factor "Longitudinal body sizes" includes body length (r = 0.71; p < 0.05), ridge body diameter (r = 0.71; p < 0.05) and chest circumference with maximal inhalation (r = 0.74; p < 0.05) and explains 8.34% of the total variance. The fourth factor "Body diameters" with 6.51% includes the transverse diameter of the distal part of the thigh (r = 0.77; p < 0.05) and of the shin (r = 0.75; p < 0.05), indicating an increase in one indicator along with an increase in the other. Factor five "Posture condition" includes independent indicators of the state of the bio-geometric profile of women in the frontal (r = 0.77; p < 0.05) and sagittal (r = 0.70; p < 0.05) planes and explains 6.09% of the total variance. In the defined factor structure of women of the first period of a mature age with different body types, there are general patterns of separation of informative indicators. Among the studied groups of women of asthenic and hypersthenic body types, the first two factors separate the indicators of physical development as those with the greatest impact. Instead, among women of regular body type, physical fitness and physical development have a leading position. Factors that make a significant contribution to the overall sample variance include indicators of women's posture. The study of the factor structure of indicators of physical development, motor activity and physical fitness of women of the first period of a mature age revealed the leading directions of intended influence on the body of women. The observed patterns were taken into account when developing the structure of the means of influence in the process of health-improving activities for women of the first period of a mature age. The shift of the accents of the content of aqua fitness exercise towards the means of prevention and correction of postural disorders is conditioned by the motor regime inherent to modern women of a mature age, their working conditions and peculiarities. The studied contingent is characterized by a large number of women with manifestations of functional disorders of the musculoskeletal system, especially among the representatives of the asthenic body type.
The training program included a combination of distance swimming, aqua fitness, and performing special tasks, conducting theoretical lessons on a healthy lifestyle, and enhancing motivation. The training program was implemented over a period of nine months. The structure of the training program included three sessions per week, lasting forty-five minutes, with variant modeling of physical activity and the ratio of physical education means. Methodological features of the classes were determined for women with different types of posture disorders and included different ratios of aqua fitness and distance swimming, and exercising in different conditions (near the pool side, without fixation, in deep and shallow water). The key provisions of the training program included taking into account the functional imperfection of the trunk muscles as a whole and the violation of the symmetry of the tone of the muscles of the individual groups. For example, the women with round shoulders and round back are characterized by weakened muscles of the torso and a slightly reduced tone of the muscles of the shoulder girdle. The women with round concave back have weak gluteal and back thigh muscles, and general functional imperfection of the muscles of the abdomen. The women with flat back are characterized by low muscle tone of the back and shoulder girdle. The women with functional disorders of the posture in the frontal plane have unequal muscle tone on the right and left side of the torso (Swede, 2018). The structure of motor activities for women of the first period of a mature age in the aquatic environment was determined in regard to the type of postural disorders and in accordance with the peculiarities of the location of individual body bio-links under the influence of functional disorders of the musculoskeletal system. For women with round shoulders and round back, backstroke was used; for women with flat and concave backs, crawl and butterfly strokes. For the contingent of women with scoliotic posture, the content of the classes was supplemented by swimming with symmetrical motor actions (breaststroke, butterfly stroke). In the case of excessive lumbar lordosis in women of the first period of a mature age, a swimming board was placed under the abdomen. Aqua fitness exercises were implemented in the training program according to the type of postural disorders and individual characteristics of women's physical development, and were determined by the body type (Table 2). Namely, within one session, the means directed at the development of no more than two motor qualities were used. According to these principles, the following combinations were proposed: exercises for the development of strength and flexibility; coordination and power exercises; exercises for the development of endurance only; power and speed-power exercises. In the course of the research, we examined the posture of women of the first period of a mature age under the influence of aqua fitness, taking into account the body type. Thus, according to the obtained data, before the experiment, 23.9% of the participants had normal posture, 23.9% had round back, 19.6% had round-concave back, and scoliotic posture prevailed with the share of 32.6%. After the experiment, the distribution changed as follows: normal posture, 45.7%; round back, 19.6%; round-concave back, 15.2%; scoliotic posture, 19.6%.
Additional calculations using Fisher's angular test made it possible to prove that the proportion of women with normal posture under the influence of aqua fitness training, taking into account the body type, increased statistically significantly (p < 0.05). A more detailed study of medical card data allowed determining the peculiarities of posture of women of the first period of mature age, depending on their body type, before and after the experiment, and evaluating the impact of aqua fitness on the posture of the subjects. The research has established that both before and after the introduction of the authors' aqua fitness program, the type of posture without violations prevailed among women of regular type. Before the experiment, their share was 26.5% and after the experiment 47.1%. It has been found that by the start of the experiment, asthenic-type women were 6.5% (p > 0.05) less likely to have a normal posture type, and hypersthenic-type women 12.2% (p > 0.05) less likely, than regular-type women were. However, the highest proportion of women with round back was found among women with hypersthenic type: it was 40%, and the smallest proportion, which was 14.3%, was recorded among women of asthenic type. These differences showed a trend but were not statistically significant. In addition, among women of hypersthenic type, the largest proportion, amounting to 42.9%, was characterized by scoliotic posture. At the same time, among women of asthenic type this disorder was found in 20%, and among women of regular type in 32.4%, and the proportions of women with this type of disorder did not differ statistically significantly (p > 0.05). The study has shown the positive impact of aqua fitness on the posture of women of the first period of a mature age, regardless of the type of body. Thus, the proportion of asthenic-type women with normal posture increased by 28.6% (p > 0.05), regular by 20.6% (p > 0.05), and hypersthenic by 20.0% (p > 0.05). Therefore, taking into account the body type, aqua fitness has the maximum impact on women of the first period of a mature age of asthenic type. In addition, the study has shown that after the introduction of the authors' program among the participants of the experiment, the proportion of hypersthenic-type women with round back, regular-type women with scoliotic posture, as well as asthenic-type women with round back and scoliotic posture decreased the most. Their shares were 20.0%, 14.8% and 14.3% respectively (p > 0.05). However, the proportion of women of asthenic and hypersthenic type with round-concave back did not change. The obtained data make it possible to testify to the positive dynamics that occurred in the posture of women of the first period of a mature age under the influence of the authors' aqua fitness program. The further analysis of changes in posture of women of the first period of a mature age under the influence of the proposed means of aqua fitness was carried out through the distribution of the studied contingent by levels of posture and direct scoring of visual screening of the posture by the method (Kashuba et al., 2016). Statistical processing of the experimental material allowed establishing the following changes: asthenic body type • assessment of the level of the bio-geometric profile of the posture in the frontal plane statistically significantly (p < 0.05) increased by 26.9% from (7.43; 1.62) points to (9.43; 2.07) points;
Table 2. Distribution of aqua fitness exercises according to the body type of women of the first period of a mature age. • an increase in the sagittal plane by 10.5% from (8.14; 2.79) points to (9.00; 2.71) points; however, no statistically significant (p > 0.05) differences were detected; • overall assessment of the posture bio-geometric profile statistically significantly (p < 0.05) increased from (15.57; 4.39) points to (18.43; 4.65) points; regular body type • with respect to the frontal plane, the indicator of the level of the posture bio-geometric profile after the experiment was statistically significantly (p < 0.05) higher by 24.7% than at the beginning ((10.09; 1.90) against (8.09; 1.58) points); • a statistically significant (p < 0.05) increase of 10.4% from (9.09; 2.59) points to (10.03; 2.34) points was registered in the sagittal plane; • a statistically significant (p < 0.05) increase from (17.18; 4.06) points to (20.12; 4.06) points in the overall assessment of the bio-geometric profile of the posture was recorded; hypersthenic body type • the indicator of the level of the bio-geometric profile of the posture in the frontal plane statistically significantly (p < 0.05) increased (by 27.0% from (7.40; 1.52) points to (9.40; 2.07) points); • notwithstanding the absence of statistically significant (p > 0.05) differences in the index in the sagittal plane, there was a positive tendency for its growth, where the increase was 17.1%, and the indicator increased from (8.20; 3.27) points to (9.60; 2.07) points; • overall assessment of the posture bio-geometric profile statistically significantly (p < 0.05) increased from (15.60; 4.56) points to (19.00; 4.06) points. The research has established that the proportion of women with average level increased by 5.88% and the proportion of women characterized by a high level of the bio-geometric profile of posture increased by 17.65% among women of regular body type. This increase was due to the changes among women with a low posture bio-geometric profile, the proportion of whom decreased by 23.53%. At the same time, among the surveyed of asthenic body type, there was an increase by 28.57% in the proportion of women with a high level of the posture bio-geometric profile due to a decrease by 14.29% in the proportion of women with average and low levels in each case (Fig. 1). During the experiment, asthenic-type women underwent changes in the distribution of the bio-geometric profile of the posture: • the proportion of women with low and average levels decreased by 14.29% each; • the proportion of women with high levels increased by 28.57%. For women of regular body type, such changes in the distribution of the bio-geometric profile of the posture were characteristic: • the proportion of women with low level decreased by 23.53%; • the proportion of women with average level increased by 5.88%; • the proportion of women with high level increased by 17.65%. The following positive changes in the distribution of the bio-geometric profile of the posture occurred among women of the hypersthenic body type: • the proportion of women with low level decreased by 40%; • the proportion of women with average level increased by 20%; • the proportion of women with high level increased by 20%.
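As a quick arithmetic check, the snippet below recomputes the relative increases quoted above from the mean scores (the first value in each pair); it reproduces the reported percentages up to rounding.

```python
# Each reported percentage is (after - before) / before for the mean scores.
pairs = {
    "asthenic, frontal":      (7.43, 9.43),   # reported +26.9%
    "asthenic, sagittal":     (8.14, 9.00),   # reported +10.5%
    "regular, frontal":       (8.09, 10.09),  # reported +24.7%
    "regular, sagittal":      (9.09, 10.03),  # reported +10.4% (computes to 10.3%)
    "hypersthenic, frontal":  (7.40, 9.40),   # reported +27.0%
    "hypersthenic, sagittal": (8.20, 9.60),   # reported +17.1%
}
for name, (before, after) in pairs.items():
    print(f"{name}: +{100 * (after - before) / before:.1f}%")
```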
Discussion The results of the analysis of scientific and methodological literature (Tomilina et al., 2018; Kashuba, Andrieieva, Goncharova, Kyrychenko, Karp, Lopatskyi, & Kolos, 2019b) indicate a significant number of posture disorders in adults, which is confirmed by the results of this research, in which 76.1% of women in the first period of a mature age had posture disorders. Fig. 1. Distribution of women of the first period of a mature age according to the bio-geometric profile of the posture. In addition, the results obtained during the study are consistent with the results of Bibik (2013), according to which only about 25% of women of the first period of a mature age have normal posture, and the most common violation of posture among this contingent is a violation in the frontal plane, scoliotic posture. Regular posture is an indicator that affects the activity of other organs and systems of the body (Ivchatova, 2010), which is confirmed by the results of factor analysis of physical development, physical fitness and motor activity of women in this study, where "Posture status" accounts for 7.80% of the total variance of the sample. In the course of the research, the peculiarities of posture disorders in women with different body types were clarified; the largest number of posture disorders was observed in women with asthenic body type. These results confirm the research of other scientists (Ivchatova, 2010). The analysis of scientific and methodological literature (Ivchatova, 2010; Ivanchykova et al., 2018; Hakman et al., 2020) allowed defining means of physical education which are applied in the course of physical fitness for the contingent of women of the first period of a mature age for the purpose of prevention and correction of posture disorders. Aqua fitness occupies a special place among these means. In particular, there is evidence of the effective use of aqua fitness in the physical education of students (Zhuravlev & Malikov, 2019). For example, Usova et al. (2014) state that aqua fitness exercises not only provide a higher health effect than other types of fitness and promote the resistance of female students to the effects of temperature fluctuations, but also have a significant health and strengthening effect in preventing spine distortions and forming the correct posture. Considering health-promoting aqua aerobics as a means of hydro rehabilitation for students of special medical groups, Balamutova et al. (2011) draw attention to the effectiveness of its use in order to strengthen virtually all muscle groups and correct posture without the risk of injury. The peculiarity of the proposed authors' approach to the aqua fitness classes is the differentiation of means depending on the posture of women of the first period of a mature age and their body type, the effectiveness of which has been proven in the process of the pedagogical experiment. Conclusions and perspectives of further research The results of factor analysis of the structure of physical development, physical fitness, and motor activity of women of the first period of a mature age indicate the presence of six factors (60.03% of the total variance). The "Posture status" of women (7.8%) and "Body type" (5.46%) make a significant contribution to the content of the factor structure, which determines the directions of influence during the programming of physical fitness activities for this contingent.
The research has proposed the program of aqua fitness classes, the content of which is differentiated in accordance with the peculiarities of posture and body type of women of the first period of a mature age. The study of the effectiveness of an aqua fitness program based on the body type of women of the first period of a mature age shows that women had positive changes in posture. If, at the beginning of the experiment, only 23.9% of women had normal posture, then after the experiment their proportion increased statistically significantly (p < 0.05) to 45.7%. It should be noted that both before and after the experiment, the proportion of women with normal posture was the highest among women of regular body type, namely 26.5% at the beginning and 47.1% at the end of the pedagogical experiment. Moreover, the most common violation recorded in the examined women was scoliotic posture, 32.6% among all study participants. This study has shown that the proportion of subjects with normal posture in the total sample significantly increased at the end of the experiment (p < 0.05); however, these changes in each of the groups of women with different body types were statistically insignificant (p > 0.05). The analysis of the dynamics of the bio-geometric profile of posture confirms the improvement of posture in representatives of different somatotypes: for women of asthenic type, the assessment of the bio-geometric profile of posture increased statistically significantly (p < 0.05) from (15.57; 4.39) to (18.43; 4.65) points; for women of regular type there was a statistically significant (p < 0.05) increase in the indicator from (17.18; 4.06) to (20.12; 4.06) points; and in representatives of hypersthenic type, a statistically significant (p < 0.05) increase from (15.60; 4.56) to (19.00; 4.06) points was observed. Therefore, it can be argued that aqua fitness has a positive effect on the posture of women of the first period of a mature age, regardless of their body type.
2020-10-19T18:09:54.191Z
2020-09-25T00:00:00.000
{ "year": 2020, "sha1": "bdf155dbb3c6905461695fafb3c4366fe5ddaa23", "oa_license": "CCBY", "oa_url": "https://tmfv.com.ua/journal/article/download/1361/1371", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "d19824a5c25c1745f015fe9a4450df1f97cc1546", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Psychology" ] }
12235483
pes2o/s2orc
v3-fos-license
Uni-MUMAC: A Unified Down/Up-link MU-MIMO MAC Protocol for IEEE 802.11ac WLANs Due to the dominance of the downlink traffic in Wireless Local Area Networks (WLANs), a large number of previous research efforts have been put to enhance the transmission from the Access Point (AP) to stations (STAs). The downlink Multi-User Multiple-Input Multiple-Output (MU-MIMO) technique, supported by the latest IEEE amendment-802.11ac, is considered as one of the key enhancements leading WLANs to the Gigabit era. However, as cloud uploading services, Peer-to-Peer (P2P) and telepresence applications get popular, the need for a higher uplink capacity becomes inevitable. In this paper, a unified down/up-link Medium Access Control (MAC) protocol called Uni-MUMAC is proposed to enhance the performance of IEEE 802.11ac WLANs by exploring the multi-user spatial multiplexing technique. Specifically, in the downlink, we implement an IEEE 802.11ac-compliant MU-MIMO transmission scheme to allow the AP to simultaneously send frames to a group of STAs. In the uplink, we extend the traditional one round channel access contention to two rounds, which coordinate multiple STAs to transmit frames to the AP simultaneously. The 2-nd round Contention Window (CW 2nd ), a parameter that makes the length of the 2-nd contention round elastic according to the traffic condition, is introduced. Uni-MUMAC is evaluated through simulations in saturated and non-saturated conditions when both downlink and uplink traffic are present in the system. We also propose an analytic saturation model to validate the simulation results. By properly setting CW 2nd and other parameters, Uni-MUMAC is compared to a prominent multi-user transmission scheme in the literature. The results exhibit that Uni-MUMAC not only performs well in the downlink-dominant scenario, but it is also able to balance both the downlink and uplink throughput in the emerging uplink bandwidth-hungry scenario. 3) Simulations and analysis are conducted to find the value of the 2-nd round Contention Window (CW 2nd ) that obtains the highest system throughput. 4) With the optimized CW 2nd and other properly configured parameters (e.g., the number of aggregated frames and the queue length of the AP), Uni-MUMAC is then extensively evaluated through simulations in the downlink-dominant and the down/up-link balanced traffic scenarios in IEEE 802.11ac WLANs. The rest of the paper is organized as follows. First, Section 2 investigates the MU-MIMO MAC proposals in the literature. Then, Section 3 explains the modified frame structure, Uni-MUMAC operating procedures and all design details. After that, Section 4 gives the considered scenarios to evaluate Uni-MUMAC, the maximum theoretical throughput, simulation results and observations. Finally, Section 5 concludes the paper and discusses the future research challenges. Related Work Most previous work has put effort into adjusting MAC parameters or extending MAC functions to improve the performance of WLANs. In the downlink, the spatial multiplexing technique has recently gained much attention. To support it, many proposals in the literature adopt the following MAC procedure. The AP first sends out a modified Request to Send (RTS) containing a group of targeted STAs; then those listed STAs estimate the channel, add the estimated Channel State Information (CSI) into the extended Clear to Send (CTS) and send it back. As soon as the AP receives all successful CTSs, it precodes the outgoing signals and sends multiple data frames simultaneously. Cai et al.
in [9] propose a distributed MU-MIMO MAC protocol that modifies RTS and CTS frames to estimate the channel, based on which, the AP is able to concurrently transmit frames to multiple STAs. Kartsakli et al. in [10] consider an infrastructured WLAN and propose four multi-user scheduling schemes to simultaneously transmit frames to STAs. The results show that the proposal achieves notable gains compared to that of the single user case. Gong et al. in [11] propose a modified Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) protocol with three different ACK-replying mechanisms. The authors claim that the proposed protocol can provide a considerable performance improvement against the beamforming based approach when the Signal-to-noise Ratio (SNR) is high. Zhu et al. in [12] investigate the required MAC modifications to support downlink MU-MIMO transmissions focusing on the fairness issue. The proposed Transmit Opportunity (TXOP) sharing scheme not only obtains a higher throughput but is also fairer than the conventional mechanism. Cha et al. in [13] compare the performance of a downlink MU-MIMO scheme with a Space Time Block Coding (STBC) based frame aggregation scheme. The results show that the former produces a higher throughput than the latter if transmitted frames are of similar length. The uplink enhancement is getting more attention as the popularity of P2P applications and cloud services increases. In general, there are two broad categories of uplink MU-MIMO MAC enhancement, namely, the un-coordinated access and the coordinated access. The former utilizes the MAC random mechanism to decide which STAs are allowed for data transmissions, while the latter employs the AP to schedule STAs' uplink access. Some of the un-coordinated uplink access schemes are sampled as follows. In [14], Jin et al. propose a protocol which divides the MAC procedure into two parts, namely, the random access and the data transmission. The random access terminates when the AP receives a predefined number of successful RTSs, and then the data transmission follows. In [20], Zhang et al. further extend the two contention rounds to multiple rounds, which enables more STAs to be involved in parallel uplink transmissions. The proposed protocol can fall back to the single-round mode automatically on condition that the traffic is low and the single-round scheme can provide higher throughput. In [21], Jung et al. consider multiple data frame transmissions in the downlink and multiple control frame receptions (e.g., CTSs or ACKs) in the uplink, while simultaneous data transmissions from multiple STAs are not considered. In [24], Jin et al. focus on the unbalanced throughput problem between downlink and uplink, where a Contention Window (CW) adjustment scheme and a random piggyback scheme are proposed to increase the downlink throughput ratio. Uni-MUMAC Operation Uni-MUMAC is based on the IEEE 802.11 Enhanced Distributed Channel Access (EDCA), which relies on the CSMA/CA mechanism to share the wireless channel. EDCA can operate in either the basic access mode or the optional RTS/CTS handshaking one. In this paper, Uni-MUMAC adopts and extends the RTS/CTS scheme for the following reasons: 1) The AP can notify the uplink contending STAs about the number of available antennas by using a modified control frame; 2) The AP can estimate the CSI from the RTS/CTS exchanging process; 3) The distributed STAs can be easily synchronized for simultaneous uplink transmissions from the RTS/CTS exchanging process.
Frame Structure The PHY frame structure of IEEE 802.11ac is shown in Figure 1. In the uplink, all frame modifications are limited to the AP side to reduce STAs' computing consumption. These modified frames are Ant-CTS (CTS with the antenna information), G-CTS (Group CTS) and G-ACK (Group ACK), as shown in Figure 3. An Antenna Information field is added to Ant-CTS, which is broadcast by the AP to announce the number of available antennas and the start of the 2-nd contention round. G-CTS and G-ACK have the identical frame structure, where the receiver address field is removed and replaced by the Group-ID field in the IEEE 802.11ac PHY frame, while a transmitter address field is added to indicate the AP address. The G-CTS frame is used to inform STAs of the start of the data transmission, and G-ACK is used to indicate the successful reception of data frames. After the channel has been idle for an Arbitration Inter Frame Space (AIFS), a random back-off (BO) drawn from CW starts to count down and is frozen as soon as the channel is detected as busy. Successful Downlink Transmissions Suppose the AP first wins the channel contention and sends a MU-RTS. Then, the STAs who are included in the Group-ID field reply with MU-CTSs sequentially in the indicated order. Those STAs who are not included in the MU-RTS will set the Network Allocation Vector (NAV) to defer their transmissions. After a MU-CTS is received, the AP will measure the channel through the training sequence included in the PHY preamble, and then use the estimated CSI to precode the simultaneously-transmitted frames. Being precoded, the frames destined to different STAs will not interfere with each other. Finally, STAs send MU-ACKs at the same time to acknowledge the successful reception of data frames. Note that the uplink channel is assumed to be the same as the downlink one in this paper. In other words, implicit CSI feedback is adopted, namely, the AP estimates the channel using the training sequence included in the MU-CTS. The reason is that the explicit CSI feedback would need more computing capability at STAs and require an extra field with substantial volume in the MU-CTS to include the measured CSI, which may not be suitable for STAs in some capacity or power constrained scenarios. Successful Uplink Transmissions In the uplink, a standard RTS is sent to the AP by the STA that won the 1-st round channel contention. Instead of replying with a CTS, an Ant-CTS is broadcast by the AP with two functions: 1) to notify the STA about the successful reception of the RTS, and 2) to inform other STAs about the number of available antennas and the start of the 2-nd contention round. The STAs who have frames to send will compete for the available spatial streams in the 2-nd contention round. A new random BO (BO 2nd ) drawn from CW 2nd starts to count down, and a RTS will be sent if BO 2nd of a STA reaches 0. The number of available antennas of the AP decreases by one each time an uplink RTS is successfully received. The 2-nd contention round finishes when: 1) all available antennas of the AP are occupied, or 2) a predefined duration of the 2-nd contention round elapses in case there are not enough contending STAs (the maximum duration of the 2-nd contention round is set to CW 2nd slots). As soon as the 2-nd contention round finishes, a G-CTS is sent by the AP to indicate the readiness for receiving multiple frames in parallel.
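A minimal sketch of the modified control frames described above, with the field layouts simplified to the elements named in the text (real 802.11 frames carry additional fields, and the class and field names below are ours):

```python
# Sketch of the Uni-MUMAC control frames from Figure 3; simplified, not a
# full 802.11 frame definition.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AntCTS:
    """CTS extended with an Antenna Information field."""
    transmitter: str        # AP address
    antenna_info: int       # number of available AP antennas (starts the 2-nd round)

@dataclass
class GroupFrame:
    """Shared layout of G-CTS and G-ACK: the receiver address field is
    replaced by the Group-ID carried in the 802.11ac PHY frame, and a
    transmitter address field is added to indicate the AP."""
    group_id: int
    transmitter: str
    members: List[str] = field(default_factory=list)  # granted / acknowledged STAs

ant_cts = AntCTS(transmitter="AP", antenna_info=3)
g_cts = GroupFrame(group_id=1, transmitter="AP", members=["STA1", "STA2", "STA3"])
```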
The G-CTS frame includes the addresses of STAs who have successfully sent RTSs during both the 1-st and 2-nd contention rounds. When the G-CTS is received by the targeted STAs, they are synchronized to send data frames to the AP simultaneously. Finally, the AP acknowledges the received data frames with a G-ACK. An example of a successful uplink transmission is shown in Figure 5, where the AP has 3 antennas. It is important to point out that the RTSs sent by STAs in the 2-nd contention round could collide with the G-CTS sent by the AP. For example, the RTS sent by a STA who claims the AP's last available antenna is not heard by some STAs (hidden terminals), which therefore believe that the AP still has available antennas. Then, after a Short Inter Frame Space (SIFS) interval, the G-CTS sent by the AP and RTSs sent by the hidden STAs would collide. To avoid this unexpected scenario, STAs are forced to wait for a Multi-User SIFS interval (MU-SIFS, an interval that is longer than SIFS but shorter than AIFS) in the 2-nd contention round, which gives the AP priority to send the G-CTS. Frame Collisions Collisions will occur in both the 1-st and 2-nd contention rounds if more than one STA chooses the same random back-off value. On sending a RTS, EDCA specifies that the STA has to set a timer according to Equation (1) to receive the expected CTS, where T CTS represents the transmission duration of a CTS frame. If the CTS is not received before the timer expires, the STAs who previously sent RTSs assume that collisions occurred. These RTS-sending STAs will compete for the channel access after the expiration of the timer. For the RTS-receiving STAs, none of the RTSs can be decoded correctly. Therefore, after the collision time, the receiving STAs will wait for an Extended Inter Frame Space (EIFS, as shown in Equation (2)) interval to compete for the channel access together with those RTS-sending STAs. An example of this collision-handling process is shown in Figure 6. Other Considerations In IEEE 802.11 EDCA, a STA renews its BO if the channel contention was successful. For the STAs who did not win the contention, the frozen BO is used for the next channel contention. In this paper, the BO of the 1-st contention round is renewed after collisions in the 1-st round or if the STA is the initiator of the two-round process. Although both STA 1 and STA 2 participate in the transmission as shown in Figure 7, STA 1 is considered to be the initiator. In other words, STA 1 will have a new random BO in the following 1-st contention round, while STA 2 will use the frozen BO. It is more straightforward regarding BO 2nd : each STA draws a fresh BO 2nd from CW 2nd as soon as a new 2-nd contention round starts. The G-CTS will be sent out by the AP depending on whether the number of available antennas reaches zero or the duration of the 2-nd contention round expires. As soon as the Ant-CTS is sent, the AP sets the G-CTS timer according to Equation (5).
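The toy Monte Carlo below re-implements the 2-nd contention round described above in a simplified slotted model. It is an illustration, not the authors' simulator: we assume that two or more STAs picking the same BO 2nd slot collide and stop contending for the rest of the round, and countdowns are not frozen during transmissions.

```python
# Toy model of one Uni-MUMAC 2-nd contention round (simplified assumptions).
import random

def second_round(num_stas, n_antennas, cw2nd, rng=random):
    """Return the STAs granted spatial streams in one 2-nd contention round."""
    granted = ["initiator"]                       # winner of the 1-st round
    free = n_antennas - 1                         # one stream claimed by the initiator
    bo = {f"STA{i}": rng.randrange(cw2nd) for i in range(num_stas - 1)}
    for slot in range(cw2nd):                     # round lasts at most CW_2nd slots
        tx = [s for s, b in bo.items() if b == slot]
        if len(tx) == 1 and free > 0:             # lone RTS: claims one antenna
            granted.append(tx[0])
            free -= 1
        for s in tx:                              # success or collision: done this round
            del bo[s]
        if free == 0:                             # all antennas occupied: AP sends G-CTS
            break
    return granted

random.seed(1)
print(second_round(num_stas=8, n_antennas=4, cw2nd=8))
```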
Performance Evaluation Uni-MUMAC is implemented in C++ using the Component Oriented Simulation Toolkit (COST) library [27] and evaluated in the SENSE simulator [28]. The parameters used in the simulation are listed in Table 1. An example of calculating the duration of a MU-RTS frame and a data frame using these parameters is given in Equation (6); the detailed calculation of the frame duration can be found in [29]. Considered Scenarios and Maximum Throughput The theoretical maximum saturation throughput of the downlink and the uplink of Uni-MUMAC are given in Equations (7) and (8) to compare with what can be obtained from simulations. The maximum throughput is calculated by assuming: 1) no collisions in either contention round; 2) only one-way traffic is present; 3) the number of STAs M is always higher than the number of antennas at the AP, which enables all the AP's antennas to be fully utilized. In the case of N = 4 and N f = 1, the maximum downlink and uplink throughput follow from Equations (7) and (8). System Performance against CW 2nd In this sub-section, the performance of Uni-MUMAC is evaluated by increasing CW 2nd , with the goal to find a suitable CW 2nd that maximizes the system performance. Two traffic conditions are considered: 1) the saturated one, as shown in Figure 9, and 2) the non-saturated one, as shown in Figure 10. Note that the saturated condition refers to the case where both the AP and STAs always have frames to transmit. Obviously, there is no 2-nd round channel access when the AP has 1 antenna, which is why the results remain constant for N = 1. When the WLAN is in the saturated condition (i.e., both downlink and uplink are saturated), the impact of increasing CW 2nd on the downlink throughput (AP's throughput) is very small. However, for the uplink, a clear advantage of using the higher number of antennas and the importance of choosing an appropriate CW 2nd are observed. For example, the uplink throughput (STAs' throughput) approaches its maximum when CW 2nd ∈ [8, 12] for M = 8 (Figure 9(a)) and when CW 2nd ∈ [12, 16] for M = 15 (Figure 9(b)). In the non-saturated condition, we set the traffic load for each STA and the AP to 1.4 Mbps and 11.2 Mbps, respectively. In Figure 10(a), the downlink throughput when the AP has 2 and 4 antennas obtains the highest value when CW 2nd ∈ [4, 8] and then decreases as CW 2nd keeps increasing. The reason for that is that the continuous increase of CW 2nd leads to longer uplink transmissions that harm the downlink. Figure 10(b) shows that the average delay increases as CW 2nd increases. Note that the average delay remains at a relatively low level when the system is in the non-saturated condition, for example, the average delay of STAs when CW 2nd ∈ [4, 34] and the average delay of the AP when N = 4 and CW 2nd ∈ [4, 8]. However, the average delay of the AP (for N = 4) increases sharply as the downlink traffic approaches saturation. It is also observed that the downlink throughput, as the network becomes saturated, is much lower than both the uplink one and the theoretical one. The reasons are as follows. First, the AP bottle-neck effect: it arises because the AP manages all traffic to and from STAs in a WLAN, while it has the same probability to access the channel as the STAs due to the random back-off mechanism of CSMA/CA. Second, the inherently high traffic load at the AP results in the downlink being saturated most of the time. Third, a favorable value of CW 2nd for the uplink does not mean the same benefit to the downlink. For example, as shown in Figure 9, the uplink obtains the highest throughput when CW 2nd is close to the number of STAs, while the downlink transmission prefers a value of CW 2nd as small as possible.
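The same toy model can be swept over CW 2nd to show the trend behind Figure 9: a small CW 2nd wastes antennas on collisions, while a large CW 2nd spends too many idle slots, so a crude streams-per-slot efficiency proxy peaks when CW 2nd is near the number of STAs. Again, this is a sketch under the simplifying assumptions stated earlier, not the paper's throughput model.

```python
# Sweep CW_2nd in the simplified slotted model of the 2-nd contention round.
import random

def efficiency(m_stas, n_ant, cw2nd, trials=20000, seed=7):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        free = n_ant - 1                          # initiator already holds one stream
        bo = [rng.randrange(cw2nd) for _ in range(m_stas - 1)]
        granted, slots = 1, cw2nd
        for slot in range(cw2nd):
            tx = sum(1 for b in bo if b == slot)
            if tx == 1 and free > 0:              # lone RTS in this slot succeeds
                granted += 1
                free -= 1
            if free == 0:
                slots = slot + 1                  # round ends early with G-CTS
                break
        total += granted / (1 + slots)            # crude streams-per-slot proxy
    return total / trials

for cw in (4, 8, 12, 16, 24, 32):
    print(f"M=8, N=4, CW_2nd={cw:2d}: efficiency {efficiency(8, 4, cw):.3f}")
```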
In order to mitigate the AP bottle-neck effect and compensate for the downlink disadvantage when STAs choose a large CW 2nd , we set the maximum number of frames the AP can aggregate to the number of STAs (N f ≤ M), while keeping the number of frames aggregated by each STA to 1 in the following simulations. Also, the queue length of the AP is set to increase quadratically with the number of STAs (Q ap = M 2 ) to statistically guarantee that there are enough frames destined to different STAs [29]. In Figures 11 and 12, the performance of Uni-MUMAC is evaluated under the same conditions as in Figures 9 and 10. The adopted aggregation and queueing settings avoid the extremely low downlink throughput when the system is saturated (Figure 11) and keep the downlink transmission always in the non-saturation area (Figure 12(a)), which is not achieved in Figure 10(a). The average delay of the AP (Figure 12(b)) is much lower compared to that of the AP in Figure 10(b), which is because the system remains in the non-saturated condition by employing the frame aggregation scheme. Therefore, the optimum value of CW 2nd is fixed to M in the following simulations. Two traffic scenarios are then considered: 1. Downlink-dominant: This is the traditional WLAN traffic scenario, where the AP manages a much heavier traffic load compared to that of STAs. Therefore, the traffic load of the AP is set to be 4 times higher than that of each STA. For example, in case the traffic load of a STA is 0.8 Mbps and there are 5 STAs, the traffic load of the AP will be 4 · 0.8 · 5 = 16 Mbps. 2. Down/up-link balanced: This is one of the WLAN traffic types that not only includes P2P applications, which have already been around for some years, but also includes those emerging content-rich file sharing and video calling applications, where the traffic of downlink and uplink is balanced. Therefore, the traffic load of the AP is set to be the same as that of each STA. In this case, if there are 5 STAs, and each STA has 0.8 Mbps traffic load, the traffic load of the AP will be 0.8 · 5 = 4 Mbps. Figures 13 and 14 show the system performance in the downlink-dominant traffic scenario. The advantage of employing a higher number of antennas at the AP is obvious. The downlink throughput is much higher than the uplink one before the system gets saturated. The reasons for that are twofold: 1) the AP traffic load is inherently higher than that of STAs, and 2) the AP adopts the frame aggregation scheme. The average delays grow significantly as the downlink or the uplink traffic approaches saturation. After the system gets saturated, the average delay becomes steady. It is worth pointing out that the average delay of STAs is higher than that of the AP when M becomes bigger. The reason for that is that the transmission duration of the AP gets longer as M increases (due to the frame aggregation scheme), which makes STAs wait longer to access the channel. Figure 15 shows that the 1-st round collision probability of the AP and STAs increases with M and converges when the system becomes saturated, which confirms the down/up-link saturation trend discussed in Figures 13 and 14. It is interesting to note that the collision probability of STAs is higher than that of the AP when the system is non-saturated. The reason for that is that a STA transmits less frequently than the AP in the non-saturated condition, which results in a lower conditional collision probability for the AP. It can be clearly explained by Equation 9, where p ap and τ ap (p sta and τ sta ) are the 1-st round collision probability and the transmission probability of the AP (or a STA).
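Equation 9 itself is not reproduced in this excerpt; assuming it takes the standard Bianchi-style form, in which a node's conditional collision probability is the chance that at least one other node transmits in the same slot, the relation between p and τ can be sketched as follows:

```python
# Assumed standard form of Equation 9 (the equation is not reproduced in this
# excerpt): a node collides whenever at least one other node transmits.
def p_ap(tau_sta, m):               # AP collides if any of the M STAs transmits
    return 1 - (1 - tau_sta) ** m

def p_sta(tau_ap, tau_sta, m):      # a STA collides with the AP or another STA
    return 1 - (1 - tau_ap) * (1 - tau_sta) ** (m - 1)

# Non-saturated example: STAs transmit less often than the AP, so p_sta > p_ap,
# matching the observation in the text. The tau values are hypothetical.
tau_ap, tau_sta, m = 0.05, 0.01, 15
print(f"p_ap={p_ap(tau_sta, m):.3f}  p_sta={p_sta(tau_ap, tau_sta, m):.3f}")
```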
Figure 16 shows the 2-nd round collision probability against M. It is clear that the 2-nd round collision probability is higher when the system traffic load is higher. When the number of STAs is low, the 2-nd round collision probability when the AP has 2 antennas is sometimes lower than that when the AP has 4 antennas. The reason is that a higher number of antennas at the AP usually means a longer duration of the 2-nd contention round, which increases the chances of collisions in the 2-nd round. For example, in a case where the AP employs 2 antennas, the 2-nd contention round finishes as soon as a STA successfully wins the still-available antenna of the AP, while in a case where the AP employs more than 2 antennas, the 2-nd contention round continues, therefore increasing the 2-nd round collision probability. Conclusions & Future Research Challenges In this paper, a unified MU-MIMO MAC protocol called Uni-MUMAC, which supports simultaneous downlink and uplink transmissions for IEEE 802.11ac WLANs, is proposed and evaluated. By analyzing the simulation results, we observe that the 2-nd round Contention Window CW 2nd , which is tuned to optimize the uplink transmission, does not bring the same benefit to the downlink one. An adaptive frame aggregation scheme and queue scheme are applied at the AP to offset this disadvantage. By properly setting all the parameters, the results show that a WLAN implementing Uni-MUMAC is able to avoid the AP bottle-neck problem and performs very well in both the traditional downlink-dominant and emerging down/up-link balanced traffic scenarios. The results also show that a higher system capacity can be achieved by employing more antennas at the AP. Uni-MUMAC gives us insight into the interaction of down/up-link transmissions and how different parameters that control the system can be tuned to achieve the maximum performance. Based on the study of this paper, we consider the following aspects as the future research challenges or next steps for Uni-MUMAC. 1. Adaptive Scheduling Scheme: As discussed in the paper, a parameter that optimizes the uplink could be unfavorable to the downlink. Therefore, an adaptive scheduling algorithm that takes several key parameters into account and compensates those STAs whose interests are harmed would play a significant role in obtaining the maximum performance while maintaining fairness. As implied by the paper, these parameters include: the key parameters controlling down/up-link transmissions, the spatial-stream/frame allocation, the number of nodes/antennas, the size of the A-MPDU and the queue length. 2. Traffic Differentiation: Another future research challenge is to provide new traffic differentiation capability in the uplink besides the one defined in the IEEE 802.11e amendment [30]. One option could be to limit the number of STAs that can participate in the 2-nd contention round to those with higher priority traffic. The other option could be to create a table at the AP with information about the priority of each traffic flow and the queue length of each STA or Access Category, and then to utilize this table to control the 2-nd contention round. 3. Multi-hop Mesh Networks: In multi-hop wireless networks, the hidden-node problem also needs to be considered. Finding mechanisms that efficiently resolve the collisions caused by hidden nodes is still an open challenge. For example, a collision-free scheme [31] could be a good option in wireless mesh networks.
In addition, MAC protocols have to consider that all nodes may have the same number of antennas, and therefore have to be able to support both multi-packet transmission and multi-packet reception at the same time. Finally, MAC and routing protocols need to be jointly designed. There could be multiple destinations involved in a MU-MIMO transmission, and some destinations could be out of the one-hop transmitting range, in which case routing strategies should be able to forward multiple packets to different nodes in parallel.
2014-09-22T03:08:15.000Z
2013-09-19T00:00:00.000
{ "year": 2013, "sha1": "abdf2f42bbd9a3bcd1b6713e6018fe6356ef91d6", "oa_license": null, "oa_url": "http://repositori.upf.edu/bitstream/10230/44237/1/liao_wirenet_mumac.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "abdf2f42bbd9a3bcd1b6713e6018fe6356ef91d6", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
198984734
pes2o/s2orc
v3-fos-license
Low-cost, high-speed near infrared reflectance confocal microscope: We have developed a low-cost, near-infrared (NIR) reflectance confocal microscope (RCM) to overcome challenges in the imaging depth and speed found in our previously-reported smartphone confocal microscope. In the new NIR RCM device, we have used an 840 nm superluminescent LED (sLED) to increase the tissue imaging depth and speed. A new confocal detection optics has been developed to maintain high lateral resolution even when a relatively large slit width was used. The material cost of the NIR RCM device was still low, ~$5,200. The lateral resolution was 1.1 µm and 1.3 µm along the vertical and horizontal directions, respectively. Axial resolution was measured as 11.2 µm. In vivo confocal images of human forearm skin obtained at the imaging speed of 203 frames/sec clearly visualized characteristic epidermal and dermal cellular features of the human skin. Imaging performance test Lateral resolution of the low-cost NIR confocal microscope was measured by imaging a USAF resolution target. The FWHM of the line spread function (LSF) was calculated along the spectrally-encoded and slit-length directions. Axial resolution was measured by translating a mirror along the objective lens optical axis with a motorized stage and calculating the FWHM of the axial response curve. The source power was attenuated during the resolution measurement to ensure that the pixel values were not saturated. Tissue imaging performance was evaluated by imaging human forearm in vivo at different imaging depth levels. The forearm skin surface was placed parallel to the focal plane of the objective lens. Ultrasound gel with a similar refractive index to that of water was applied between the forearm and the objective lens. The exposure time was set at 4.8 msec and the resulting frame rate was 203 fps. The microscope was translated relative to the forearm using a motorized stage. The motor speed was set to 1 mm/sec and the scan range to 500 µm. The maximum acceleration of the motor was 4 mm/sec 2 , which produced an acceleration time of 0.25 sec and a deceleration time of 0.25 sec. At the center of the axial scanning, the uniform speed of 1 mm/sec was maintained over a 250 µm range. Within the uniform speed region, the axial step size between frames was 5 µm. The skin surface was located at the beginning of the uniform velocity region. A bi-directional axial scan was conducted. The resulting 3D volume acquisition rate was 1.33 volumes/sec. Images were saved as an AVI file using a custom LabVIEW code (National Instruments, Austin, TX). At the end of each axial scan, the confocal FOV was manually moved to a new imaging location and the axial scanning was conducted at the new imaging location. After image acquisition, the AVI file was segmented into multiple image stacks with each stack representing one axial scan. The image stacks were analyzed in ImageJ [18]. The background intensity level was measured and subtracted. 3D rendering of the image stacks was conducted using 3D Slicer [19].
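The quoted scan parameters can be checked with a few lines of arithmetic (values taken directly from the text):

```python
# Quick check of the axial-scan timing quoted above.
v, a = 1.0, 4.0                     # motor speed [mm/s], max acceleration [mm/s^2]
scan_range, fps = 0.5, 203          # scan range [mm], frame rate [frames/s]

t_acc = v / a                       # 0.25 s acceleration (and deceleration) time
d_acc = 0.5 * a * t_acc**2          # 0.125 mm covered while accelerating
uniform = scan_range - 2 * d_acc    # 0.25 mm of uniform-speed travel
step_um = v / fps * 1000            # ~4.9 um axial step between frames (~5 um)
t_scan = 2 * t_acc + uniform / v    # 0.75 s per one-directional scan
print(f"uniform region: {uniform*1000:.0f} um, step: {step_um:.1f} um, "
      f"volume rate: {1/t_scan:.2f} volumes/s")   # -> 250 um, 4.9 um, 1.33
```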
The speckle noise contrast was calculated by analyzing dermis images and dividing the standard deviation of the intensity values by the mean value at four 100 × 100-pixel regions that exhibited grossly uniform reflectivity without observable cellular features. The speckle noise contrast was measured at three different imaging depth levels. Results A photograph of the low-cost confocal microscope is shown in Fig. 4. The confocal microscope had a dimension of 15 cm (W) × 16 cm (H) × 4.5 cm (D), and the weight was 0.57 kg. The material cost for the confocal microscope was $5,188. The optical power on the specimen was 2.2 mW. The lateral resolution was measured as 1.1 µm and 1.3 µm along the spectrally-encoded and slit-length directions, respectively, and the axial resolution was measured as 11.24 ± 0.1 µm. Figure 6 shows in vivo confocal images of human forearm skin obtained at increasing imaging depths, visualizing keratinocytes and melanin-containing cells in the epidermis and the fiber network of the dermis. Discussion In this paper, we have demonstrated a low-cost NIR confocal microscope that visualized cellular features of in vivo human skin at an imaging speed of 203 frames/sec. The high imaging speed of the microscope is expected to allow for large-area imaging of the entire skin lesion within a short procedural time. We also expect that the low cost of the device will facilitate a wide adoption of the device in various clinical settings. There were several remaining technological challenges found during the preliminary testing. Even though a relatively wide slit was used, the speckle noise was still prominent in confocal images, which hindered the image interpretation. Use of the wide slit degraded the axial resolution. In the future development, we will address these two issues by using a high-power LED, which has a significantly reduced spatial coherence and therefore allows for use of a narrow slit width. The volumetric imaging rate was limited to 1.33 volumes/sec mainly due to the acceleration and deceleration of the axial scanning stage. A piezoelectric transducer (PZT)-based scanner can be used to achieve a higher volumetric imaging rate. In the new confocal detection optics, the CMOS sensor is located on the same side as the tissue, which will make it challenging to image certain anatomical locations such as the back or face. A fold mirror can be used between the grating and the camera lens to move the CMOS sensor away from the tissue and allow for imaging of a wider range of skin locations. In the future study of imaging suspicious skin lesions, we will evaluate the image quality of our microscope in comparison with a commercial confocal microscope and evaluate the feasibility of large-area scanning and real-time 3D imaging. Funding National Institutes of Health/Fogarty International Center (R21TW010221).
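A minimal sketch of the speckle-contrast measurement described above (std/mean over four 100 × 100-pixel regions); the synthetic frame and the region coordinates are hypothetical stand-ins for real dermis images:

```python
# Speckle contrast: standard deviation divided by mean over uniform regions.
import numpy as np

def speckle_contrast(image, corners, size=100):
    vals = []
    for (r, c) in corners:
        roi = image[r:r + size, c:c + size].astype(float)
        vals.append(roi.std() / roi.mean())
    return float(np.mean(vals))

rng = np.random.default_rng(0)
frame = rng.gamma(shape=4.0, scale=50.0, size=(512, 512))  # stand-in dermis frame
regions = [(50, 50), (50, 300), (300, 50), (300, 300)]     # hypothetical coordinates
print(f"C = {speckle_contrast(frame, regions):.2f}")
```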
2019-07-14T07:01:37.403Z
2019-06-21T00:00:00.000
{ "year": 2019, "sha1": "1467956e49d97219a28968f9baa8b1e58f7e14db", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1364/boe.10.003497", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "b7f00603ed16724f5f0a6c80b387dc165c26bb47", "s2fieldsofstudy": [ "Engineering", "Medicine", "Materials Science" ], "extfieldsofstudy": [ "Medicine", "Materials Science" ] }
244715988
pes2o/s2orc
v3-fos-license
Successful Milk Oral Immunotherapy Promotes Generation of Casein-Specific CD137+ FOXP3+ Regulatory T Cells Detectable in Peripheral Blood Background Oral immunotherapy (OIT) is an emerging treatment for cow's milk protein (CMP) allergy in children. The mechanisms driving tolerance following OIT are not well understood. Regulatory T (TREG) cells are key inhibitors of allergic responses and promoters of allergen-specific tolerance. In an exploratory study, we sought to detect induction of allergen-specific TREG in a cohort of subjects undergoing OIT. Methods Pediatric patients with a history of allergic reaction to cow's milk and a positive Skin Prick Test (SPT) and/or CMP-specific IgE >0.35 kU/L, as well as a positive oral challenge to CMP underwent OIT with escalating doses of milk and were followed for up to 6 months. At specific milestones during the dose escalation and maintenance phases, casein-specific CD4+ T cells were expanded from patient blood by culturing unfractionated PBMCs with casein in vitro. The CD4+ T cell phenotypes were quantified by flow cytometry. Results Our culture system induced activated casein-specific FOXP3+Helios+ TREG cells and FOXP3- TEFF cells, discriminated by expression of CD137 (4-1BB) and CD154 (CD40L) respectively. The frequency of casein-specific TREG cells increased significantly with escalating doses of milk during OIT while casein-specific TEFF cell frequencies remained constant. Moreover, expanded casein-specific TREG cells expressed higher levels of FOXP3 compared to polyclonal TREG cells, suggesting a more robust TREG phenotype. The induction of casein-specific TREG cells increased with successful CMP desensitization and correlated with increased frequencies of casein-specific Th1 cells among OIT subjects. The level of casein-specific TREG cells negatively correlated with the time required to reach the maintenance phase of desensitization. Conclusions Overall, effective CMP-OIT successfully promoted the expansion of casein-specific, functionally-stable FOXP3+ TREG cells while mitigating Th2 responses in children receiving OIT. Our exploratory study proposes that an in vitro TREG response to casein may correlate with the time to reach maintenance in CMP-OIT.
INTRODUCTION Cow's milk allergy (CMA) affects close to 0.6% of children under 2 years of age (1, 2). Up to 80% of children are expected to outgrow CMA by adulthood (3), but persistent CMA is a major risk factor for anaphylaxis due to accidental milk ingestion in school-age children (4). Cow's milk oral immunotherapy (CM-OIT) is emerging as an effective experimental approach to induce tolerance to milk protein, with up to 75% of patients successfully achieving desensitization (4)(5)(6)(7). However, there are still a number of patients who fail to achieve sustained unresponsiveness to CMP, lose their state of desensitization to CMP during the maintenance period or discontinue treatment despite the demonstrated clinical efficacy of CM-OIT (8). Furthermore, successful CM-OIT requires rigorous patient compliance; any deviation in protocol may prolong the length of time required to reach maintenance or increase the risk of developing an allergic reaction to the scheduled CMP doses (9). Undoubtedly, individual differences in immunity can also contribute to the variable clinical outcomes observed in CM-OIT studies. Many efforts have been made to identify clinically relevant biomarkers that predict individual CM-OIT outcomes, none of which have been successful thus far (10, 11). Since the clinical response to CM-OIT is highly variable, developing biomarkers that successfully predict the ability to achieve desensitization, time to reach maintenance or risk of developing adverse events during therapy would enable the individualization of CM-OIT and increase the safety of the procedure. Recently, investigators have focused on examining the upstream cellular mechanisms implicated in oral tolerance to food. Regulatory T cells (T REG ), a class of CD4 + T cells expressing the transcription factor Forkhead box P3 (FOXP3), have been of particular interest given their key roles in the induction and maintenance of peripheral tolerance to a plethora of self and non-self antigens (12). Allergen-specific T REG cells can suppress both innate and adaptive arms of an allergic response, preventing mast cell activation, IL-4 production, Th2 cell development and IgE production by B cells (13). T REG cells can be readily measured in the peripheral blood, and defects in their abundance and function have been implicated in the pathophysiology of food allergy (14). Indeed, mutations within the FOXP3 locus are associated with the development of severe food allergies due to a widespread loss of tolerance to innocuous antigens (15). Children with IgE-mediated food allergy have significantly lower FOXP3 expression compared to healthy controls (16, 17), and decreased frequencies of circulating T REG cells after allergen exposure (18)(19)(20). In patients with peanut allergy, OIT increases both the abundance and suppressive function of T REG cells as well as induces epigenetic changes such as hypomethylation of the FOXP3 locus required for maintenance of a stable suppressive T REG cell phenotype (21).
In children with milk allergy, those who tolerate baked milk have a higher frequency of peripheral blood casein-specific suppressive FOXP3+CD25+CD127- TREG cells than children who do not, and this correlates with a higher likelihood of achieving milk tolerance (14). Similarly, children who outgrow their milk allergy have higher levels of peripheral CD4+CD25+ TREG cells and lower in vitro T-cell proliferative responses to β-lactoglobulin than those who do not (22). However, while the frequencies of antigen-specific TREG cells and their secreted cytokines (IL-10, TGF-β) increase during OIT (23), neither successfully predicts OIT outcomes (10). In addition to potential disease heterogeneity and methodological variations that may have contributed to the failed prediction of OIT outcomes in these studies, the lack of reliable human TREG cell markers is a significant limitation. TREG cells are a functionally heterogeneous population (24, 25), and traditional markers like CD25, CD127 and FOXP3 do not adequately discriminate TREG from TEFF cells, particularly in settings of T cell activation like allergy (25, 26). Most commonly used TREG markers are also inducible on effector T cells (TEFF) upon TCR-mediated activation, blurring the distinction between human TREG and activated TEFF cells, increasing the functional heterogeneity of the population and confounding the interpretation of results (25). Importantly, we have previously shown that expression of the transcription factor Helios alongside FOXP3 can reliably discriminate stably suppressive TREG cells from TEFF cells in activated immune settings (25). Moreover, the differential expression of CD137 (4-1BB), a direct target of FOXP3, and CD154 (CD40 ligand) can further discriminate recently activated, functionally suppressive TREG from activated TEFF cells in human peripheral blood (27).

In this pilot CM-OIT clinical study, we performed an in-depth phenotypic characterization of CD4+ T cell subsets specific to casein, the major protein allergen in cow's milk. We aimed to evaluate whether CM-OIT induced casein-specific, stably suppressive FOXP3+Helios+ TREG cells and whether this cellular response correlated with successful OIT. Here, we characterized casein-specific TREG and TEFF cell phenotypes, based on differential CD137 (4-1BB) and CD154 (CD40L) expression, respectively, at several time-points during CM-OIT in 7 pediatric patients who successfully achieved CMP desensitization. We hypothesized that successful CM-OIT would require the expansion of casein-specific CD137+ TREG cells rather than the polyclonal expansion of total peripheral blood TREG. We propose that peripheral casein-specific CD137+ TREG responses during CM-OIT can be used to identify patients likely to achieve successful CMP desensitization and may correlate with the CM-OIT time to reach maintenance.

MATERIALS AND METHODS

Human Subjects

Seven patients were recruited from a prospective randomized controlled trial comparing adverse events in patients undergoing CM-OIT with patients who continued to avoid CMP. This study was conducted at the Pediatric Allergy and Clinical Immunology Department of the Montreal Children's Hospital (MCH) in Montreal, Quebec, Canada (4). Informed consent was obtained for every patient and the study was approved by the Research Ethics Board of the McGill University Health Center (PED-12-090).
Whole blood samples were obtained from 7 children who successfully completed CM-OIT (defined as a successful challenge to 200 ml of milk, or 8000 mg of milk protein) and from one healthy non-allergic control for comparison (a 26-year-old male), depicted in Figure 4. Briefly, for each study patient, IgE-mediated CMA was diagnosed by a compatible clinical history and positive skin prick testing (SPT) with commercial CMP extract (≥3 mm over the saline control) or a positive serum casein-specific IgE level (>0.35 kU/L). A placebo-controlled, single-blinded oral challenge to CM was used to confirm CMP allergy, and patients were assigned in a 1:1 ratio to either CM-OIT or CM avoidance for 1 year, with crossover at the end of this period. The CM-OIT protocol started with rush desensitization and was followed by an early escalation phase (E; dose escalation from 6 ml to 25 ml of CM), a late escalation phase (L; dose escalation from 125 ml to 200 ml of CM) and a maintenance phase (M; maintained at 200 ml of CM) (illustrated in Figure 1A). Blood samples were taken before OIT (baseline, B), during the E phase, during the L phase, and 6 months after reaching the M phase (4).

Peripheral Blood Mononuclear Cell and Lymphocyte Isolation

Whole blood samples were collected at the B, E, L and M timepoints, as well as from the healthy non-allergic control, as described above. PBMC were isolated from heparinized blood using Ficoll-based density gradient centrifugation. Isolated lymphocytes were labelled with CTV (CellTrace Violet) or CFSE (carboxyfluorescein diacetate succinimidyl ester) and distributed into 96-well flat-bottom plates at a concentration of 5 × 10^5 cells/well. Casein was dissolved in sodium hydroxide for 12 hours and adjusted to pH 7.3-7.4 with HCl before use. Lymphocytes were incubated with the prepared casein protein (500 mg/ml) or medium alone (RPMI 1640 supplemented with 10% Nu-Serum) and cultured at 37°C in a 5% CO2 humidified incubator for 10 days; fresh medium was replenished twice daily.

IgE and IgG Detection

Milk/casein-specific serum immunoglobulins were measured by ELISA. 96-well polystyrene plates were coated with casein or with capture antibodies for IgE or IgG4. Casein was dissolved using 1 M NaOH for 4 hours, and the protein concentration was adjusted to 20 µg/ml with coating buffer. Capture antibodies were diluted 1:3000 in coating buffer (pH 9.6). The coated plates were incubated overnight at 4°C and then washed twice with PBS-T (PBS, pH 6.8, with 0.05% Tween 20). The plates were blocked with 1% bovine serum albumin (BSA) in PBS-T for 2 hours at room temperature (RT) and washed, and 50 µl of milk OIT participant serum diluted in blocking buffer was added to the plates and incubated for 2 hours at RT. Each participant serum sample was added in duplicate. Serial dilutions of known concentrations of IgE or IgG4 standard were added to wells coated with IgE or IgG4 capture antibodies. Blank wells, wells containing only blocking buffer, and wells containing serum from non-milk-allergic healthy volunteers were used as negative controls. Following four washes with PBS-T, the plates were incubated for one hour at RT with biotinylated goat anti-human IgE antibody diluted 1:3000 or biotinylated mouse anti-human IgG4 antibody diluted 1:250 in blocking buffer. The plates were then washed twice with PBS-T and incubated for one hour at RT with streptavidin-HRP. After four washes with PBS-T, 50 µl of tetramethylbenzidine (TMB) was added to each well and incubated for 15 minutes at RT.
The reaction was stopped with 50 µl of 1 M phosphoric acid. The optical density was measured at 450 nm with a reference wavelength of 570 nm. Values were converted from ng/mL to kU/L by dividing by a factor of 2.4.

Statistical Analysis

A non-parametric one-way ANOVA followed by Dunn's multiple comparison post-test was used for longitudinal comparisons of parameters across more than two phases of the study (SPT wheal size, casein-specific sIgE and sIgG levels, changes in the proportions of peripheral TREG subsets), while a Wilcoxon signed-rank test was used for longitudinal comparisons across two phases only (frequencies of peripheral Th1 and Th2 cells). To determine correlations between CD137+ TREG cells and cytokine-producing TEFF cells or the number of escalation days, we conducted a Pearson correlation (an illustrative sketch of these comparisons appears after the next subsection). For comparisons of cell proportions or protein expression (MFI) between two or more T cell populations within a single phase of the study, a Wilcoxon signed-rank test was employed. A parametric unpaired Student's t-test or a two-way ANOVA with Tukey's post-test was used to determine significance in in vitro experiments completed in triplicate from a single individual. A two-sided p-value of <0.05 was considered statistically significant. Statistical analyses were performed using Prism 7 software (GraphPad, San Diego, CA).

RESULTS

Successful OIT Patients Show Decreased Cow's Milk SPT and Increased Casein-Specific IgG4 Responses

The details of the global trial design were recently published, and the design is depicted in Figure 1A (4). Seven children from this cohort who successfully achieved CMP-OIT maintenance dosing were randomly selected for this study. Baseline demographics and clinical characteristics of all subjects are outlined in Table 1. The mean age was 12 years and 4/7 subjects were female (57%). All patients reached the target maintenance dose of 200 ml, with an average escalation period of 266 days (range: 168-504, IQR=98). The mean cow's milk SPT wheal was 10.5 mm (range: 8-15, IQR=1.75) at study entry and 4.79 mm (range: 0.5-9, IQR=4) after 6 months of CM-OIT maintenance, representing a significant decrease from baseline (p=0.03) (Figure 1B). Casein-specific sIgE levels were available for all 7 patients, but sIgG4 levels were only available for 6/7 patients. No significant changes in casein-specific IgE levels were detected during the study period (p=0.15) (Figure 1C), whereas casein-specific IgG4 increased in all patients by the M phase (p=0.0071) (Figure 1D).

Desensitization Is Associated With Casein-Specific TEFF Cells With Altered Cytokine-Secreting Potentials

PBMC from each study subject were cultured with casein or Tetanus Toxoid (TT) for 10 days before T cell profiles were evaluated by flow cytometry. CM-OIT dose escalation was associated with increased expansion of IFN-γ-producing Th1 (CD4+FOXP3-) cells following in vitro casein challenge (Figures 2A, C, p=0.0625). In contrast, IL-4-producing Th2 cell expansion following casein challenge tended to decrease during CM-OIT dose escalation (Figures 2B, D, p=0.0625). Correspondingly, the ratio of Th1 to Th2 cells increased between the E and L phases, albeit not significantly (Figure 2E, p=0.0625). Analyses of Th1 and Th2 cells were only completed on 5 patients during the E and L phases due to sample availability. Our data demonstrate a deviation of circulating Th2 responses towards Th1 immunity over the course of CM-OIT.
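As a concrete illustration of the statistical workflow described in the Statistical Analysis section, the short Python sketch below reproduces the three classes of comparison used in this study on hypothetical placeholder values (not data from this study). SciPy's Kruskal-Wallis test stands in for the non-parametric one-way ANOVA (Dunn's post-test is available from the separate scikit-posthocs package), and the helper function applies the stated ng/mL-to-kU/L conversion.

```python
# Minimal sketch of the statistical comparisons described above, run on
# hypothetical placeholder values (not data from this study).
import numpy as np
from scipy import stats

def ng_per_ml_to_ku_per_l(conc_ng_ml):
    """Convert ELISA readouts from ng/mL to kU/L using the stated factor of 2.4."""
    return np.asarray(conc_ng_ml) / 2.4

# Hypothetical casein-specific CD137+ TREG frequencies (%) for 7 subjects
# at the Early (E), Late (L) and Maintenance (M) phases.
e = np.array([0.9, 1.2, 0.8, 1.5, 1.1, 1.7, 1.0])
l = np.array([1.8, 2.4, 1.6, 2.2, 2.0, 2.8, 1.9])
m = np.array([2.9, 3.5, 2.7, 3.0, 3.2, 4.1, 2.8])

# Non-parametric one-way ANOVA across the three phases (Kruskal-Wallis);
# Dunn's multiple comparison post-test would follow via scikit-posthocs.
h, p_kw = stats.kruskal(e, l, m)

# Paired comparison across two phases only (Wilcoxon signed-rank),
# as used for the Th1 and Th2 frequencies between E and L.
w, p_w = stats.wilcoxon(e, l)

# Pearson correlation, as used for CD137+ TREG frequency vs. escalation days.
days = np.array([168, 196, 230, 266, 310, 390, 504])  # hypothetical
r, p_r = stats.pearsonr(e, days)

print(f"Kruskal-Wallis p={p_kw:.4f}; Wilcoxon p={p_w:.4f}; "
      f"Pearson r={r:.2f} (p={p_r:.4f})")
```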
Casein-Specific Expansion of Stably Suppressive FOXP3+Helios+ TREG Cells

To evaluate a potential increase in immunoregulation with CM-OIT, we aimed to characterize TREG cells both ex vivo and in our in vitro casein re-stimulation system. We compared the phenotypic definition of TREG cells using traditional markers (CD25high CD127low) to TREG cells defined by FOXP3 and Helios co-expression in a representative CMA patient before and after reaching maintenance dosing (Figures 3A, B). Indeed, we have previously shown that FOXP3+Helios+ TREG cells represent a stably suppressive population of TREG in healthy individuals (24, 25). Ex vivo and following in vitro stimulation with TT (antigen-specific T cell activation), the CD25high CD127low gating excluded more than half of the FOXP3+Helios+ TREG cells (Figures 3C, D). In contrast, after αCD3 stimulation (strong polyclonal T cell activation), the FOXP3+Helios+ gating was more stringent than the CD25high CD127low gating, with the latter definition also including FOXP3- TEFF cells and FOXP3+Helios- TREG cells alongside FOXP3+Helios+ TREG cells (Figures 3C, D). Thus, we decided to define TREG cells as FOXP3+Helios+ in both CM-OIT and our in vitro culture systems.

In the healthy, non-allergic control, casein stimulation elicited a weak FOXP3+Helios+ TREG proliferative response compared to stimulation with TT (Figures 4A, B). However, in subjects with CMA, stimulation with casein elicited a robust proliferative response in FOXP3+Helios+ TREG cells (Figure 4C), suggesting the presence of casein-specific TREG cells circulating in these patients.

Differential Expression of CD137 and CD154 Distinguishes Casein-Specific TREG Cells and TEFF Cells, Respectively

Recently, it was suggested that differential expression of CD137 and CD154 can identify antigen-specific TREG and TEFF cells, respectively, in human PBMC (27, 28). Hence, we utilized these markers to evaluate the presence of casein-specific T cells in our in vitro culture system. Proliferating TREG cells were characterized by significantly higher expression of CD137 than their non-proliferating counterparts (Figures 4C, D); similarly, proliferating TEFF cells expressed higher levels of CD154 than non-proliferating TEFF cells (Figures 4C, E). These results show that, within all casein-specific T cells, CD137 expression is confined to proliferating TREG cells whereas CD154 expression is confined to expanding TEFF cells; CD137 is therefore a marker of proliferating casein-specific TREG cells, and CD154 a marker of proliferating casein-specific TEFF cells. We then evaluated the difference between CD137+ TREG and CD137- TREG in terms of FOXP3 and Helios expression levels (Figure 5). While CD137+ TREG cells expressed higher levels of FOXP3 at each timepoint (E, L, M) (Figures 5B, C), Helios was differentially expressed between CD137+ TREG and CD137- TREG at the L and M phases (Figures 5D, E).
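To make the marker logic above concrete, the following minimal sketch shows one way the CD137/CD154 discrimination could be expressed over per-cell flow cytometry events exported to a table. The column names, thresholds and function are hypothetical illustrations rather than the study's actual analysis pipeline; in practice, positivity cutoffs would be set from FMO or isotype controls.

```python
# Hypothetical sketch of the gating logic described above, applied to CD4+
# events exported from flow cytometry into a pandas DataFrame. Column names
# and thresholds are illustrative placeholders, not values from this study.
import pandas as pd

def label_casein_specific_cells(events: pd.DataFrame, cutoff: dict) -> pd.Series:
    """Label each CD4+ event as a casein-specific TREG, TEFF, or neither.

    `cutoff` maps a marker column to its positivity threshold; the CTV cutoff
    marks dye-dilute (i.e. proliferating) cells, which fall BELOW it.
    """
    proliferating = events["CTV"] < cutoff["CTV"]  # dye dilution = cell division
    foxp3 = events["FOXP3"] > cutoff["FOXP3"]
    helios = events["Helios"] > cutoff["Helios"]
    cd137 = events["CD137"] > cutoff["CD137"]
    cd154 = events["CD154"] > cutoff["CD154"]

    labels = pd.Series("other", index=events.index)
    # Recently activated, stably suppressive TREG: FOXP3+ Helios+ CD137+
    labels[proliferating & foxp3 & helios & cd137] = "casein-specific TREG"
    # Activated effector cells: FOXP3- Helios- CD154+
    labels[proliferating & ~foxp3 & ~helios & cd154] = "casein-specific TEFF"
    return labels

# Per-phase frequencies would then follow from
# label_casein_specific_cells(events, cutoff).value_counts(normalize=True).
```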
Induction of Casein-Specific CD137+ TREG Cells Correlates With Milk Desensitization and an Attenuated Th2 Response, and Predicts the Time to the Maintenance Phase

Since all patients successfully achieved the target CM-OIT maintenance dose, we sought to determine whether TREG or TEFF responses could be used as a marker of milk desensitization. Using the TREG cell markers FOXP3 and Helios alone was insufficient to identify any differences in TREG responses to in vitro casein challenge in PBMC isolated during the E, L and M phases (Figures 6A, B). However, when TREG cell responses were stratified by CD137 expression, we observed that proliferating FOXP3+Helios+CD137+ TREG cells steadily increased during successful CM-OIT (Figure 6C). The proportion of FOXP3-Helios-CD154+ TEFF cells remained constant throughout the E, L and M phases (Figure 6D), suggesting that in vitro CD137+ TREG cell induction, rather than a reduction in antigen-specific CD154+ TEFF cells, is associated with casein desensitization. Moreover, we found that patients who reached the maintenance phase in under 36 weeks had a higher frequency of FOXP3+Helios+CD137+ TREG at M than patients who took more than 36 weeks to reach the maintenance phase (Figure 6C), suggesting that a higher frequency of FOXP3+Helios+CD137+ TREG may be related to reaching M earlier. In the early and late phases, the induction of FOXP3+Helios+CD137+ TREG cells correlated with an increase in the frequency of TEFF cells with a Th1 phenotype and in the Th1/Th2 ratio in vitro (Figures 6E, G). There was also a modest negative correlation between FOXP3+Helios+CD137+ TREG and the frequency of TEFF cells with a Th2 phenotype, albeit not significant (Figure 6F). Lastly, there was a negative correlation between the proportion of FOXP3+Helios+CD137+ TREG at E and the number of escalation days required to reach maintenance (Figure 6H); this was also observed at L and M, albeit not significantly (Figures 6I, J). This suggests that FOXP3+Helios+CD137+ TREG at E may correlate with the individual time to reach maintenance.

DISCUSSION

Cow's milk OIT is an effective treatment for inducing oral tolerance in milk-sensitized individuals. However, its clinical applicability is limited by the inability to predict the probability of achieving successful desensitization or sustained unresponsiveness. In this exploratory proof-of-concept study, we suggest that stably suppressive, casein-specific CD137+FOXP3+Helios+ TREG may be a good candidate biomarker for identifying patients most likely to achieve successful CMP desensitization and may be useful for predicting the time to reach maintenance in patients undergoing CM-OIT.

We characterized the immune parameters of 7 children with successful CM-OIT at several timepoints during treatment. We began by evaluating the standard published biomarkers, namely SPT to cow's milk, casein-specific sIgE levels, casein-specific sIgG4 levels, and peripheral casein-specific Th1 and Th2 cells. As expected, casein-specific sIgE levels remained relatively stable during the study period, cow's milk SPT size decreased, and casein-specific sIgG4 levels increased with successful desensitization. Most patients maintained a positive SPT to cow's milk and detectable casein-specific sIgE levels in the maintenance phase, demonstrating an ongoing potential for reactivity to CMP despite the clinical induction of desensitization.

Since allergen-specific T cell subsets are emerging as a potential prognostic indicator of OIT outcomes, we then examined casein-specific TEFF and TREG subsets at each phase of our study. To identify casein-specific T cells, we labelled PBMC with either CTV or CFSE proliferation dyes to identify expanding (CTVlow or CFSElow) subsets upon exposure to casein. We observed an expansion of IFN-γ-producing TEFF (Th1) cells in culture with casein, with a modest corresponding decrease in IL-4-producing TEFF (Th2) cells between the E and L phases, but this was not seen across the entire study period.
This observation is in keeping with previous reports that CM-OIT induces a shift away from the predominant Th2 response to milk protein early during the desensitization process (3). Mechanisms of tolerance likely differ between the dose escalation and maintenance phases, which may explain why Th1 prominence only increased significantly during dose escalation in our study. Although TEFF subsets may change during OIT, predictive thresholds, the appropriate timing of sampling and robust correlations with clinical phenotypes are lacking, and further studies are required to validate their clinical usefulness (10). Of note, we did not find any correlation between TEFF subtypes and the time to reach maintenance.

Induction of allergen-specific TREG cells has classically been shown to be a later effect of OIT, and a product of local differentiation of conventional T cells into allergen-specific TREG cells following allergen exposure. These induced TREG cells (iTREG) are less stable than their thymic-derived natural TREG (tTREG) counterparts and have the potential to lose their suppressive phenotype in specific inflammatory contexts (29). Although the mechanisms by which OIT mediates allergen tolerance have not been completely elucidated, stable TREG induction seems to be central for the achievement and maintenance of CMP desensitization, and loss of suppressive function or possible conversion of these cells to a Th2 cell phenotype could be associated with OIT failure (30). Previous studies have routinely evaluated TREG in the clinic to predict OIT responses, but have been limited by the availability and choice of relevant surface markers to identify functional TREG phenotypes (10). While both iTREG and tTREG cell subsets may be engaged in milk OIT, our results indicate that the emerging casein-specific TREG cells express Helios, a transcription factor more frequently associated with TREG cells of thymic origin (tTREG). Recently, however, Helios expression has also been shown to reflect TREG stability and suppressive function, rather than mere TREG lineage, as Helios acts to maintain the chromatin structure required for the induction and maintenance of the TREG developmental program (31). Therefore, we interpret enhanced Helios expression as a marker of functionally suppressive TREG.

CD4+ TREG cells have classically been defined by their expression of intracellular FOXP3, high cell surface expression of CD25 and low surface expression of CD127. However, CD25 and CD127 can be transiently modulated on CD4+ TEFF cells upon immune activation, and FOXP3 can be transiently expressed in TEFF cells upon T cell receptor (TCR) ligation (32, 33). Furthermore, although FOXP3 reliably identifies TREG in their resting, non-activated state, not all CD25+CD127low FOXP3+ TREG clones are functionally suppressive (24). Thus, traditional markers of TREG cells are not sufficient to identify functional and dysfunctional TREG phenotypes. Differential expression of Helios, a transcription factor of the Ikaros family, has been shown to reliably distinguish suppressive Helios+FOXP3+ TREG from non-suppressive Helios-FOXP3+ TREG clones (25). However, CTVlow CD4+FOXP3+Helios+ TREG did not vary significantly during the early, late and maintenance phases of CM-OIT in our study, indicating that Helios may not be sufficient to identify allergen-specific TREG.
Next, we sought to evaluate CD137 (4-1BB), a TREG co-stimulatory receptor and a direct target of FOXP3, which has lately been identified as a robust marker of recently activated, antigen-specific, functionally suppressive iTREG (27). Since effective TREG suppression is antigen-specific, we hypothesized that successful CM-OIT would correlate with the expansion of casein-specific FOXP3+Helios+CD137+ TREG cells (CD137+ TREG) rather than with polyclonal TREG activation or a decrease in allergen-specific TEFF. In keeping with this hypothesis, we observed that proliferating CD137+ TREG significantly increased during the early, late and maintenance phases of CM-OIT. Moreover, we found that the induction of CD137+ TREG correlated with an increase in the frequency of TEFF cells with a Th1 phenotype and with a modest increase in the Th1/Th2 ratio, suggesting that CD137+ TREG suppress Th2 immune responses in CM-OIT. The negative correlation between the frequencies of CD137+ TREG cells and the number of escalation days, and the finding that individuals with higher frequencies of CD137+ TREG cells during the M phase needed less time to reach maintenance, suggest that CD137+ TREG may be useful for predicting the time to reach maintenance during CM-OIT.

To ensure that casein tolerance was possibly driven by CD137+ TREG induction rather than by a decrease in antigen-specific TEFF cells, we compared proliferative TEFF responses at each CM-OIT timepoint. Using CD154 as a marker of recently activated, antigen-specific TEFF cells (27, 28), we found no significant difference in proliferating CD4+FOXP3-Helios-CD154+ TEFF cells (CD154+ TEFF) throughout the study period.

Since higher levels of FOXP3 and Helios expression have been associated with increased suppressive potency and stability of the TREG phenotype (25), we sought to determine the differential expression of these two markers on CD137+ and CD137- TREG cells. Indeed, casein-specific CD137+ TREG cells exhibited a higher level of FOXP3 expression than their CD137- counterparts at each timepoint, whereas Helios was only differentially expressed between CD137+ TREG and CD137- TREG at the M phase. These observations suggest that circulating casein-specific CD137+ TREG cells acquire a stable and more suppressive phenotype throughout CM-OIT, and that Helios expression, thus far not described in the OIT literature, may be utilized as a marker of successful OIT.

In summary, we have performed an exploratory CM-OIT study and identified a potential clinically useful biomarker to identify patients most likely to achieve successful CMP tolerance and sustained unresponsiveness during CM-OIT. This remains a pilot study, and our conclusions will be validated in larger cohorts of patients, which will include additional age-appropriate non-allergic controls and patients having failed CM-OIT. The clinical utility of CD137+ TREG quantification during CM-OIT merits further investigation and validation in larger cohorts.
FIGURE 6 | (E, G) The induction of CD137+ proliferative TREG correlated with an increase in CD4+IFN-γ+ TEFF cells from culture with casein and with the ratio of CD4+IFN-γ+ TEFF to CD4+IL-4+ TEFF during the Early and Late phases. (F) There was also a trend toward a correlation between CD137+ proliferative TREG and CD4+IL-4+ TEFF cells from culture with casein, although it did not reach significance. (H) There was a negative correlation between the proportion of CD137+ proliferative TREG at the Early phase and the number of escalation days to maintenance. (I, J) There was also a trend toward a correlation between the proportions of CD137+ proliferative TREG at the Late and Maintenance phases and the number of escalation days to maintenance, albeit not significant. Each symbol represents one subject. Of the 7 patients, 5 patients from the E and L phases are included in the analyses in panels (E-G). Yellow symbols represent data at the Early phase, blue symbols data at the Late phase, and red symbols data at the Maintenance phase. P-values in (A-D) were determined using a one-way ANOVA with Dunn's multiple comparisons and in (E-J) with a Pearson correlation (*p < 0.05; n.s., not significant). Bars represent the mean ± s.d.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/supplementary material. Further inquiries can be directed to the corresponding author.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the IRB of the McGill University Health Centre. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.

AUTHOR CONTRIBUTIONS

YZ, LL, GG, DK, SB, NP, DL, T-AA-A, and BT: sample processing, experimental design, assay development and execution, data analysis/reporting, and/or figure/manuscript preparation. MB, BM, and CP: trial design, experimental design, data analysis and reporting, figure preparation, and manuscript preparation. All authors contributed to the article and approved the submitted version.