Microbiota-derived short-chain fatty acids do not interfere with SARS-CoV-2 infection of human colonic samples

ABSTRACT

Microbiota-derived molecules called short-chain fatty acids (SCFAs) play a key role in the maintenance of the intestinal barrier and in the regulation of the immune response during infectious conditions. Recent reports indicate that SARS-CoV-2 infection changes microbiota composition and SCFA production. However, the relevance of this effect is unknown. In this study, we used human intestinal biopsies and intestinal epithelial cells to investigate the impact of SCFAs on infection by SARS-CoV-2. SCFAs did not change the entry or replication of SARS-CoV-2 in intestinal cells. These metabolites had no effect on intestinal cells' permeability and presented only minor effects on the production of antiviral and inflammatory mediators. Together, our findings indicate that the changes in the microbiota composition of patients with COVID-19 and, particularly, in SCFAs do not interfere with SARS-CoV-2 infection in the intestine.

Introduction

COVID-19 is a pandemic disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), characterized as a respiratory disorder with clinical presentations ranging from no symptoms to severe pneumonia and death. 1,2 After an incubation period, most patients with COVID-19 develop mild-to-moderate disease with typical symptoms including fever, chills, fatigue, dry cough, sore throat, sputum production, shortness of breath and headache. 2,3 In addition, recent studies showed that 17.6% of patients with COVID-19 present gastrointestinal symptoms, which occurred more frequently in severe patients. 3,4 Interestingly, the presence of SARS-CoV-2 in fecal samples was associated with changes in gut microbiota composition. 5 Numerous experimental and clinical observations suggest that the gut microbiota plays a key role in the pathogenesis of sepsis and acute respiratory distress syndrome, suggesting that SARS-CoV-2 might also have an impact on the gut microbiota and vice versa. [5][6][7] Loss of gut bacterial diversity leading to dysbiosis is associated with the development of many diseases. [5][6][7] This also seems to be the case for SARS-CoV-2 infection. A recent study reported an increase of opportunistic bacteria such as Collinsella aerofaciens, Collinsella tanakaei, Streptococcus infantis and Morganella morganii and a reduction of Parabacteroides merdae, Bacteroides stercoris, Alistipes onderdonkii and Lachnospiraceae bacterium 1_1_57FAA in patients with a high SARS-CoV-2 infectivity signature compared to patients with low or no SARS-CoV-2 infectivity. 5 Functionally, this change in microbiota composition was associated with a reduction in short-chain fatty acid (SCFA) production and with increased synthesis of nucleotides and amino acids and increased carbohydrate metabolism. Another study pointed to a reduction of bacterial groups (e.g., Faecalibacterium, Fusicatenibacter and Eubacterium hallii) involved in the production of the SCFA butyrate in fecal samples of COVID-19 patients compared to healthy controls. 6 Thus, there is evidence that the presence and/or infection of SARS-CoV-2 in the gut is associated with changes in the microbiota, including a reduction in SCFA-producing bacteria. However, no study has addressed whether this effect on SCFAs is relevant for the infection. Butyrate and other SCFAs are key molecules mediating host-microbiota interactions.
Previous studies reported the ability of these molecules to regulate the production of antimicrobial peptides and mucus, intestinal permeability and mucosal immune system activation. 8 The gastrointestinal tract therefore deserves special attention, in particular regarding the potential role of the gut microbiota in the development and management of this disease. We hypothesized that a reduction in SCFA production would affect SARS-CoV-2 entry and the response of intestinal cells.

Treatment with SCFAs does not affect the entry of SARS-CoV-2 or the response of the intestinal tissue to infection

We used human colon biopsies obtained from healthy individuals to investigate the interaction between SARS-CoV-2, microbiota-derived metabolites and intestinal cells (Table 1). Colonic biopsies are an attractive model for this type of study because they allow us to analyze the impact of infection in a well-preserved tissue architecture that includes the colonic epithelium and its lamina propria. To reduce the effect of technical and biological variation associated with the tissue, we used samples obtained from the same individual that were treated and infected ex vivo under the same experimental conditions. Biopsies were maintained in culture for up to 7 h and presented normal histological features after this period of incubation. Immunofluorescence staining revealed that the cells from colonic biopsies expressed the SARS-CoV-2 receptor, the angiotensin-converting enzyme-2 (ACE2, in red), and were efficiently infected by the virus, as shown by the spike staining (green) (Figure 1b). This latter finding was confirmed by the measurement of viral load (Figure 1c). Colonic biopsies treated with different concentrations of SCFAs presented the same viral load as the control condition, indicating that these metabolites do not interfere with virus entry into cells (Figure 1c). Previous studies in human intestinal organoids infected with SARS-CoV-2 reported increased production of type I and III interferons (IFN), cytokines that are relevant for the antiviral response. 9-11 Therefore, we evaluated the expression of these cytokines and of inflammation-related genes in the colonic biopsies. We observed an increase of DDX58, the gene encoding the viral RNA sensor RIG-I (retinoic acid-inducible gene I), and of IFN beta in infected biopsies compared with noninfected ones (Figure 1d). When compared to the infected biopsies, we verified a significant reduction of DDX58 and of the type III IFN receptor, IFNLR1, in biopsies treated with SCFAs at the higher concentration (SCFAs-1, Figure 1d). We also observed a reduction in the expression of the serine protease TMPRSS2, a protein that is important for SARS-CoV-2 entry into target cells. 12 The expression of other antiviral and inflammatory genes was not modulated by the SCFAs (Figure 1d). We next investigated the effect of SARS-CoV-2 and SCFAs on isolated intestinal epithelial cells (Caco-2). For that, we used Caco-2 cells cultivated for 2-3 weeks in transwell inserts. Under this condition, cells differentiate and form a polarized monolayer, whose permeability/integrity can be measured by the transepithelial electrical resistance (TEER). In these experiments, we did not observe any effect of SARS-CoV-2 infection or of the SCFAs on the transepithelial resistance of Caco-2 monolayers (Figure 2c). We also measured the amount of virus released at both the apical and basolateral surfaces of infected cells and did not find any effect of SCFAs on these parameters (Figure 2a and b).
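As an aside for readers unfamiliar with the TEER readout, the short sketch below shows the conventional unit-area calculation for transwell inserts of the size used here (0.33 cm^2). The resistance values are hypothetical, and the blank-insert correction is standard practice for this assay rather than a detail reported in this paper.

```python
# Illustrative TEER calculation; resistance values are hypothetical.
MEMBRANE_AREA_CM2 = 0.33  # growth area of the 24-well transwell insert used above

def teer_ohm_cm2(r_monolayer_ohm: float, r_blank_ohm: float) -> float:
    """Unit-area TEER: blank-corrected resistance multiplied by membrane area."""
    return (r_monolayer_ohm - r_blank_ohm) * MEMBRANE_AREA_CM2

# e.g., 1500 ohm across a differentiated Caco-2 monolayer, 120 ohm for an empty insert
print(f"{teer_ohm_cm2(1500, 120):.0f} ohm*cm^2")  # ~455, consistent with a tight monolayer
```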
Taken together, our results indicate that SCFAs do not affect the entry or replication of SARS-CoV-2, or the intestinal cells' response to infection.

Discussion

Patients with severe forms of COVID-19 frequently manifest gastrointestinal symptoms such as diarrhea, vomiting and abdominal pain. [13][14][15]

[Figure 1 legend, displaced from the Results: Noninfected (NI) biopsies were used as negative controls of the experiments. Results are presented as mean ± SEM (n = 10 individuals/group). (d) Gene expression in colon biopsies infected or not with SARS-CoV-2 and incubated in the presence or absence of SCFAs. The expression of genes related to the entry of SARS-CoV-2 (TMPRSS2), inflammation (IL1b and TNF), virus recognition (DDX58) and response (the type III interferons IFNL2 and IFNL3 and their receptor IFNLR1; the type I interferons IFN beta and IFN alpha), and IFN target genes related to virus elimination (OASL) were analyzed by RT-qPCR. Results were normalized to the NI condition and are presented as mean (n = 9-12 individuals/group). *p < .05 compared to SARS-CoV-2.]

Moreover, gut microbiota composition is altered in most COVID-19 patients, and it is not known whether this could worsen the clinical course of the disease, nor whether microbiota modulation could help restore a balanced immune response against this viral infection. 16,17 Many studies have examined the effects of SCFAs in the treatment of infections, including viral airway infections. The consumption of a high-fiber diet or oral supplementation with acetate protected mice from infection by the respiratory syncytial virus (RSV) through GPR43 activation and IFN beta production in lung epithelial cells. 18 Butyrate, as well as treatment with a high-fiber diet, was shown to protect mice from influenza infection by modulating their immune response. 19 Treatment of vascular endothelial cells with SCFAs decreased the expression of VCAM-1 and ICAM-1, resulting in reduced adhesion of infected monocytes and virus transfer to the endothelium. 20 Acetate treatment during influenza infection was effective in reducing secondary bacterial pulmonary infections. 21 Based on this evidence gathered before the SARS-CoV-2 pandemic, many researchers suggested that reestablishment of endogenous SCFA production could be useful for the prevention and treatment of COVID-19. 16,17,22 However, it is worth mentioning that detrimental effects of SCFAs on virus infections have also been reported. A recent study demonstrated that butyrate increases cellular infection by H1N1 influenza A virus, reovirus and human immunodeficiency virus 1 (HIV-1). This effect was associated with suppression of specific antiviral interferon-stimulated genes. 23 Another study reported an exacerbation of the arthropathy induced by Chikungunya virus in mice after treatment with a high-fiber diet or butyrate. 24 Using colon biopsies from patients who were diagnosed with SARS-CoV-2 a few days after colonoscopy, it was possible to observe that intestinal cells are infected with the new coronavirus. 25 Other studies involving human intestinal organoid experiments confirmed the mechanism of viral entry into intestinal cells, as well as the molecular expression pattern associated with viral invasion in a context of intestinal inflammation (patients with inflammatory bowel diseases). 26,27 Differences in the expression of molecules related to viral entry depended on the analyzed intestinal segment, ileum or colon. 27
In the present study, treatment with a mixture of acetate, propionate and butyrate did not alter the viral load of intestinal biopsies or intestinal epithelial cells. These findings do not exclude the possibility that SCFAs have a significant effect on SARS-CoV-2 infection. The antiviral effects promoted by the microbiota and its metabolites may depend on interactions with different cell types, and further studies are needed to understand these mechanisms during SARS-CoV-2 infection. One of the characteristics of COVID-19 is the exacerbated inflammatory response that occurs in a second phase of the disease. Thus, one of the main lines of investigation being carried out around the world is to dissect how the infection occurs in each tissue and system, and how that tissue reacts to the presence of the infection, especially in patients who already have an inflammatory condition, such as obesity. 23,28,29 A study with intestinal and pulmonary epithelial cell lines showed that SARS-CoV-2 infection alters the expression of inflammatory cytokines and antiviral molecules such as IFNα and IFNβ in lung cells. Its findings suggested pre-activation of the IFN-I signaling pathway as a potential therapeutic and prophylactic approach for COVID-19. 30 In our study, treatment with SCFAs reduced the transcript levels of genes important for the detection of viral molecules and the control of viral entry and replication, such as RIG-I, TMPRSS2 and the IFNλ receptor. However, the viral load of SCFA-treated samples did not differ from that of nontreated infected biopsies, indicating that these effects are not sufficient or may be counteracted by other effects of SCFAs on these cells. Some limitations of our study should be noted, such as the small sample size and the lack of intestinal biopsies from patients with COVID-19. However, the use of human samples, even from noninfected patients, provides a relevant contribution toward establishing a potential role of SCFAs in this pandemic disease. Our results need to be validated in vivo, but they indicate that the changes in microbiota composition of patients with COVID-19 5,6 and, particularly, in SCFAs do not interfere with SARS-CoV-2 infection in the intestine. It is worth mentioning that SCFAs can also have systemic effects, which may be relevant for SARS-CoV-2 infection in different contexts. 31

Patient and sample selection

Left colon mucosa biopsies were collected from patients who underwent colonoscopy examination for diagnostic purposes and who presented no endoscopic abnormalities. All subjects were recruited at the Gastrocenter's Colonoscopy Unit of the Clinics Hospital of the University of Campinas (Unicamp) and were included in this study after having signed a written informed consent form. Table 1 shows the clinical and demographic characteristics of the 12 patients without comorbidities who participated in the study.

Culture of intestinal biopsies

Immediately after the mucosa biopsies were collected during the colonoscopy examination, they were washed and placed in culture. Culture of intestinal biopsy specimens was performed in RPMI-1640 medium (Sigma-Aldrich, Germany) without L-glutamine and supplemented with 10% fetal calf serum and an antibiotic/antimycotic mixture (Gibco Invitrogen).
The samples were divided into four conditions: noninfected (medium only); infected with SARS-CoV-2; and infected with SARS-CoV-2 and treated with short-chain fatty acids at one of two concentrations (SCFAs-1 [acetate 16 mM, propionate 4 mM and butyrate 2 mM] or SCFAs-2 [acetate 1.6 mM, propionate 0.4 mM and butyrate 0.2 mM]). The ratio of SCFAs (acetate, propionate and butyrate) used in the study was similar to that described in a previous study that measured these metabolites in fecal samples. 32 The concentrations of SCFAs were chosen based on experiments performed with Caco-2 cells in which we found that incubation for 24 h with SCFAs did not affect their viability. All infections were performed with 10^5 PFU of SARS-CoV-2 for 1 h at room temperature (20-25 °C) with continuous and gentle agitation. After viral adsorption, samples were washed three times with 1x PBS (0.15 M) and incubated for 6 h at 37 °C in a 5% CO2 atmosphere under the corresponding media conditions. The experimental design of the culture and the different treatments is illustrated in Figure 1a.

Cell culture

Human colon cancer cells (Caco-2) were seeded at 2 × 10^4 cells per insert into 24-well transwell plates (0.4 µm polycarbonate membrane with 0.33 cm^2 area, Costar). Cells were maintained in Dulbecco's modified Eagle medium (Gibco) supplemented with 20% fetal bovine serum (FBS) and 1% penicillin-streptomycin at 37 °C in a 5% CO2 atmosphere for up to 21 d, with changes of medium every 2 d. The medium volume was 0.2 mL in the upper chamber and 0.5 mL in the basal chamber. After 21 d of differentiation, cells were pretreated for 1 h with SCFAs (SCFAs-1 [8 mM acetate, 2 mM propionate and 1 mM butyrate] or SCFAs-2 [4 mM acetate, 1 mM propionate and 0.5 mM butyrate]) or medium alone. Cells were then infected at an MOI of 1 at room temperature for 1 h with continuous and gentle agitation. After viral adsorption, the SARS-CoV-2 inoculum was removed, cells were washed three times with 1x PBS and then maintained in the corresponding media. Transepithelial resistance was measured immediately after infection (time 0) and at 24 and 48 h post-infection, as previously described. 33,34

RNA extraction and quantification

Total RNA was extracted from colonic mucosa samples and culture supernatants using the RNeasy Mini Kit (Qiagen, USA) according to the manufacturer's instructions. For qPCR analysis, RNA purity and concentration were determined by UV spectrophotometry at 260 nm using the BioTek Eon Microplate Spectrophotometer and Gen5 v2.0 software.

Viral load quantification

Viral RNA was detected and quantified by the Charité protocol of one-step RT-qPCR. 34

Immunofluorescence

Colon biopsies were fixed in 4% paraformaldehyde for 24 h and then embedded in paraffin. Five-micrometer-thick sections were prepared for immunofluorescent detection of ACE2 and the viral spike protein. Samples were deparaffinized by two 10-min incubations with xylol, followed by an incubation with a xylol:ethanol (1:1) solution for 10 min, then incubations with decreasing concentrations of ethanol (100%, 95%, 85% and 70%, all diluted in DEPC-treated water) for 5 min each, finishing with DEPC-treated water for 5 min and two 5-min washes in 1x PBS, pH 7.4. To avoid autofluorescence, the tissues were treated with 2% H2O2 in methanol for 30 min, washed with PBST, and treated with 0.1 M glycine in PBST for 10 min at room temperature.
The samples were then washed and treated with a 1% bovine serum albumin (BSA) solution in PBST for 30 min to block nonspecific epitopes. Tissues were incubated with SARS-CoV-2 Spike S1 Antibody (HC2001) (GenScript - A02038) and ACE2 Antibody (Rheabiotec - IM-0060), both diluted 1:100 in 1% BSA in PBST, overnight at 4 °C in a humid box. The slides were then washed and incubated with anti-human IgG Alexa Fluor 488 (ThermoFisher - A11013) and anti-rabbit IgG Alexa Fluor 594 (ThermoFisher - A21207), both diluted 1:500 in 1% BSA in PBST, for 2 h at room temperature in a humid box, protected from light. The samples were washed again, incubated with DAPI (Santa Cruz Biotechnology - SC-3598) diluted 1:1000 in 1% BSA in PBST for 5 min at room temperature protected from light, and mounted in an aqueous mounting solution for confocal imaging. Microscopy images were acquired with a Zeiss LSM880 with Airyscan on an Axio Observer 7 inverted microscope (Carl Zeiss AG, Germany) with a C Plan-Apochromat 63x/1.4 Oil DIC objective at 4x optical zoom. Prior to image analysis, raw .czi files were automatically processed into deconvolved Airyscan images using Zen Black 2.3 software. For DAPI, conventional confocal images were acquired using the 405-nm laser line for excitation with the pinhole set to 1 AU.

Statistical analysis

Analyses were performed using GraphPad software 8.0 (San Diego, CA, USA). Results are presented as mean ± standard error of the mean (SEM), and "n" represents the number of samples, as indicated in the corresponding figure legend. Differences were considered significant for p < .05. Results were compared by the non-parametric Mann-Whitney test.
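As a worked illustration of the analysis pipeline described above (relative expression normalized to the noninfected condition, followed by a Mann-Whitney comparison), here is a minimal Python sketch. The paper used GraphPad; this is a hedged equivalent in which the 2^-ddCt normalization, the choice of housekeeping gene, and all Ct values are hypothetical.

```python
# Minimal sketch of the gene-expression comparison described above; all values hypothetical.
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical Ct values: target gene (e.g., DDX58) and a housekeeping gene per biopsy
ct_target_infected = np.array([24.1, 23.5, 24.8, 23.9])
ct_house_infected  = np.array([18.0, 17.8, 18.2, 17.9])
ct_target_scfa1    = np.array([25.6, 26.0, 25.2, 25.8])
ct_house_scfa1     = np.array([18.1, 17.9, 18.0, 18.2])
ddct_ni = 6.9  # mean delta-Ct of the noninfected (NI) calibrator condition

def fold_over_ni(ct_target, ct_house, ni_dct):
    """Relative expression by the 2^-ddCt method, normalized to the NI condition."""
    return 2.0 ** -((ct_target - ct_house) - ni_dct)

fc_infected = fold_over_ni(ct_target_infected, ct_house_infected, ddct_ni)
fc_scfa1 = fold_over_ni(ct_target_scfa1, ct_house_scfa1, ddct_ni)

u, p = mannwhitneyu(fc_infected, fc_scfa1, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p:.4f}")  # significant if p < .05
```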
RC J0311+0507: A Candidate for Superpowerful Radio Galaxies in the Early Universe at Redshift z=4.514

A strong emission line at 6703 Å has been detected in the optical spectrum of the host galaxy (R=23.1) of the radio source RC J0311+0507 (4C+04.11). This radio galaxy, with a spectral index of 1.31 in the frequency range 365-4850 MHz, is one of the ultrasteep-spectrum objects from the deep survey of a sky strip conducted with RATAN-600 in 1980-1981. We present arguments in favor of the identification of this line with Lyα at redshift z=4.514. In this case, the object belongs to the group of extremely distant radio galaxies of ultrahigh radio luminosity (P_1400 = 1.3 × 10^29 W Hz^−1). Such power can be provided only by a fairly massive black hole (∼10^9 M_⊙) that either formed in a time less than the age of the Universe at the observed z (1.3 Gyr) or had a primordial origin.

Introduction

The radio source RC J0311+0507 (the RATAN Cold catalog; Parijskij et al. 1991, 1992) was discovered in 1980-1981 observations during the first deep survey of a sky strip with the RATAN-600 multifrequency complex (Berlin et al. 1981). The catalog included more than 1145 radio sources with a flux density limit higher than 10 mJy at 7.6 cm. The RATAN-600 observations at various azimuths allowed a positional accuracy of ∼15″ to be obtained. This accuracy is not enough for deep optical identifications, but is quite sufficient for deep VLA observations. The absence of catalogs with adequate sensitivity in those years made it difficult to identify the sources with known objects. The first catalog of high positional accuracy with a sensitivity down to 200 mJy was the UTRAO (Texas) catalog at ∼80 cm (Douglas et al. 1996). Douglas kindly provided us data on our surveyed area long before the publication of this catalog. This allowed us to identify at least the objects of the RC catalog with fairly steep spectra; about one-third of the sources were of this kind, RC J0311+0507 among them. Since its spectral index (S ∼ ν^−α) is α ≈ 1.2, it was included in the subsample of candidates for distant objects of the Big Trio project (Goss et al. 1994; Kopylov et al. 1995; Parijskij et al. 1999; Verkhodanov et al. 2001). Note that RC J0311+0507 is a fairly bright low-frequency radio source. It was first detected at a frequency of 85 MHz (Mills et al. 1958) and was then reliably recorded at 178 MHz (Gower et al. 1967) as the object 4C+04.11 with a flux density of 5.5 Jy. Röttgering et al. (1994) independently selected RC J0311+0507 for inclusion in their sample of objects with steep radio spectra (as B0309+049). However, they subsequently did not study it in the optical range, possibly because of an uncertain spectral index. RC J0311+0507 also closely corresponds in its parameters to the objects of the sample of steep-spectrum radio sources by Tielens et al. (1979); analysis of that sample revealed the then most distant radio galaxy, 4C+41.17 (z=3.80, Chambers et al. 1990).

[Figure 1: Radio spectrum of RC J0311+0507 constructed from the data accumulated by 2005. The spectral index in the frequency range 365-4850 MHz is 1.31. The spectrum flattening toward the low frequencies suggests that the components of the radio source are compact.]

We have studied the object on the VLA with a resolution of 1″.4 at 21 cm as part of the Big Trio project (RATAN-VLA-BTA). The radio source turned out to be compact, about 2″, with an AD (Asymmetric Double) structure.
The VLA archival data with a resolution of 0″.4 at 6 cm show the presence of a third, very weak component of small angular size. Below, we provide the main data on this object, including the radio data and optical studies with the 6-m BTA telescope of the Special Astrophysical Observatory (SAO): identification, multicolor photometry, and spectroscopy. Figure 1 shows the radio spectrum of RC J0311+0507 with all of the available measurements collected in the CATS database (Verkhodanov et al. 1997), including the RATAN-600 multifrequency data. We also added the measurements at 38 and 178 MHz from Williams et al. (1968). The curve in Fig. 1 corresponds to the equation

log S = 1.423 + 0.212 log ν − 0.241 (log ν)^2,   (1)

which was obtained by fitting a parabola to all measurements (31 data points). In the frequency range 365-4850 MHz, the object has an ultrasteep spectrum (α = 1.31), which is the first signature of a high redshift. The increase in the spectral slope from low to high frequencies is also a characteristic property of distant compact powerful radio sources. The VLA observations carried out by W. M. Goss in June 1995 with a resolution of 1″.4 at 1425 MHz provided evidence for a compact two-component structure (Parijskij et al. 1996). Based on their VLA observations with a similar resolution, Röttgering et al. (1994) determined the radio source to be an extended one with a size of 1″.6. With the kind permission of B. Burke, we found in the VLA archive the image of this object obtained by J. Hewitt in 1985 at 4860 MHz with a resolution of 0″.4, in which a linear triplet structure with a total angular size of 2″.8 is seen. Based on these data, we can classify RC J0311+0507 as a compact steep-spectrum (CSS) object. It is distinguished by a large flux density asymmetry (∼20 times) between the two extreme components, which is much more commonly observed in quasars than in radio galaxies.

Optical Identification

Based on the direct BTA image with an exposure time of 400 s at 2″ seeing, obtained in September 1995 with a 580×520 ISD015 CCD array (pixel size 0″.205×0″.154), we identified the radio source (Parijskij et al. 1996) with a faint galaxy (R≈22.9 in a 5″ aperture). The optical-to-radio luminosity ratio turned out to be standard for the population of luminous radio galaxies (McCarthy 1993; Parijskij et al. 1996). Figure 2 shows the optical object with the superimposed VLA 4860-MHz isophotes.

Multicolor Photometry

In November 1999, we performed further observations with the PMCCD instrument (a TX1024A array with 0″.206×0″.206 pixels). We obtained B, V, R, and I images with exposure times of 600, 1000, 400, and 1000 s, respectively, at ∼2″ seeing. The photometric measurements with a 5″ aperture, corrected for the extinction in the Galaxy (A_B=0.83, A_V=0.64, A_R=0.51, and A_I=0.37), yielded magnitudes of >24.9, 24.8±0.6, 22.6±0.15, and 22.3±0.4, respectively. The color characteristics are close to those expected for massive galaxies at z = 3-5, and the complete absence of the object in the B band does not contradict an emission cutoff beyond the Lyman 912 Å limit. The R-band size of the galaxy slightly exceeds its I-band size. This may suggest the presence of a hydrogen halo around the host galaxy, such as is commonly observed in distant radio galaxies. The presence of a halo can lead to a considerable increase in the object's size if the strong Lyα line falls not far from the passband maximum of the corresponding filter (see, e.g., RC J0105+0501; Soboleva et al.
2000, where the Lyα line significantly increases the V-band size of the object).

Spectroscopic Observations

In September and November 2004, we obtained BTA spectra of the host galaxy. The observations were carried out with the SCORPIO universal focal reducer, which was put into operation on the BTA late in 2003 as the main multipurpose, high-efficiency instrument (Afanasiev and Moiseev 2005). On November 8-9, 2004, we were able to obtain the best-quality spectrum of the host galaxy, with a total exposure time of 3600 s, in long-slit observations at 1″ seeing. The gr300G grating provided the entire spectral range accessible to the instrument (3800-9400 Å) with a resolution of ∼20 Å, which is commonly used to study objects of this type. The slit width was 1″ and the position angle was −11°. The spectrum was reduced using the SCORPIO data reduction and analysis software package (Afanasiev and Moiseev 2005) and is shown in Fig. 3. The size of the region of integration over the slit height was 1″.6. The absolute spectrum calibration was performed using the spectrophotometric standard Hiltner 600 and is given in units of 10^−17 erg cm^−2 s^−1 Å^−1. An intense line is seen at a wavelength of 6703 Å. The line flux is ≈5×10^−16 erg cm^−2 s^−1, the FWHM is ∼1500 km s^−1, and the equivalent width is ∼1000 Å. We interpret it as Lyα at a redshift of 4.514±0.001. The luminosity in this line is close to the (R-band) continuum luminosity, which is observed in steep-spectrum radio galaxies only for the Lyα line (McCarthy 1993). The alternative interpretation ([O II] 3727 Å at z = 0.8) is highly unlikely because of the complete absence of [O III] 5007 Å, which is usually twice as intense as [O II] 3727 Å for this population of objects. The identification with Lyα is consistent with the weakness of other lines falling into the spectral range studied, of which only the C IV 1549 Å line is detected, with a signal-to-noise ratio of ∼2, at 10% of the Lyα intensity. The ratio of the continuum levels in 400 Å-wide intervals on the two sides of Lyα is ∼3. The lowering of the continuum at wavelengths shortward of Lyα is attributable to absorption by the Lyα forest and is in agreement with the data for quasars at z = 4.5 (Songaila 2004). In general, the spectrum of RC J0311+0507 is similar in its characteristics to the spectra of high-redshift radio galaxies (see, e.g., 8C 1435+63, z = 4.261, Fig. 1 in Spinrad et al. (1995)).

Discussion

Although z = 4.514 is considerably lower than the limiting redshifts detected to date for galaxies (Malhotra and Rhoads 2005; Stanway et al. 2003; Pello et al. 2004) and quasars (Fan et al. 2003; Walter et al. 2004), RC J0311+0507 is only the second luminous radio galaxy detected at a redshift higher than 4.5. Let us compare the main parameters of RC J0311+0507 with those of other radio galaxies at z > 4. Only seven such galaxies are known, and almost all of them have been studied more or less adequately. Table 1 successively lists the names of the radio galaxies, their redshifts, optical R (or I) magnitudes, infrared K magnitudes, 1400-MHz flux densities (NVSS; Condon et al.
1998) (except for the object VLA J123642+621331, for which the data were taken from Richards (2000)), two-frequency spectral indices α between the Texas survey (365 MHz) and the NVSS (with the exception of VLA J123642+621331, for which only 1.4- and 8.5-GHz measurements are available; Richards 2000), the largest angular sizes (LAS) in arcseconds, and the morphology of the radio galaxies in the standard notation (S - single, D - double, AD - asymmetric double, C - core, and E - extended). The last column gives references to the publications from which the redshifts, optical magnitudes, infrared magnitudes, and LAS of the radio sources were taken. In three cases (VLA J123642+621331, TN J1123-2154, and 7C 1814+670), only a weak Lyα line was detected; in the remaining cases, the Lyα line is very intense. The color data (after the subtraction of the Lyα contribution, ∼0.7 mag) are consistent with new models for the evolution of large galaxies.

[Figure 3: Optical spectrum of the host galaxy of the radio source RC J0311+0507. We identify the narrow, intense line at the center with Lyα 1216 Å. The dashed lines indicate the expected positions of the emission lines typical of distant radio galaxies. Other spectral features include the residual effect of strong atmospheric lines after the subtraction of the night-sky spectrum.]

Thus, for example, for the GALEV2 model (Bicker et al. 2004) with an assumed epoch of primary star formation at z = 5, the expected colors of the stellar population of an elliptical galaxy are given in Table 2. This table once again confirms that the high-redshift interpretation is correct. In its redshift, RC J0311+0507 is second only to the object TN J0924-2201 (z = 5.199), but it exceeds that source in radio luminosity. Having determined, by interpolation over the spectrum, a flux density of 3.5 Jy at 254 MHz (which corresponds to the emission frequency 1400 MHz at z = 4.514), we obtain the power of the radio source, 1.3×10^29 W Hz^−1 (for H_0 = 70 km s^−1 Mpc^−1, Ω_M = 0.3, and Ω_Λ = 0.7). RC J0311+0507 and 8C 1435+63 thus turn out to be similar in their parameters to superpowerful radio galaxies at z > 4, which exceed in luminosity Cyg A, the most powerful nearby radio galaxy, by a factor of ∼10. An ultrahigh radio luminosity is a signature of a supermassive black hole (M_bh ∼ 10^9 M_⊙) at the center of the host galaxy. However, in the standard model, the time available for its growth from ∼10 M_⊙ at the time of secondary ionization to M_bh ∼ 10^9 M_⊙ is only ∼0.5 Gyr. Therefore, the object can be of interest in connection with the problem of the age crisis (Cunha and Santos 2004; Loeb and Barkana 2001). Not all of the models for the formation of supermassive black holes admit such fast growth of their masses from several solar masses to 10^9 M_⊙; either severe constraints on the rate of their growth are needed, or one must accept the version of primordial (pregalactic) large black holes forming stellar systems around themselves that has often been discussed in recent years. The weakness of other lines in the spectrum (Fig. 3) suggests that the main emission in the Lyα line is attributable to a halo that is poorly enriched with He II (1640 Å) and C IV (1549 Å). The upper limit on He II and C IV obtained by Dawson et al. (2004) for distant Lyα-emitting galaxies is 15% (our value is 10%).
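The numbers in this section can be checked with a few lines of code. The sketch below is a hedged illustration, not the authors' pipeline: it evaluates the parabolic fit of Eq. (1) (assuming ν in MHz and S in Jy, which is consistent with the quoted 5.5 Jy at 178 MHz and 3.5 Jy at 254 MHz), recovers the 365-4850 MHz spectral index, the Lyα redshift from the 6703 Å line, and the rest-frame 1400-MHz power for the stated cosmology, using astropy's FlatLambdaCDM for the luminosity distance.

```python
# Hedged verification sketch; assumes Eq. (1) takes nu in MHz and returns S in Jy.
import numpy as np
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

def S_jy(nu_mhz):
    """Flux density from the parabolic fit, Eq. (1)."""
    lg = np.log10(nu_mhz)
    return 10 ** (1.423 + 0.212 * lg - 0.241 * lg**2)

# Two-point spectral index over 365-4850 MHz (the paper quotes 1.31 from the data points)
alpha = -np.log10(S_jy(4850) / S_jy(365)) / np.log10(4850 / 365)

# Lya redshift from the observed 6703 A line
z = 6703.0 / 1215.67 - 1  # ~4.514

# Rest-frame 1400 MHz is observed at 1400/(1+z) ~ 254 MHz; the fit gives ~3.5 Jy there
S_254 = S_jy(1400 / (1 + z))

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)  # flat, so Omega_Lambda = 0.7 follows
d_L = cosmo.luminosity_distance(z).to(u.m)
P = (4 * np.pi * d_L**2 * S_254 * u.Jy / (1 + z)).to(u.W / u.Hz)

print(f"alpha(365-4850) = {alpha:.2f}, z = {z:.3f}")
print(f"S(254 MHz) = {S_254:.2f} Jy, P_1400 = {P:.2e}")  # ~1.3e29 W/Hz
```

Running this reproduces the paper's values to within rounding: α ≈ 1.3, S(254 MHz) ≈ 3.5 Jy, and P_1400 ≈ 1.3×10^29 W Hz^−1.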
Therefore, we believe that their conclusion about the primordial nature of a hydrogen gas halo not enriched by nuclear reactions in stars of the host galaxy or in population III stars (z > 10-30) (Loeb and Barkana 2001) is also applicable to RC J0311+0507.

Conclusions

The goal of this paper is to draw the attention of astronomers who are interested in objects of the early Universe to the source RC J0311+0507.
Ribosome-associated ncRNAs: An emerging class of translation regulators

Accumulating recent evidence has identified the ribosome as a binding target for numerous small and long non-protein-coding RNAs (ncRNAs) in various organisms of all 3 domains of life. It therefore appears that ribosome-associated ncRNAs (rancRNAs) are a prevalent, yet poorly understood class of cellular transcripts. Since rancRNAs are associated with arguably the most central enzyme of the cell, it seems plausible to propose a role in translation control. Indeed, first experimental evidence on small rancRNAs has been presented, linking ribosome association with fine-tuning of the rate of protein biosynthesis in a stress-dependent manner.

Keywords: non-coding RNA, translation control, lncRNA, ribosome

Translation Regulation by Non-coding RNAs

Gene expression is a complex and multistep cellular process, where transcription, mRNA export, mRNA degradation, translation, and protein turnover rates represent the major regulatory hubs. 1 Studies measuring the transcriptomes and proteomes of mammalian cells in parallel demonstrated that, for the vast majority of protein-coding genes, the transcript levels do not reflect the actual protein levels. Although the correlation is higher than initially reported, 2 these new data highlight that mRNA levels do not represent protein levels, and most of the differences can be explained by translation regulation control mechanisms. 3 The same is also true for prokaryal organisms, where no correlation between mRNA and protein copy numbers could be found using a single-cell approach in Escherichia coli. 4 Regarding the observation that the transcriptome does not entirely correlate with the proteome, 1,5 the term 'ribonome' was proposed. 3 The 'ribonome' is defined by the total cellular RNA content and its regulatory factors, including ribosomes and their regulatory non-coding RNAs (ncRNAs). Translation control utilizing structural features and regulatory sequences within the untranslated regions (UTRs) of messenger RNAs (mRNAs) on the one hand, and protein-based targeting of translation initiation on the other hand, are reasonably well understood mechanisms. 6 Since Ambros and coworkers discovered the first micro RNA (miRNA) in 1993, ncRNAs have come into the focus of translational control. 7,8 Shortly thereafter, miRNAs turned out to be a widespread family of endogenous ncRNAs, processed from larger hairpin-structured precursors to a length of ≈22 nucleotides (nt). mRNAs are the targets of miRNAs loaded on the RISC complex, to which they bind by imperfect base-pairing, leading to mRNA decay, translational repression, or sequestration of mRNAs to specific cellular compartments. 9,10 In addition to miRNAs, genome-encoded small interfering RNAs (endo-siRNAs) have been described in a variety of multicellular organisms. 11 While these siRNAs are biochemically indistinguishable from miRNAs, they differ in their origin and mode of action. 12 siRNAs are usually cis-encoded and processed from long double-stranded RNAs, and are also loaded onto the RISC complex to be functional. 13 Generally, the 21-23 nt long siRNAs bind to mRNAs by perfect base-pairing and thereby trigger endonucleolytic mRNA cleavage and degradation. 14 Analogous to miRNAs and siRNAs in eukaryotes, bacterial antisense RNAs have been shown to be the main ncRNA regulators of translation. In general, antisense RNAs can be clustered into 2 families, the cis- and trans-acting small regulatory RNAs (asRNAs). 15
In the case of trans-encoded antisense RNAs, multiple mRNAs are targeted via imperfect base-pairing. In contrast, cis-encoded antisense RNAs, derived from the opposite strand of the same genomic region, accomplish translation repression of their target mRNAs by perfect complementarity. 15 All of the above-mentioned ncRNA translation regulators (miRNAs, siRNAs, asRNAs) share one common feature: they all target mRNAs. This restricts regulation of protein synthesis typically to specific target messages and thus allows fine-tuning of gene expression in time and space for a defined subset of mRNA transcripts. However, is it also possible to regulate the ribosome, the central enzyme of the translation machinery, directly with ncRNAs? By targeting the ribosome, RNA molecules would allow fast and direct regulation of protein production. Such a rapid response is important under sudden environmental changes and allows the required massive reprogramming of the gene expression pattern. 16 Conventional signaling pathways, involving the synthesis, degradation or modification of protein factors, are comparably time and energy consuming.

A Hitchhiker's Guide to the Ribosome

With the notable exceptions of the bacterial transfer-messenger RNA (tmRNA) and the universally conserved signal recognition particle (SRP) RNA, all functionally characterized ncRNAs capable of regulating protein biosynthesis target the mRNA rather than the ribosome directly. This is unexpected given the central role the ribosome plays in cell metabolism and the assumption that the ribosome evolved in the 'RNA world', where it likely learned to receive regulatory input from non-proteinous co-factors. Thus it is conceivable that such ribosome-bound ncRNAs have survived the evolutionary transition from the 'RNA world' to contemporary biology but have so far escaped detection in transcriptome screens. The kinetic and energetic advantage of ribosome-bound ncRNA translation regulators over protein sensors would be the immediate availability and biological functionality of the ncRNA upon changing environmental conditions, without the need for prior production of a costly regulatory polypeptide. While initially ribosome-bound ncRNAs and ncRNA fragments were serendipitously found in mRNA-based RNA-seq approaches as 'contaminants', [17][18][19] recently more focused studies on the ribosomal ncRNA interactome have been conducted (refs. 20-24 and our unpublished data). A plethora of small and long ncRNAs has been identified as enriched in the polysomal and sub-polysomal fractions, thus emphasizing their putative roles in translation control. First experimental data support the view that these ncRNA entities do not represent passive hitchhikers of the translation machinery but rather what can be called an emerging class of non-coding ribo-regulators of protein biosynthesis. 22,24,25

Ribosome-bound small ncRNAs

In our lab we performed targeted transcriptome screens for ribosome-associated ncRNAs (rancRNAs) that potentially regulate protein biosynthesis. To this end, we applied numerous environmental stress conditions to various model systems spanning all 3 domains of life, followed by ribosome preparation, small RNA isolation, and finally RNA-seq analyses. By this approach we have picked up thousands of different small RNA molecules in the size range between 20 and ≈300 nt (refs. 21,22, and our unpublished data).
The RNAs either originate from intergenic regions of the genomes, and thus represent so-far unrecognized ncRNA genes, or they are processed out of functional precursor transcripts such as mRNAs, tRNAs, snoRNAs, SRP RNA, and rRNAs. Post-transcriptional RNA cleavage events have been demonstrated to further expand the spectrum and functionality of transcriptomes. 26,27 In-depth analyses of the fate of these processing products are largely lacking, or are restricted to investigations of RNAi-related trans-silencing activities. 26 The rancRNAs in our screens were not only processed from specific sites of the parental RNA, but also showed stress-specific expression or ribosome association. 21,22,24 Some of these ncRNAs are able to inhibit protein production on the global scale, 22,24 while others evidently have a stimulating effect on translation (our unpublished data) (Fig. 1). Two ribosome-bound ncRNAs that were investigated in more detail originate from the TRM10 open reading frame in S. cerevisiae, 24 or from the 5' parts of valine and alanine tRNAs of the halophilic archaeon Haloferax volcanii (ref. 22 and unpublished data). These ncRNAs downregulate protein synthesis on a global level by interacting with the large or small ribosomal subunit, respectively. It is important to note that the modes of action of these 2 examples differ: whereas the tRNA-derived fragment of H. volcanii competes with mRNA binding to the small ribosomal subunit, the ncRNA originating from the S. cerevisiae TRM10 mRNA interferes with P-tRNA occupancy (our unpublished data). It was shown that these regulatory events are stress-dependent and occur quickly in response to sudden environmental changes. This highlights the power of ribosome-bound ncRNAs for rapid global translation attenuation. In S. cerevisiae we could demonstrate that a ribosome-bound ncRNA is needed for rapid shutdown of global translation and efficient growth resumption under hyperosmotic conditions. 24 Evidently, this fast and global attenuation of metabolic activity as a consequence of high-salt stress is crucial to open a time window in which stress-specific adaptation programs can be established. Both the mRNA-derived fragment in yeast and the tRNA-derived fragments in H. volcanii seem to inhibit the translation initiation process. However, there is no reason to assume that translation initiation is the sole step that can be regulated by small rancRNAs. Indeed, certain ncRNA candidates appear to specifically interfere with the elongation phase of protein biosynthesis (our unpublished data), but in principle it is conceivable that every sub-step of the translation cycle could be affected by ncRNA-mediated regulation. Besides regulating protein synthesis on a global level, rancRNAs also have the potential to target the translation of specific mRNAs (Fig. 1). Two prominent examples of the latter scenario are the bacterial tmRNA and the SRP RNA. tmRNA mediates a unique global quality-control system that combines translational surveillance with the rescue of stalled ribosomes. 28 tmRNA specifically recognizes and binds to ribosomes that have stalled on open reading frames (ORFs) due to the absence of stop codons or due to other unproductive pausing events. This ribosome-targeted ncRNA functions as both mRNA and tRNA, and thus enables ribosome recycling and, simultaneously, tagging of the incompletely translated protein for degradation. 29
The second example of a well-known mRNA-specific rancRNA is the SRP RNA, which is an integral part of the abundant, cytosolic, and universally conserved SRP ribonucleoprotein (RNP) complex. The SRP is involved in targeting certain nascent polypeptides to protein-conducting membrane channels, enabling transport of nascent polypeptide chains across membranes as well as their integration into the membrane itself. 30 The ncRNA component of the complex (7SL RNA in eukaryotes and 4.5S RNA in bacteria) is thereby not only necessary for binding to the ribosome and recognition of the emerging peptide signal sequence, 31,32 but also for the assembly of the whole complex, and thus represents the functional core of the SRP. 33 Recently, an additional function for a 7SL-derived ncRNA has been proposed: the Alu RNA (a repetitive element originating from the 7SL RNA) has been suggested to deliver the protein dimer SRP9/14 to the small ribosomal subunit. 34 As a consequence, reduced polysome levels were observed, resulting in global translation inhibition. 35 The small rancRNAs are reminiscent of known low-molecular-weight effectors, such as antibiotics and other secondary metabolites, which have been shown to be capable of tuning the ribosome. 36 The 2 functionally studied small rancRNAs in yeast and H. volcanii have been demonstrated to target functional hotspots of the ribosome and possess Kd values in the low micromolar range, comparable to ribosome-targeted antibiotics. These examples of ribosome-bound small ncRNAs likely represent only the forefront of a so-far largely elusive class of translation regulators and can pave the way for novel mechanisms to be uncovered.

Ribosome-bound long ncRNAs

Long ncRNAs (lncRNAs) have recently received considerable attention in the field. This class of ncRNA molecules is vaguely defined by a length range of >200 nt to several kilobases. 37 Initially, lncRNAs were connected to chromosome dosage compensation in mammals (Xist RNA) and to the regulation of imprinting (e.g., HOTAIR RNA).

[Figure 1: Functional consequences of ncRNA-ribosome interactions. Short or long rancRNAs can target ribosomes (dark gray) either as naked molecules or as RNPs (light gray). As a consequence, global (e.g., the yeast TRM10 mRNA-derived 18-mer ncRNA) 24 or mRNA-specific translation regulation can occur. Loading ribosomes on ncRNAs can also affect the cellular stability and/or localization of rancRNAs (ref. 19 and references therein). Size and line thickness of the arrows on the right correspond to experimentally supported (thick and solid), predicted (thin and solid), or in principle possible (dotted) rancRNA functions.]

Recent years have witnessed a burst of lncRNA identification (primarily by bioinformatic means) and have expanded the scope of lncRNA functions to transcription enhancement, miRNA sponging, RNA turnover, and translation control roles (reviewed in refs. 37,38). These capabilities of lncRNAs in turn affect the regulation of crucial cellular processes such as embryogenesis, the cell cycle, maintenance of pluripotency, apoptosis, and differentiation. In general, lncRNAs share common features with mRNAs, such as transcription by polymerase II, splicing, 5'-capping, and 3'-polyadenylation. What distinguishes them from genuine mRNAs is the lack of reasonably long and evolutionarily constrained ORFs, the predominant nuclear localization, and the lack of encoded peptide fragments detectable in mass spectrometry studies.
Recent ribosome profiling, translating ribosome affinity purification (TRAP), and polysome profiling approaches, however, have presented evidence that some lncRNAs are in fact cytoplasmic and associate with ribosomal and poly-ribosomal fractions. 17,20,23,[39][40][41] This raises the possibility that lncRNAs are ribosome-bound to fine-tune the speed or specificity of the translation machinery (Fig. 1). It has been suggested that lncRNAs decorated with multiple ribosomes would be a means of titrating out and storing ribosomes for later use. 23 Alternatively, it has been proposed that lncRNAs pair sequence-specifically with certain mRNAs, promoting 42 or inhibiting 43 their translation. Strictly speaking, the latter antisense lncRNAs are in fact not genuine rancRNAs, since they associate with polysomes via their hybridization with mRNAs. On the other hand, polysomal lncRNAs might in fact co-sediment with translating ribosomes because they actually encode proteins or short peptides. The currently available data have fueled an intense discussion about the possible protein-coding potential of several lncRNAs 19,44 and called into question the annotation of non-protein-coding transcripts. 37,38 A study in zebrafish has shown that up to 45% of previously proposed lncRNAs possess detectable ORFs and therefore might actually represent genuine mRNAs, 19 thus stressing the fact that the protein/peptide-coding potential of lncRNAs has been underestimated. In support of this view, 2 very recent whole human proteome studies identified up to ≈450 peptides encoded in annotated lncRNAs, pseudogenes, and other transcripts of uncertain coding potential. 45,46 It therefore appears that conventional gene annotations have over-estimated the number of lncRNAs in vertebrate genomes. Nevertheless, the ribosome association of genuine lncRNAs implies an attractive possibility for translation control and awaits further investigation.

Conclusion & Outlook

Thousands of putative ribosome-associated ncRNAs (rancRNAs) have recently been identified, yet not all of them are expected to alter the performance of the ribosome. It is possible that some, or even the majority, of these RNA molecules are ribosome-bound through unspecific interactions and thus represent biological noise. Others might be ribosome-bound because the ribosome is in a state that can be referred to as a 'default translation initiation' mode (Fig. 1). This might represent the basal program of the translation machinery attempting to spuriously bind initiation codons on all encountered cytoplasmic RNAs. 23 On the other hand, first experimental evidence on some small rancRNA molecules has been presented that indeed suggests translation control functions. 22,24,25 These first examples demonstrated a very rapid global attenuation of protein production in a stress-dependent manner (Fig. 1). Furthermore, mRNA-specific effects on protein biosynthesis by rancRNAs have also been observed, with the well-characterized SRP RNA and tmRNA as role models for this subclass of ribo-regulators. Future work will need to clarify whether or not ribosome-bound ncRNAs are capable of delivering additional regulatory cues to the translation machinery and thus further expanding the known repertoire of translation regulation and ncRNA biology (Fig. 1).

Disclosure of Potential Conflicts of Interest

No potential conflicts of interest were disclosed.
Mutation of NLRC4 causes a syndrome of enterocolitis and autoinflammation

Upon detection of pathogen-associated molecular patterns, innate immune receptors initiate inflammatory responses. These receptors include cytoplasmic NOD-like receptors (NLRs), whose stimulation recruits and proteolytically activates caspase-1 within the inflammasome, a multi-protein complex. Caspase-1 mediates the production of interleukin-1 family cytokines (IL1FCs), leading to fever, and inflammatory cell death (pyroptosis). 1,2 Mutations that constitutively activate these pathways underlie several autoinflammatory diseases with diverse clinical features. 3 We describe a family with a previously unreported syndrome featuring neonatal-onset enterocolitis, periodic fever, and fatal/near-fatal episodes of autoinflammation caused by a de novo gain-of-function mutation (p.V341A) in the HD1 domain of NLRC4 that co-segregates with disease. Mutant NLRC4 causes constitutive interleukin-1 family cytokine production and macrophage cell death. Infected patient macrophages are polarized toward pyroptosis and exhibit abnormal staining for inflammasome components. These findings describe and reveal the cause of a life-threatening but treatable autoinflammatory disease that underscores the divergent roles of the NLRC4 inflammasome.

He gradually improved and was discharged after 9 weeks, remaining on cyclosporine; serum ferritin normalized but IL-18 remained markedly elevated (Fig. 1d,e). He subsequently reported a lifelong history of periodic fevers (>40 °C) provoked by physical and emotional stressors. During infancy he had an extended hospitalization for fever, vomiting, non-bloody diarrhea and failure to thrive; no specific diagnosis was made. His gastrointestinal symptoms resolved by one year. In adulthood, erythematous plaques and joint pains accompanied fevers; sero-negative psoriatic arthritis was diagnosed. The father's family history revealed healthy parents and two additional offspring: one without illness and a five-year-old half-brother (III.2) of the deceased infant (III.3) who also had periodic fevers (range 38.9-40 °C) beginning on day three of life, after circumcision.
A more severe febrile episode associated with vomiting, non-hemolytic anemia and acute renal failure occurred at 6 weeks of age (Supplementary Table 1). Later, his fevers were induced by over-exertion and accompanied by abdominal pain. A duodenal biopsy in the first year revealed villous blunting and intraepithelial lymphocytes (Fig. 1b, lower panel). Inflammatory markers including ferritin (516-856 ng/ml), C-reactive protein, soluble IL-2R and plasma IL-18 (11,520 to 24,129 pg/ml) were persistently elevated (Fig. 1d,e). NK cells, normal in number, were dysfunctional by chromium release assays (Supplementary Table 1). Clinical signs of chronic inflammation included short stature (<3rd percentile for height and weight) and recurrent myalgias. During the index case's acute illness, the possibility of a novel genetic syndrome was considered, leading to exome sequencing of the index case and his parents (see Methods). Clinical features suggesting hemophagocytic lymphohistiocytosis led to examination of genes implicated in this syndrome; 11 no rare variants were identified (Supplementary Table 2). Upon the father's illness, 34 novel protein-altering variants (absent in dbSNP, 1000 Genomes, NHLBI and Yale exome databases) shared by the index case and his father were identified, including six occurring at positions invariant among orthologs (Supplementary Table 3). While none of these variants altered genes causing known inflammatory diseases, one was in NLRC4, which encodes a core inflammasome protein. This p.Val341Ala variant occurs within helical domain 1 (HD1), which provides a 'lid' to the ADP-bound nucleotide-binding domain (NBD) in the crystal structure of inactive NLRC4 (Fig. 2). Ligand binding normally opens this structure, leading to exchange of ATP for ADP, promoting oligomerization and inflammasome assembly. 12 Gain-of-function mutations in the NBD of the related protein NLRP3 cause constitutive NLRP3 inflammasome assembly, resulting in production of IL-1β, fever and a spectrum of autoinflammatory disorders, the cryopyrinopathies. [13][14][15] These diseases are clinically distinct from the disease in our family, as the cryopyrinopathies lack gastrointestinal pathology. 16 Evaluation of NLRC4 V341A in the extended family demonstrated that it occurred de novo in the affected father and co-segregated with the inflammatory disease (Fig. 1a). None of the other five novel variants at conserved positions showed co-segregation with disease or de novo mutation. The finding of a de novo mutation in NLRC4 that co-segregates with a consistent clinical syndrome and biomarkers of inflammasome activation provides strong evidence that NLRC4 V341A causes this syndrome (syndrome of enterocolitis and autoinflammation associated with mutation in NLRC4; SCAN4). We measured cell death-associated LDH release in the same 18-hour culture supernatants. SCAN4 macrophages released more LDH than healthy control macrophages (12.3% versus 4.7%, P<0.0001) (Fig. 4b). Addition of Z-YVAD-FMK, which inhibits the catalytic site of cleaved caspase-1, significantly reduced IL-1 family cytokine secretion but did not reduce cell death (Supplementary Fig. 3a,b). Thus, NLRC4 V341A is a gain-of-function mutation, eliminating the requirement for "signal 2" for activation of caspase-1 and production of IL1FCs. NLRC4 V341A also promotes pyroptosis independent of caspase-1 cytokine processing. We next infected LPS-primed healthy control or patient macrophages with either of two flagellated, T3SS-positive pathogens, S. typhimurium (strain SL1344) or P.
aeruginosa (strain PAKΔSTY), thus providing both 'signal 1' and 'signal 2' provocation. As anticipated, LPS-primed healthy control macrophages secreted abundant IL1FCs and initiated pyroptosis upon infection (Supplementary Fig. 4a,b). Responses were reduced when infected with P. aeruginosa strain PAKΔpopD, which lacks a functional TTSS (Supplementary Fig. 4a,b). In contrast, SCAN4 macrophages secreted significantly less IL1FCs upon infection with pathogenic strains, yet showed more cell death than healthy control macrophages (Supplementary Fig. 4a,b). These findings describe a previously unreported Mendelian autoinflammatory syndrome featuring periodic fever, neonatal-onset enterocolitis and high levels of IL1FCs, and demonstrate its causation by a gain-of-function mutation in NLRC4. Like the NLRP3 cryopyrinopathies14, SCAN4 is associated with constitutive activation of caspase-1 and production of IL1FCs. In the inhibited, ADP-bound state, Val341 of NLRC4 makes van der Waals contacts with side chains of an adjacent helix in the HD1 domain, comprising the 'lid' on the nucleotide-binding site. The decreased hydrophobicity of the Ala341 mutation may reduce this interaction, allowing more movement of helix α12 and promoting exchange of ATP for ADP, either by favoring the open conformation of NLRC4, or by disrupting the stabilizing interaction of His443 with the beta-phosphate of ADP (Fig. 2). Either possibility would promote ligand-independent activation of NLRC4. It is compelling that Canna et al. have identified an independent de novo mutation in NLRC4, p.Thr337Ser19. The identification of two de novo mutations in close proximity in the same gene that segregate with a novel clinical phenotype provides strong support for a causal relationship of the mutations to disease pathogenesis. SCAN4 is distinct from the NLRP3 cryopyrinopathies in its association with neonatal-onset enterocolitis. This may relate to NLRC4 being highly expressed in intestinal macrophages while NLRP3 is not20. It is interesting that the marked enterocolitis of each surviving SCAN4 patient resolved by one year of age. We speculate that this chronic inflammatory state may be exacerbated in the infant gut by constant "signal 1" provocation from newly acquired symbionts. As host-microbe interactions mature, a less pro-inflammatory microflora may account for reduced gut inflammation21. IL-1β-targeted drugs are approved for treatment of the NLRP3 cryopyrinopathies22-24. We expect IL-1β blockade will be similarly effective in SCAN4 patients. Although both surviving members in our family have presently declined interictal therapy, the complementary report by Canna et al. provides evidence for efficacy of IL-1 receptor blockade19. SCAN4 macrophages show high IL1FC secretion and increased cell death with "signal 1" despite the absence of "signal 2"; addition of "signal 2" frequently produces multiple ASC foci with increased cell death, despite blunted IL1FC secretion. One possible explanation is that mutant NLRC4 promotes traditional inflammasome assembly in the absence of "signal 2" provocation, but intracellular ligand binding promotes formation of ASC foci that lack activated caspase-1, resulting in smaller structures with impaired ability to produce cytokines. This proposal is supported by mice with mutated inflammasome components6,9,10, which demonstrate defective cytokine processing yet intact pyroptosis.
Modulating the balance between cytokine production and pyroptosis may determine the distinct states of subclinical autoinflammation, periodic fever, and fatal or near-fatal autoinflammation seen in SCAN4 patients. Research Subjects The study protocol was approved by the Yale Human Research Protection Program. Informed consent was provided by all participants or their legal guardians. Clinical data were abstracted from medical records. Tissues from biopsy and autopsy specimens were labeled using standard hematoxylin and eosin staining protocols or by immunohistochemical staining with an anti-CD163 antibody (Abcam). Genetic analysis DNA was prepared from venous blood samples of the index case and kindred members. Exome sequencing of the index case and his parents was performed by capture on the NimbleGen 2.1 Exome reagent followed by 74-base paired-end sequencing on the Illumina platform to high coverage (each targeted base was read by a mean of more than 80 independent reads in each subject) as previously described26. Sequences were aligned to NCBI Build 36 of the human genome, and SNV and indel calls were assigned quality scores (QS) using SAMtools and annotated for novelty (using the Yale, 1000 Genomes, and NHLBI exome databases), for impact on encoded proteins, and for conservation of variant position as previously described26. Variants were sought in genes implicated in hemophagocytic lymphohistiocytosis; none were identified (Supplementary Table 2). Thirty-four protein-altering variants that were absent in dbSNP, 1000 Genomes, NHLBI and Yale exome databases and that were shared by the index case and affected father were identified (Supplementary Table 3) and evaluated. Only one was in a gene known to play a role in activation of the innate immune system (NLRC4). The variants in NLRC4 and the other five genes harboring novel variants at completely conserved positions (ALK, DCC, FBXO4, KIF13B, and SLC7A6OS) were confirmed by PCR amplification followed by Sanger sequencing, and transmission through the complete pedigree was evaluated. The NLRC4 variant proved to be de novo in the affected father and perfectly co-segregated with the autoinflammatory syndrome in the pedigree, while the others were all transmitted from an unaffected grandparent of the index case and did not co-segregate with disease. The NLRC4 p.Val341Ala mutation has been deposited into the National Center for Biotechnology Information's ClinVar database (ClinVar accession # SCV000172282). Functional studies of monocyte derived macrophages CD14+ monocytes were purified from peripheral blood mononuclear cells of SCAN4 patients and healthy donor controls using anti-human CD14 magnetic beads (Miltenyi). Cells from the two living SCAN4 patients were used for all functional studies, whereas the number and relatedness of healthy donors used varied from experiment to experiment and was based upon same-day availability. Monocytes were differentiated to macrophages in RPMI containing 10% FBS and M-CSF (10 ng/ml) over 7 days27. 2 × 10^5 macrophages were cultured for 18 hours in culture media containing LPS (1 ng/ml) with or without Z-YVAD-FMK (Enzo Life Sciences) at 0.1-0.5 μM concentrations. Culture supernatants were collected and cells washed and re-cultured with LPS-free media before infection with P. aeruginosa or S. typhimurium. PAKΔSTY and SL1344 are flagellated strains that are type 3 secretion system (T3SS)-positive. PAKΔpopD is a Fla+ strain that does not express a functional T3SS.
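To make the variant-filtering logic of the genetic analysis concrete, the sketch below reproduces the described intersection-and-annotation steps in Python. The data structures, example variant key, and annotation flags are hypothetical illustrations, not the authors' actual pipeline, which operated on SAMtools calls against the databases named above.

```python
# Minimal sketch of the variant-filtering logic described above; the data
# structures and the example variant key are hypothetical, not real calls.

# Each call set maps a variant key to its annotations.
index_case = {
    "chr2:g.32224000T>C": {"gene": "NLRC4", "protein": "p.Val341Ala",
                           "novel": True, "conserved": True},
    # ... remaining calls omitted
}
father = {
    "chr2:g.32224000T>C": {"gene": "NLRC4", "protein": "p.Val341Ala",
                           "novel": True, "conserved": True},
    # ... remaining calls omitted
}

def candidate_variants(index_calls, father_calls):
    """Keep protein-altering variants shared by the index case and father
    that are absent from dbSNP/1000 Genomes/NHLBI/Yale (flagged 'novel')
    and fall at positions invariant among orthologs ('conserved')."""
    shared = set(index_calls) & set(father_calls)
    return [v for v in sorted(shared)
            if index_calls[v]["novel"] and index_calls[v]["conserved"]]

print(candidate_variants(index_case, father))  # -> ['chr2:g.32224000T>C']
```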
Construction and characterization of bacterial strains have been previously described6,28. Infected macrophage culture supernatants were collected after one hour. Secreted IL-1β and IL-18 were measured by ELISA (Millipore and MBL, respectively) in both the 18-hour and one-hour culture supernatants. Cell-free LDH was measured according to the manufacturer's protocol (Takara). LDH release in supernatants was normalized to LDH released from macrophages lysed with Triton X-100 (0.1%). Immunofluorescence microscopy of infected macrophages 2 × 10^5 macrophages, differentiated as above, were plated on glass coverslips. Cells were incubated with 2 μM biotin-YVAD-CMK for 30 minutes prior to infection with P. aeruginosa PAKΔSTY or ΔpopD or S. typhimurium SL1344 at a multiplicity of infection of 20 bacteria/cell for 1 hour. Cells were fixed with paraformaldehyde (4%), blocked with 1% fish scale gelatin (Sigma) in PBS + 0.1% TX-100, and stained with rabbit anti-ASC (AL177; AdipoGen) and 4',6-diamidino-2-phenylindole (Sigma). A streptavidin-Alexa Fluor 488 conjugate (Invitrogen) and an anti-rabbit Alexa Fluor 594 antibody (Life Technologies) were used for secondary staining. Macrophages were visually inspected for immunofluorescence using an Axiovert 200M microscope. Manual enumeration of ASC+ and YVAD+ macrophages was performed over 20 representative fields at 20x magnification. DAPI staining of bacterial DNA was used to confirm macrophage infection. High-detail magnification for phased images at 60X and 100X was performed on a Nikon Eclipse TE2000-S microscope. The V341A amino acid substitution is positioned within the HD1 domain of NLRC4. (a) A schematic representation of the NLRC4 protein with individual domains colored as follows: CARD in black, NBD in blue, HD1 in cyan, WHD in pink, HD2 in green, LRR in lilac. The location of the V341A substitution is displayed. (b) Mapping of Val-341 onto the crystal structure of murine NLRC4 in the ADP-bound state (PDB accession code 4KXF)12,25. The ribbon diagram excludes the N-terminal CARD domain, which was not included in its crystal structure. ADP is drawn as sticks, and the position of Val-341 is indicated with red spheres. The zoomed-in region (structure rotated 90° toward the reader) shows the position of Val-341 on α-helix 12. Neighboring hydrophobic residues within the HD1 (black outlines) and adjacent α-helices are numbered. NLRC4 V341A promotes spontaneous cleavage of pro-caspase-1 and ASC multimerization in HEK293 cells. (a) Increased cleavage of FLAG-procaspase-1 in HEK293 cells expressing NLRC4 V341A versus NLRC4 WT. Western blot at top of figure shows results of blotting for NLRC4, pro-caspase-1 (p45) and its p35 and p10 cleavage products as well as actin controls in cells expressing constructs shown below, as described in Methods. Levels of p35 and p10 are normalized in each case to the level of pro-caspase-1. Alternate analysis normalizing p35 and p10 to levels of actin and NLRC4 also yielded statistically significant differences between lines transfected with NLRC4 WT and NLRC4 V341A. Mean and standard deviation of four independent transfections is shown. A two-sided Student's t-test was used to determine statistical significance. (b) Spontaneous ASC multimerization (white arrows) in HEK293 cells expressing GFP-ASC and either NLRC4 WT (left panel) or NLRC4 V341A (right panels) using epifluorescent microscopy. A total of 1422 cells transfected with wildtype NLRC4 and 1155 cells transfected with mutant NLRC4 were scored.
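The LDH normalization described above (supernatant LDH expressed relative to Triton X-100 total lysis) reduces to a one-line calculation. The sketch below is a generic version of that normalization with hypothetical assay readings; the exact background handling in the Takara kit protocol may differ.

```python
def percent_ldh_release(sample, medium_blank, triton_lysis):
    """Express supernatant LDH as a percentage of total LDH released by
    0.1% Triton X-100 lysis, after subtracting the medium-only background.
    Inputs are assay readouts (e.g., absorbance units)."""
    return 100.0 * (sample - medium_blank) / (triton_lysis - medium_blank)

# Hypothetical readings for illustration only.
print(f"{percent_ldh_release(0.42, 0.10, 2.70):.1f}% of total lysis")
```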
*The frequency of ASC puncta+ cells in lines transfected with NLRC4 WT is significantly different (P<0.0001) from lines transfected with NLRC4 V341A (chi-square testing). Scale bars, 20 μm. Increased production of IL-1β and IL-18 and increased cell death in macrophages harboring NLRC4 V341A. Monocyte-derived macrophages from SCAN4 patients (II.3 and III.2) and WT controls (one related and four unrelated) were cultured for 18 hours in media containing low-dose LPS (1 ng/ml) followed by measurement of (a) IL-1β, (b) IL-18, and (c) LDH as described in Methods. LDH release is reported relative to the result following total lysis by Triton X-100 (0.1%). Cytokine secretion and cell death results were similar in macrophages from patients (II.3 and III.2). Bar graphs show mean ± S.E.M. from three separate experiments. Significance by unpaired Student's t-test is indicated.
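The legend above reports chi-square testing on the counts of ASC puncta-positive cells among the 1422 wildtype- and 1155 mutant-transfected cells scored. A minimal sketch of that test follows; the per-group puncta-positive counts are placeholders, since the legend reports only the totals scored and the significance level.

```python
from scipy.stats import chi2_contingency

# Placeholder splits: only the totals (1422 WT, 1155 V341A) are reported.
wt_pos, wt_total = 14, 1422
mut_pos, mut_total = 170, 1155

table = [[wt_pos, wt_total - wt_pos],      # WT: puncta+, puncta-
         [mut_pos, mut_total - mut_pos]]   # V341A: puncta+, puncta-
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, P = {p:.2e}")   # P<0.0001 would mirror the legend
```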
Dyslipidemia and associated factors among diabetic patients attending Durame General Hospital in Southern Nations, Nationalities, and People's Region Background Diabetes mellitus is a group of metabolic disorders that are caused by deficiency in insulin secretion or the decreased ability of insulin to act effectively on target tissues, particularly muscle, liver, and fat. As a result of insulin resistance in the target tissues, particularly in the adipocytes, free fatty acid flux is increased, leading to increased lipid synthesis in hepatocytes, which is responsible for diabetic dyslipidemia. Objective The objective of this study was to determine the prevalence and associated factors of dyslipidemia among diabetic patients in Durame General Hospital in Kembata Tembaro zone. Methods A cross-sectional study was conducted from September 2015 to April 2016. In total, 224 subjects were involved in the study by using convenience sampling techniques. A face-to-face interview-administered questionnaire was used to collect sociodemographic data and other possible clinical data associated with the prevalence of dyslipidemia. Fasting venous blood specimens were collected to assess serum lipid profiles. Blood pressure (BP), weight, height, and waist circumference were measured. Results The prevalence of dyslipidemia was 65.6%. Individual lipid abnormalities of elevated LDL-C, TC, TG, and reduced HDL-C were identified in 43.8%, 23.7%, 40.6%, and 41.9% of study subjects, respectively. The prevalence of dyslipidemia was significantly associated with high BP, high body mass index, aging, and longer duration of diabetes mellitus. Conclusion A high prevalence of dyslipidemia was found among diabetic patients in the study area. Therefore, a comprehensive mechanism is required to screen, treat, and prevent dyslipidemia. Introduction Diabetes mellitus is a progressive chronic disease caused by a relative or absolute insulin deficiency or by insulin resistance, leading to hyperglycemia that is characterized by metabolic disorders of lipids, carbohydrates, and proteins. [1][2][3] The decreased ability of insulin to act effectively on target tissues leads to metabolic abnormalities that cause an increased risk of cardiovascular disease (CVD) and diabetes mellitus (DM). The important features of insulin resistance include central obesity, hypertriglyceridemia, low high-density lipoprotein (HDL) cholesterol, hyperglycemia, and hypertension. [4][5][6] An early major contributor to the development of insulin resistance is an overabundance of circulating free fatty acids (FFAs) that are released from expanded adipose tissue triglyceride (TG) stores and through the lipolysis of TG-rich lipoproteins in tissues by lipoprotein lipase. [7][8][9] In the liver, FFAs result in an increased production of glucose and TGs, secretion of very low-density lipoprotein cholesterol (VLDL-C) and low-density lipoprotein cholesterol (LDL-C), as well as a reduction in HDL cholesterol (HDL-C). 4 FFAs also reduce insulin sensitivity in muscle by inhibiting insulin-mediated glucose uptake, and FFA flux to the liver is associated with increased production of TG-rich VLDL-C. [10][11][12] Dyslipidemia can be defined as a lipid metabolism disorder that can lead to elevated total or LDL-C levels or low levels of HDL-C. Diabetic dyslipidemia is a cluster of plasma lipid and lipoprotein abnormalities that are metabolically interrelated, and it is characterized by low HDL-C and increased LDL-C, TG, and total cholesterol (TC) levels.
The pattern of lipoprotein abnormality can be individual or combined. High levels of TGs or low levels of HDL-C or both have been identified in approximately half of the subjects with type 2 DM (T2DM). The abnormal features of the lipid profile are common in subjects with central obesity, metabolic syndrome, insulin resistance, and T2DM. 13,14 Dyslipidemia is the most important and modifiable risk factor for CVDs. Atherogenic dyslipidemia is one of the major risk factors for CVD in diabetic patients. An increased level of VLDL particles in T2DM leads to the generation of atherogenic remnants. Type 1 diabetes (T1D) is also associated with high CVD risk. The lipid profile in T1D with good glycemic control is characterized by subnormal TG and LDL-C, but with slightly elevated HDL-C. [15][16][17] The prevalence of dyslipidemia is continuously increasing globally, probably due to Westernization of diet, reduced physical activity, and urbanization as well as obesity. Physical inactivity or a sedentary lifestyle is a predictor of CVD events and related mortality. Increased adipose tissue (predominantly central), reduced HDL-C, increased TGs, high blood pressure (BP), and high blood glucose concentration are associated with a sedentary lifestyle. [18][19][20] There is also a high prevalence of dyslipidemia in developing countries due to urbanization, changing lifestyles, and food habits. There are very few data available on the prevalence of dyslipidemia in diabetic patients in Ethiopia. To the knowledge of the principal investigators, there are no data available on the prevalence of dyslipidemia among diabetic patients in the study area. Therefore, the present study aimed at studying the prevalence, severity, and pattern of dyslipidemia among diabetic subjects in the study area. Materials and methods The study was conducted in Southern Nations, Nationalities, and People's Region at Durame Hospital, Kembata Tembaro zone, which is located 290 km to the south of Addis Ababa, the capital of Ethiopia. The Kembata Tembaro zone has a total population of 828,002, of which 404,150 (48.81%) and 423,852 (51.19%) are women and men, respectively. The study was conducted from September 2015 to April 2016. This is an institution-based cross-sectional study. Data collection techniques Before collecting any data, ethical clearance was obtained from the ethical review board of the Institute of Health Sciences, Jimma University, and presented to Durame Hospital medical authorities. Next, a permission letter was obtained from the medical director of Durame Hospital and given to the diabetic clinic head office to conduct the study. Then, the aim of the study was clearly explained to the study subjects. A convenience sampling technique was used to select study subjects from the study population; this sampling technique was used because it was difficult to use a random sampling technique, as the study subjects' follow-up appointments varied and some of them might not come for the follow-up on the specified date. In addition to this, it is easy and not time-consuming compared to other sampling techniques. Written consent was obtained from each study subject before any data collection. An interview-administered structured questionnaire was used to collect sociodemographic and clinical data. Study subjects who were pregnant; who were taking lipid-lowering drugs; and who had a known history of cardiac problems, chronic liver disease, or renal disease were excluded from the study.
Physical examination Anthropometric measurements were administered by trained professional nurses working at the diabetic clinic in the morning after overnight fasting by using a standardized protocol. The height and weight of each study subject were measured by using an analog digital scale without shoes. The height was measured with each subject's feet pointed outward; legs straight and knees together; arms at the sides; head, shoulder blades, buttocks, and heels touching the measurement surface; the subject looking straight ahead; and shoulders relaxed. The body mass index (BMI) was calculated by using the formula weight divided by height squared, and the results were recorded. Circumferences were evaluated by using a stretch-resistant 1-cm-wide measuring tape that provides a constant measurement. Circumference measurements were taken while the subject was in the standing position and breathing normally. Hip circumference was measured around the widest portion of the buttocks, with the tape parallel to the floor. For taking both waist and hip circumference measurements, the tape was snug around the body, but not pulled so tight that it was constricting. Each measurement was repeated twice; the average of measurements within 1 cm of one another was calculated. If the difference between the two measurements exceeded 1 cm, the two measurements were repeated. The waist-to-hip ratio (WHR) was calculated, as indicated by the World Health Organization, as waist circumference (WC) divided by hip circumference. The normal range for WC is ≤102 cm for men and ≤88 cm for women, and the normal cutoff for WHR is 0.9 for men and 0.85 for women. BP was measured by using a mercury sphygmomanometer three consecutive times. The first measurement was taken after the person had sat down for at least 10 minutes, and the following measurements were taken every 5 minutes thereafter. The BP values used for analysis were the mean of the last two measurements. Blood specimen collection technique and investigation Five milliliters of blood were collected from each study subject by a trained medical laboratory technologist after overnight fasting, following the standard operating procedure guideline. The collected blood specimen was kept at room temperature for ~30 minutes for clot formation. After clot formation, the blood was centrifuged at 2,000 rpm for 10 minutes by using a fixed-head rotor centrifuge. Then, the serum was separated from the whole blood and stored at −20°C before analysis. The analysis was performed by using an A25 BioSystems clinical chemistry analyzer (BioSystems, Costa Brava, Spain) at Hawassa University Referral Hospital Laboratory Unit. Definition of terms Hypertension was defined as systolic BP (SBP) ≥140 millimeters of mercury (mmHg) or diastolic BP (DBP) ≥90 mmHg, or elevated SBP and DBP in patients on antihypertensive medication. Dyslipidemia was defined as a lipid profile that consists of the following abnormalities, either singly or in combination: TC ≥200 mg/dL, TG ≥150 mg/dL, HDL-C <40 mg/dL, and LDL-C ≥100 mg/dL.
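Applying the definition of terms above is mechanical; the sketch below encodes the study's dyslipidemia cutoffs as a small classifier. It is an illustration of the stated definition, not part of the study's SPSS analysis, and the example lipid values are hypothetical.

```python
def classify_dyslipidemia(tc, tg, hdl, ldl):
    """Flag lipid abnormalities using the study cutoffs (mg/dL):
    TC >= 200, TG >= 150, HDL-C < 40, LDL-C >= 100. Dyslipidemia is any
    abnormality, singly or in combination."""
    flags = {
        "high_TC": tc >= 200,
        "high_TG": tg >= 150,
        "low_HDL": hdl < 40,
        "high_LDL": ldl >= 100,
    }
    return flags, any(flags.values())

# Hypothetical patient: combined abnormality (high TG + low HDL-C).
flags, dyslipidemic = classify_dyslipidemia(tc=185, tg=160, hdl=38, ldl=95)
print(flags, dyslipidemic)
```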
Based on the results of BMI, study subjects were categorized as underweight with BMI <18.5 kg/m^2; normal weight with BMI 18.5-24.9 kg/m^2; overweight with BMI 25-29.9 kg/m^2; obese with BMI 30-34.9 kg/m^2; severely obese with BMI 35-39.9 kg/m^2; and morbidly obese with BMI ≥40 kg/m^2. Statistical analysis Statistical analysis of the data was performed by using Microsoft Office Excel for Windows 2008 and SPSS Version 20.0 software. Bivariate and multivariate logistic regression models were used to assess how well the independent predictor variables explain or predict the dependent variable, to control for possible confounders, and to identify the determinant factors associated with the prevalence of dyslipidemia. A P-value <0.05 was considered statistically significant. All the data from questionnaires were checked manually for completeness and clarity, as well as edited for inconsistencies, before data analysis. Sociodemographic and other characteristics of study subjects Of the 224 study subjects involved in the study, 53.6% (n=120) were men and the rest were women; the mean age was 38 ± 15 years; the majority of the study subjects (66.5%; n=149) were aged >30 years; 59.8% (n=134) were urban dwellers; and 40.6% (n=91) had attended secondary education. The prevalence of dyslipidemia was highest among men (38.8%; n=84) and among the age group >30 years (43.8%; n=98). The prevalence of dyslipidemia was also higher among urban dwellers (36.2%; n=81) than among rural dwellers. Of the 24.1% (n=54) of study subjects who were overweight according to the BMI calculation, 94.4% (n=51) had an abnormal serum lipid profile, whereas 26.3% (n=59) had an abnormal serum lipid profile according to WHR. Of the 6.7% (n=15) of study subjects who were obese according to the BMI calculation, 93.3% (n=14) had an abnormal lipid profile, whereas 80.0% (n=64) of the 80 obese individuals had an abnormal lipid profile according to WHR (Table 1). Pattern of serum lipid profile abnormality and prevalence of dyslipidemia components The pattern of serum lipid abnormality was identified. According to this pattern, ~26.5% (n=39) of the 65.6% (n=147) of study subjects with dyslipidemia had a single or isolated lipid profile abnormality, whereas 73.5% (n=108) had a combined serum lipid abnormality. Among the isolated or individual lipid abnormalities, LDL-C was found in 6.7% (n=15) of the subjects, whereas the combination of HDL-C, LDL-C, TG, and TC abnormalities was found in 11.2% (n=25) of the total 147 study subjects identified with lipid abnormality (Table 2). Factors associated with the prevalence of dyslipidemia Ten independent variables were entered into the multivariate logistic regression model to identify the independent predictor variables associated with the prevalence of dyslipidemia. According to this analysis, being female, being aged >30 years, being overweight, obesity, hypertension, mode of transport, and having 6-10 or >10 years of diabetes had a statistically significant association with the prevalence of dyslipidemia (P<0.05). In contrast, residence, educational status, and family history of DM had no statistically significant association with the prevalence of dyslipidemia (Table 3). Discussion Due to economic growth and changing lifestyles in developing countries, the prevalence of abnormal serum lipid profiles is increasing, particularly in populations with chronic illness and less physical activity.
Dyslipidemia is the most important independent predictor of CVD in diabetic patients, which leads to the high morbidity and mortality of diabetic patients. The current study was conducted to assess the prevalence of dyslipidemia and associated factors in Durame General Hospital in Kembata Tembaro zone. According to our findings, the prevalences of the individual lipid abnormalities of low HDL-C, high LDL-C, high TG, and high TC were 41.9%, 43.8%, 40.6%, and 23.7%, respectively. The prevalence of low HDL-C in our study is almost comparable to the finding from the United Arab Emirates, whereas the prevalence of hypertriglyceridemia in our study is much higher than that from the United Arab Emirates.21 The prevalence of high TC in the current study is much lower than the finding reported from Libya, but the prevalences of high TG and LDL-C are almost comparable.22 The prevalences of hypercholesterolemia, hypertriglyceridemia, low HDL-C, and high LDL-C are much lower than the findings reported from Jordan, in which 77.2%, 83.1%, 83.9%, and 91.5% were the prevalences of high TC, high TG, low HDL-C, and high LDL-C, respectively.23 A combined prevalence of reduced HDL-C, elevated LDL-C, TG, and TC was indicated in 11.6% (n=26) of the study subjects. The study also indicated that the prevalence of dyslipidemia was 65.6%. This finding is much higher than that of a study conducted in China, where the prevalence of dyslipidemia was 34.64%,24 and lower than the finding reported from Jordan, in which the prevalence of dyslipidemia was 90%; in our study the prevalence was much higher among male diabetic patients than among women, in contrast to the study reported from Jordan.23 The prevalence in the current study is also lower than the findings reported from Finland and the USA, in which the prevalences of dyslipidemia were 85.0% and 70.5%, respectively.22 The prevalence of individual lipid abnormalities was almost similar to a study conducted in another area of Ethiopia, except for a much higher prevalence of low HDL-C in the current study.25 Our study also revealed that being female, being aged >30 years, being overweight, obesity, hypertension, and having 6-10 or >10 years of diabetes were statistically associated with the prevalence of dyslipidemia. Limitation of the study We did not classify the study subjects as type 1 or type 2 diabetics because the diagnosis of diabetes was based only on clinical features and abnormal fasting or random blood sugar, which cannot distinguish the type of diabetes. Conclusion Our study indicated a high prevalence of dyslipidemia among diabetic patients. Gender, aging, longer duration of diabetes, higher BMI, and high BP were the risk factors associated with the prevalence of dyslipidemia. Dyslipidemia is a major public health problem in developing countries, and it is an independent predictor of developing CVD. In addition, together with other risk factors such as high BP, it contributes to the development of CVD among diabetic patients, leading to high mortality. Therefore, it is mandatory to screen, treat, and educate diabetic patients about dyslipidemia and its associated risk factors.
Design of Low Thrust Controlled Maneuvers to Chase and De-orbit the Space Debris Over the last several decades, space debris at LEO has grown rapidly, which poses a serious threat to operating satellites in orbit. To avoid the risk of collision and protect the LEO environment, the space robotics ADR concept has been continuously developed for over a decade to chase, capture, and de-orbit space debris. This paper presents a designed small satellite with dual robotic manipulators. The small satellite is designed based on CubeSat standards and uses commercially available products in the market. In this paper, an approach is detailed for designing the controlled chase and de-orbit maneuver for a small satellite equipped with an RCS thruster. The maneuvers comprise two phases: a) bringing the chaser satellite to the debris orbit and accelerating it to a close proximity of 1 m to the debris object by using the low thrust RCS thruster, and b) once captured, de-orbiting it in a controlled manner to an altitude of 250 km. A Hohmann transfer concept is used to move our chaser satellite from the lower orbit to the debris orbit by two impulsive burns. A number of scenarios are simulated in which one or more orbital elements are adjusted. For adjustment of more than one orbital element, the DAG law and the Q law are utilized. These laws synthesize the three directional thrust components into a single thrust force for the controlled maneuver. The ΔV requirement at each maneuver is determined by using the performance parameters of the RCS thruster intended for a small satellite. The results show that, for long-term simulation of a chaser satellite maneuver to a debris object, the DAG law is more suitable than the Q law, as it can handle the singular behavior of the orbital elements caused by the adjustment of one or more elements more efficiently. Introduction The low earth orbit (LEO) space environment is continuously crowded with space debris. This debris mostly consists of the residuals of spent satellites, rocket stages, bodies and boosters, and junk particles from collisions of debris objects. Around the mid-twentieth century, space junk was not considered a serious issue, as fewer applications were known at LEO. Recently, space junk has increased drastically in the various belts of existing satellite orbits. The study by Rex et al. shows that the debris particle population will grow at a rate of 5% per year if possible debris-mitigating measures are not taken [1]. The post-mission removal of spent satellites and rocket bodies has become important for keeping functional satellites in favorable orbital conditions and for upcoming space missions. Most studies show that uncontrolled collisions between spacecraft and space debris have increased continuously in the orbital belts [2]. The accumulation of space junk in existing orbits raises the possibility of collisions, which add more debris to the orbital belt. This runaway debris growth is known as the "Kessler Syndrome", which could have a major impact worldwide, and the frequency of collisions will increase if post-mission removal is not done [3]. Orbiting debris can have a relative speed of 15,000 mph and can cause serious damage to existing satellites in the orbital belts. Pelton describes how debris is generated by the cascading effect of space debris collisions.
He also explains the international standards for space traffic management and mitigating debris for future stratospheric missions and activities [4]. According to the European Space Agency (ESA), it has become mandatory to de-orbit satellite bodies within twenty-five years of their end of life (EOL). Moreover, de-orbiting has become an essential process to remove debris objects from the orbital belts to reduce future collision probability. In the future, it is necessary to conserve the debris environment with minimized risk, mainly in the LEO region [5]. The article by Anselmo et al. suggests that the long-term collision risk from the evolving debris population can be reduced by explosion avoidance strategies and de-orbiting of upper stages in LEO regions [6]. To reduce debris growth in LEO orbit, there are typically two measures: one is debris avoidance and the other is debris removal. In debris avoidance, operational spacecraft or satellites use in-flight maneuvers to avoid collision with space junk. In debris removal, space junk is removed from orbit by means of another spacecraft and de-orbited to very low earth orbit or transferred to a graveyard orbit. Basically, there are two debris removal approaches: 1. Active Debris Removal (ADR) and 2. Passive Debris Removal (PDR). The ADR concept has been continuously developed for over a decade for the removal of large space debris objects, whereas PDR approaches are used for the removal of small debris, which has enough kinetic energy to destroy operational satellites. The ADR concept mostly relies on a propulsion system for its operation. Over the last couple of decades, propulsion systems have advanced continuously across numerous missions. Previously, solid and liquid propulsion systems were used extensively for different satellite and interplanetary missions for orbit transfer, maneuvers, docking, and similar operations. The continuous upgrading and miniaturization of these propulsion systems have led to hybrid propulsion systems such as the RCS thruster. This kind of thruster can be used both for maneuvering and for controlling the orbit transfer and the chase to the close proximity of the target satellite. Various space agencies like NASA and ESA, and universities like UTIAS SFL, the Surrey Space Centre, JPL Caltech, DLR (German Aerospace Center) Braunschweig, the University of Patras, etc., are working on chasing, capturing, and de-orbiting spent satellites. Recently, Astro-scale Japan successfully demonstrated a chase, in-space capture, and release system to clean up space debris [7], [8]. Active debris removal has continuously evolved for the mitigation of debris from the LEO space environment. The ADR concept consists of different methods like space robotics, tether-based, collective, laser-based, sail-based, ion beam, dynamic system-based, etc. [9] One of the recently evolving ADR methods is space robotics for debris removal. Over the last decade, space-based manipulators have continuously evolved as an approach for multiple on-orbit servicing (OOS) missions like debris removal, refueling, docking, assembly, transporting, berthing, etc. Debris removal has become an emerging application for keeping a safe space environment. A number of OOS missions have been completed successfully for different applications, except for debris removal.
Space robotics has been installed on the International Space Station (ISS) for assembly and servicing purposes, consisting of three manipulator systems: the European Robotic Arm [10], the Canadian Mobile Servicing System, and the Japanese Experiment Module Manipulator System. Recently, satellites equipped with robotic manipulator systems carrying grasping devices for the active debris removal application have been comprehensively developed. This paper presents the designed satellite with dual manipulators, known as the debris chaser satellite [11] [12], for the debris removal application. The satellite is designed based on the 12U CubeSat standard and commercially available products in the market. The paper technically presents a simulation approach to designing the controlled chase and de-orbiting maneuver of the debris chaser satellite for PSLV debris removal. The maneuvers are comprised of two phases: a) bringing the chaser satellite to the debris orbit and accelerating it to a close proximity of 1 m in-track separation from the real PSLV debris through impulsive maneuvers, and b) once captured, de-orbiting it in a controlled manner to an altitude of 250 km. Once the PSLV debris is in very low earth orbit, atmospheric drag and solar activity acting on it will be enough to decay its orbit and burn the structure. The Directional Adaptive Guidance law (DAG law) and the Proximity Quotient guidance law (Q law) are explained in the paper, and their suitability is analyzed for our simulation cases. The DAG law is utilized to execute the controlled chase maneuver for our scenarios by using the RCS thruster at the optimum thrust requirement. Debris Chaser Mission and Architecture The debris chaser mission consists of debris assessment, chase maneuver operation, robotic manipulator deployment, and de-orbiting operation. These mission operations are shown in figure 1 in block diagram format. The entire simulation platform is developed based on this mission block diagram. Initially, the investigation of the selected debris properties, like orbit location, size, shape, orbital speed, etc., is done through debris assessment tools. Then the debris chaser is allocated an optimal orbit height based on the capability of its maneuvers. Orbital maneuvers like the orbit transfer and chase maneuver are performed to reach close to the debris target, and the orbital alignment along the phase is executed. Once nearby, the orbital elements and attitude orientation are adjusted by the attitude determination and control system of the debris chaser satellite. After that, on-orbit deployment of the robotic hands is performed along a manipulator trajectory determined by using the tracking camera installed in the robotic arms. The robotic arms are deployed, and the debris object is attached to our satellite. The reaction control system (RCS) propulsion is then used for controlling the disoriented motion of the debris together with the debris chaser satellite. A de-orbiting technique is then employed to move the satellite to very low earth orbit. The system is de-orbited and releases the debris object toward 250 km altitude, where atmospheric drag is enough to bring it back into the earth's atmosphere and burn it. The designed debris chaser satellite [11] is shown in figure 2 and figure 3. The designed satellite is of 12U form factor with approximate dimensions of 20 × 20 × 30 cm and dual robotic manipulators.
Figure 2 represents the chaser satellite in the stowed configuration, whereas figure 3 represents the satellite with its robotic manipulator arms extended. All the CAD design is executed in the CATIA V5 tool, which allows iterative design at a high accuracy level. A customized 3-DOF manipulator is designed based on the concept of the UR5 robot and resized so that the manipulators fit the satellite model. The design of the hand gripper is still in progress; it will be designed based on the grasp points on the debris. Similarly, the system architecture [13] was designed based on the optimal hardware and software requirements for successful removal of the debris in polar orbit. The schematic diagram of the system architecture is shown in figure 4. The blue and red lines represent the data and power input/output sections. From figure 4, we can also see that, for the debris chaser application, the ADCS [14] requires a large number of sensors, which help our debris chaser satellite capture the debris and de-orbit it. Similarly, the thruster plays an important role in debris removal operations, as thrust is required while executing the transfer, chase, and de-orbiting operations. Numerous literature surveys were done for all the different types of propulsion systems [15] [16], where we found that, for our application scenario, the RCS system is suitable for our transfer and rendezvous maneuvers. For our debris chaser satellite, we will be using 3 RCS thrusters oriented in the three directions. The RCS system is made up of reaction control wheels and thrusters, whose main function is to provide the required thrust in any direction and control the attitude motion of the debris chaser satellite. This kind of thruster can also provide torque to control rotational motion in pitch, roll, and yaw. We are using a commercially available RCS product. A survey of the different propulsion systems in the existing literature was conducted, and a feasible option was identified based on compatibility and product availability in the market. It was found that the VACCO chemically-etched micro propulsion system modules [17] [18], specifically the hybrid ADN delta-V/RCS system, are the most compatible and feasible for our debris chaser mission. It features a single axial high-thrust, high-specific-impulse ADN thruster that can deliver up to 1,036 Ns of total impulse using only integral propellant. After doing the trade-off analysis, we found that the VACCO modules can achieve the required delta-V within two hours. Here, we assumed that the thruster fires constantly, which may not be a realistic case; however, the magnitude is acceptable. Orbital Maneuver An orbital maneuver is one of the important parts of astrodynamics; it transfers the spacecraft from one orbit to another and requires propulsive thrust to do so. It is one of the most effective methods for the transfer, chase, docking, and de-orbiting operations of the debris removal application. For the orbit transfer, Hohmann's method is commonly used and is an effective technique for executing the transfer. It is considered the simplest and most efficient method of transferring a satellite between co-apsidal, co-planar orbits. It is a two-impulse elliptical transfer between two co-planar circular orbits. The transfer itself consists of an elliptical orbit with a perigee at the inner orbit and an apogee at the outer orbit.
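Because the Hohmann transfer equations are standard, a short sketch suffices to estimate the two burns. The 500 km parking altitude below is an assumption for illustration (the actual initial orbit is given later in Table 1); the 668 km target matches the PSLV debris altitude used in this study.

```python
import math

MU = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
RE = 6371e3          # mean Earth radius, m

def hohmann_dv(alt1_m, alt2_m):
    """Two-impulse Hohmann transfer between coplanar circular orbits:
    burn 1 raises the apogee at perigee, burn 2 circularizes at apogee."""
    r1, r2 = RE + alt1_m, RE + alt2_m
    dv1 = math.sqrt(MU / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1)
    dv2 = math.sqrt(MU / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2)))
    return dv1, dv2

dv1, dv2 = hohmann_dv(500e3, 668e3)  # assumed parking orbit -> debris orbit
print(f"dv1 = {dv1:.1f} m/s, dv2 = {dv2:.1f} m/s, total = {dv1 + dv2:.1f} m/s")
```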
Figure 5 shows Hohmann's transfer orbit with the direction of the net velocity after firing the propulsion unit. The governing equations of Hohmann's orbit transfer are widely available in the literature; it is one of the basic methods for executing an orbit transfer to a higher orbit. For the in-orbit transfer to the debris object, traditional rendezvous chase approaches like R-bar, V-bar, and Z-bar are most commonly used. In the R-bar approach, a satellite chases from below or above the target object, along its radial direction. In the V-bar approach, a satellite chases from ahead or from behind, in the same direction as the orbital motion of the target object. The satellite motion is parallel to the target's orbital velocity. In it, a thruster fires a small amount of fuel to increase its velocity in the direction of the target while chasing from behind. For our case, we will be using the V-bar approach to chase the space debris from behind by using an RCS thruster and will reach close proximity to the debris object. Once nearby, robotic manipulators will be used for grasping the non-cooperative debris through the contact dynamics method (which is not presented here). After grasping and ceasing the motion of the debris object, the debris chaser satellite with the debris becomes a single body and needs to de-orbit to very low earth orbit. For the de-orbiting operation, the same Hohmann orbit transfer is used to move the debris to approximately 250 km, where atmospheric drag and solar activity acting on the debris are enough to burn and decay it. Guidance Algorithm To optimize the orbital trajectory during rendezvous and close proximity operation, low thrust maneuver techniques are strongly favored. Numerous researchers have considered different optimal low thrust algorithms for the rendezvous and docking process, and these are still in use. An optimal steering algorithm was developed by Edelbaum [19], which is used for continuous thrusting in circular orbits for a given classical orbital element. For low thrust and controlled maneuvers, two guidance laws are continuously used for the adjustment of one or multiple orbital elements: the Proximity Quotient (Q) law [20] and the Directional Adaptive Guidance (DAG) law [21]. Proximity Quotient (Q) law The Proximity Quotient law is also known as the Q law. It was first developed by Petropoulos [20] [22] in 2003 to find initial guesses for propellant-optimal low thrust transfers between two Keplerian orbits. The Q law is generally based on a Lyapunov feedback control loop, which calculates the optimal thrust direction, i.e., α and β, based on the initial orbit of the spacecraft and the desired target orbit. The Q law was mostly developed for the two-body problem; it is based on the Gauss planetary equations, which help us to determine the rate of the Lyapunov function analytically. Based on the Lyapunov function, Q is defined as:

Q = (1 + W_p P) Σ_COE W_COE S_COE [(COE − COE_t) / (dCOE/dt)_xx]^2 (1)

where P represents the penalty introduced in the algorithm to keep the solution from involving unacceptably low periapsis altitudes, and W_p is the weighting factor on the P term. The S_COE terms are introduced to scale down the semi-major axis error and to improve convergence in the simulation, and the W_COE are weighting factors that have a significant role in how quickly each COE is changed. The term (dCOE/dt)_xx is the maximum rate of change of a given COE within the instantaneous orbit.
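A minimal numerical sketch of evaluating Q for a semi-major-axis error alone is given below, using the scaling and penalty expressions spelled out in the next paragraph. All numerical defaults (weights, penalty strength, periapsis radii, and the shape constants m, n, r) are assumptions for illustration, not values from the paper.

```python
import math

def q_value(a, a_t, adot_max, w_a=1.0, w_p=1.0, k=1.0,
            r_p=6.871e6, r_p_min=6.571e6, m=3, n=4, r=2):
    """Proximity quotient Q for a semi-major-axis error only, with the
    other elements assumed on target: Q = (1 + W_p*P) * W_a * S_a *
    ((a - a_t)/adot_max)**2, where adot_max is the best achievable rate."""
    s_a = (1.0 + ((a - a_t) / (m * a_t)) ** n) ** (1.0 / r)
    p = math.exp(k * (1.0 - r_p / r_p_min))
    return (1.0 + w_p * p) * w_a * s_a * ((a - a_t) / adot_max) ** 2

# Q shrinks as the chaser's semi-major axis approaches the debris target.
for a in (6.871e6, 6.95e6, 7.03e6):
    print(f"a = {a:.3e} m -> Q = {q_value(a, a_t=7.039e6, adot_max=50.0):.3e}")
```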
The expressions governing S_COE and P are given below:

S_a = [1 + ((a − a_t) / (m a_t))^n]^(1/r) (2)

S_e = S_i = S_Ω = S_ω = 1 (3)

P = exp[K (1 − r_p / r_p,min)] (4)

where a and a_t are the instantaneous and the target values of the semi-major axis, and m, n, and r are shape constants. K denotes the penalty function strength, and r_p and r_p,min are the instantaneous and the minimum allowable periapsis radii. The important idea behind the proximity quotient is that Q represents some error with respect to the desired state. The expression for Q̇ can be determined analytically by using the Gauss form of the Lagrange planetary equations and involves the unit thrust angles α and β (i.e., the pitch and yaw angles). From this expression, an analytical solution exists which minimizes Q̇ by varying the thrust angles α and β. By using these angles, the satellite drives Q toward zero at each instant and is thus driven to the desired target orbit. Directional Adaptive Guidance (DAG) Law In the adaptive method, the optimal guidance for each orbital element is obtained and then the results are clustered by a weighted-sum approach, which gives a single thrust vector for the rendezvous, proximity, and de-orbiting operations. Factors like the thrust angle expressions, maneuver efficiency, and adaptive weighting factors are also considered in the single thrust vector. This law is implemented as a self-contained routine requiring the vehicle state vector and epoch of interest, along with the desired final targets, directional weighting factors, and stopping tolerance with the control rate limit. These implementations are mostly represented in the RIC frame, where R = radial direction, I = in-track direction, and C = cross-track direction, with the origin at the center of the satellite. The RIC frame is the relative motion frame between the chaser satellite and the target debris. The thrust angles for the vehicle state vector at an instant are computed as α, the in-plane angle (pitch), and β, the out-of-plane angle (yaw). These angles produce the largest rate of change in each of the orbital element parameters: semi-major axis (a), inclination (i), eccentricity (e), right ascension of the ascending node (Ω), and argument of perigee (ω). The optimal in-plane and out-of-plane results for the classical orbital elements, found in the literature [23], are given below.

Semi-major axis (a): α = arctan(e sin θ / (1 + e cos θ)), β = 0 (6)

where θ represents the true anomaly. Eccentricity (e): α = arctan(sin θ / (cos θ + cos E)), β = 0 (7)

where E represents the eccentric anomaly. Inclination (i): α = 0, β = sgn(cos(ω + θ)) π/2 (8)

Analogous closed-form expressions for the ascending node (Ω) and the argument of periapsis (ω) can be found in the literature [23]. The above solutions represent the optimal thrust angles, i.e., in-plane (α) and out-of-plane (β), for the maximum instantaneous change in each classical orbital element (COE). Having all the thrust angles from the above equations, the corresponding unit thrust direction in the RIC frame is

f̂ = [sin α cos β, cos α cos β, sin β]

with components in the R, I, and C directions, respectively. For our desired target, the PSLV debris, an adaptive ratio is computed which signifies the fraction of the required change remaining in each COE:

R_COE = (COE_t − COE) / (COE_t − COE_i)

where COE represents the classical orbital element at a certain instant, COE_t indicates the COE of the desired target, and COE_i is the initial value of the COE of the chaser satellite.
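The per-element optimal angles and the adaptive ratio above translate directly into code. The sketch below implements equations (6) and (8) and the RIC mapping as reconstructed here; the convention for the in-plane angle (measured from the in-track direction) and all example orbital values are assumptions for illustration, not Table 1 data.

```python
import math

def unit_thrust_ric(alpha, beta):
    """Unit thrust vector in the RIC frame from the in-plane angle alpha
    and the out-of-plane angle beta (components: radial, in-track,
    cross-track)."""
    return (math.sin(alpha) * math.cos(beta),
            math.cos(alpha) * math.cos(beta),
            math.sin(beta))

def angles_semimajor_axis(e, theta):
    """Eq. (6): fastest instantaneous change of the semi-major axis."""
    return math.atan2(e * math.sin(theta), 1.0 + e * math.cos(theta)), 0.0

def angles_inclination(omega, theta):
    """Eq. (8): fastest change of inclination (pure out-of-plane thrust)."""
    return 0.0, math.copysign(math.pi / 2.0, math.cos(omega + theta))

def adaptive_ratio(coe, coe_t, coe_i):
    """Fraction of the initially required change still remaining."""
    return (coe_t - coe) / (coe_t - coe_i)

# Example with placeholder values: near-circular orbit, 40 deg true anomaly.
a_hat = unit_thrust_ric(*angles_semimajor_axis(e=0.001, theta=math.radians(40)))
r_a = adaptive_ratio(coe=6.90e6, coe_t=7.039e6, coe_i=6.871e6)
print(a_hat, round(r_a, 3))
```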
Once the adaptive ratio is calculated, the overall unit thrust direction for achieving our target when changing multiple elements at the same time can be calculated by the following formula, which can be found in the literature [21] [23]:

f_t = Σ_COE (1 − δ_COE,COE_t) R_COE f̂_COE (normalized to unit magnitude)

where f_t is the thrust vector in the RIC/RSW frame and δ_COE,COE_t is the Kronecker delta function, equal to 1 when the instantaneous COE equals its target value and 0 otherwise. The DAG targeting algorithm is capable of achieving the desired target, but the resulting path is not necessarily optimal. To generate a better solution, direction weighting factors W_Dir,COE can be included in the weighted sum, with the consideration that the instantaneous COE is not equal to the target COE. The sum can be modified as:

f_t = Σ_COE W_Dir,COE (1 − δ_COE,COE_t) R_COE f̂_COE

where the W_Dir,COE parameter for each COE may be constant, time-varying, or a function of some set of the variables in the problem. From the governing equations of the Q law and the DAG law, we can see that the Q law can incur high computational cost and complexity in simulation, as there is no analytical solution when determining the extremum of Q̇, which is required at every point of the simulation during the maneuver and de-orbiting operations. Similarly, convergence of the simulation for maneuver operations to reach the vicinity of the target location and for the de-orbiting operation is time-consuming, which can result in lower performance and accuracy for the Q law than for the DAG law. Hence, for our simulation we use the DAG law instead of the Q law for better convergence and accuracy. Multiple iterative scenarios are run using the DAG law for the maneuver, close proximity, and de-orbiting operations in the debris orbit with a Python package through the STK graphics interface. Simulation Procedure The chase maneuver and the de-orbiting maneuver simulations are done in the STK interface with Python run code. Achieving the controlled maneuvers to reach close proximity to the debris object and to de-orbit it requires modeling the full space environment, including perturbation models, the debris environment, rendezvous modules, collision avoidance models, etc. In LEO orbit, perturbations like atmospheric drag, the non-spherical shape of the earth, third bodies like the moon and sun, ocean tides, solar radiation pressure (SRP), earth tides, etc., are encountered and need to be modeled accurately in the simulation to achieve a high accuracy level. Existing orbital propagators like J2, SGP4, and HPOP are available in the literature to model the above effects accurately. Each propagator has its significance in modeling the different perturbation effects. For our case, we used the HPOP propagator throughout the simulation for better modeling of all the perturbation effects encountered by the satellites. The HPOP propagator uses the GRACE gravity model (GGM03S) [24], the SW model, the EOP model, the SOLRESAP model, and the SOLFSMY model to provide the Earth gravity field, space weather data, Earth orientation parameters, geomagnetic storm indices, and solar storm indices in the simulation setup. The NASA Debris Assessment Software (DAS) provides the space debris data set of the tracked debris in LEO orbit along with debris properties like shape, size, orbital two-line elements, post-mission lifetime, compliance model, etc., which are helpful while designing the optimal trajectory to chase and de-orbit the PSLV debris. The considered debris is a rocket body at an approximate altitude of 668 km, whose properties can be introduced into the simulation by using its two-line elements (satellite catalog no. 27160).
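Since the debris is specified by its two-line elements, its state can also be sampled directly with the standard sgp4 Python package, as sketched below. The TLE strings here are placeholders carrying the catalog number 27160 and roughly representative orbital values; they are not the real element set used in the study, which would come from a tracking catalog or DAS.

```python
# Placeholder TLE for catalog no. 27160; real elements come from a catalog.
from sgp4.api import Satrec, jday

L1 = "1 27160U 01049D   21001.00000000  .00000100  00000-0  10000-3 0  9992"
L2 = "2 27160  98.3000 100.0000 0010000  90.0000 270.0000 14.60000000100000"

sat = Satrec.twoline2rv(L1, L2)
jd, fr = jday(2021, 1, 1, 0, 0, 0)     # UTC epoch at which to evaluate
err, r_km, v_kms = sat.sgp4(jd, fr)    # TEME position (km) and velocity (km/s)
print(err, r_km, v_kms)                # err == 0 means propagation succeeded
```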
The orbital parameters of the debris chaser's initial orbit and the PSLV target orbit are shown in table 1. A chase maneuver is performed to reach close to the PSLV debris target state; after capture, the stack is de-orbited to an altitude of 250 km. The initial states of the debris chaser satellite and the PSLV debris, with reference to the time frame in the ECI frame, are shown in figure 6. From table 1 and figure 6, we can see that we have to work on three classical orbital elements of the chaser satellite: the semi-major axis and inclination, to align the orbit with the target state, and the true anomaly, to reach the vicinity of the target. This is done by 3 mission sequence operations: a) a Hohmann orbit transfer operation to reach and align with the orbit of the PSLV debris, b) a V-bar maneuver and close proximity operation to chase the PSLV debris, and c) a de-orbiting operation to very low earth orbit. The simulation procedures are shown in figure 7. Figure 7 represents the flow of the process from the orbit transfer to the de-orbiting operation, whereas the capture techniques are not presented in this paper. Snapshots of the V-bar approach of the chaser satellite chasing the debris are shown in figure 8. It forms a circumnavigating trajectory to reach close to the debris object. The closest in-track separation between the debris chaser satellite and the PSLV debris is about 1 m. The close proximity snapshot of the chaser and debris satellites is shown in figure 9. Once close enough, the robotic manipulators are used for grasping and ceasing the random motion of the PSLV debris. Results and Discussion The simulation was run by using the DAG law for the chase maneuver (V-bar approach and close proximity) and the de-orbiting operation. These operations took almost 18 hours in total, including 2 hours for the capturing process once close enough to the PSLV debris. The overall ΔV requirement for these operations, following the simulation procedure, is shown in table 2. Table 2 represents steps 1-3, the initial Hohmann transfer from low earth orbit to the debris orbit, which is executed by 2 tangent burns and one non-tangent burn for the inclination change. Once the debris orbit is reached, a couple of non-tangent burns (i.e., steps 4-5) are made for the controlled V-bar rendezvous approach and close proximity operation. After that, the space manipulators are used to capture the non-cooperative PSLV debris, cease its motion, and orient it with the debris chaser satellite. Once it becomes a single body, two more tangent burns (i.e., steps 6-7) are made for the de-orbiting operation from the debris orbit to very low earth orbit, i.e., 250 km. Once the controlled operations were executed, the variations of the orbital elements, velocity vectors, and RIC frame parameters were sketched for the full simulation time frame for the PSLV debris and chaser satellites, and these sketched parameters were analyzed technically. The graphical representation of the semi-major axis with respect to time is shown in figure 10. From it, an altitude shift takes place from low earth orbit to the debris orbit by means of the ∆V1 and ∆V2 tangent burns. The sinusoidal motion is due to the perturbation effects considered in the LEO environment. Because of ∆V1 and ∆V2, the transfer is not a straight line; it is an inclined, impulsive maneuver. The maneuvers take just a few minutes while making the transfer at constant thrust.
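The claim that each burn lasts only minutes at constant thrust can be sanity-checked with the impulse approximation t ≈ m·ΔV/F. Both numbers below, the ~24 kg wet mass for a 12U bus and the ~0.4 N thrust level, are assumptions for illustration rather than figures from the thruster datasheet.

```python
def burn_time_s(delta_v_mps, mass_kg, thrust_n):
    """Constant-thrust burn duration t = m * dv / F, assuming the mass
    change during the burn is negligible (impulse approximation rather
    than the full rocket equation)."""
    return mass_kg * delta_v_mps / thrust_n

# Assumed 12U chaser of ~24 kg with a ~0.4 N thruster on a 2 m/s burn.
t = burn_time_s(delta_v_mps=2.0, mass_kg=24.0, thrust_n=0.4)
print(f"{t:.0f} s (~{t / 60:.1f} min)")
```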
The ∆V3 burn is required for the inclination change once that altitude is reached. After the execution of the transfer operation, once the orbit stabilizes after 2 orbits, we proceed to the V-bar rendezvous and proximity operation. From figure 10, we can see that the ∆V4 non-tangent burn is performed at 10:44 am to initiate the V-bar chase approach, bringing the chaser near the debris and producing a circumnavigating motion over a period of up to 10 hr, which causes the periodic height change. Once close to the debris location, the ∆V5 burn is performed to stabilize and align the radial and cross-track components of the chaser satellite with those of the debris satellite, within an in-track separation of 1 m. This process is simulated as a controlled maneuver with the help of the DAG law and provides a stabilized chase toward the target debris. The variation of the inclination and eccentricity over the simulation time is shown in figures 11 and 12. The chaser satellite was initially at 95 degrees inclination, which remained constant until the non-tangential ∆V3 was applied to change the inclination to 98.3 degrees, matching the debris orbit. Being an impulsive maneuver, this non-tangent burn for the orbital plane shift also takes less than a minute. The changes produced in the semi-major axis and inclination by the ∆V budget cause changes in the other orbital parameters in the same fashion. From figure 12, we can see the variation of the eccentricity with respect to time; as both the debris chaser and PSLV debris orbits are circular, the eccentricity stays close to zero. A slight change in eccentricity occurs during the ∆V applications, with a maximum change of about 0.03. The variation of the RAAN with respect to the simulation time is shown in figure 22, from which we can see that there is very little change between the initial and final values of the chaser satellite's RAAN during the whole operation. The variation of RAAN and argument of periapsis with respect to time is shown in figures 13 and 14. The orbital plane is mostly defined by the inclination and RAAN, and we found a 3.3-degree orbital shift is needed to align with the debris inclination. This shift causes a slight change in the RAAN of the chaser satellite; continuous convergence takes place during the rendezvous and de-orbiting operations, and the final RAAN of the debris chaser matches the PSLV debris RAAN. As for the semi-major axis and eccentricity, the maximum variation occurs during the ∆V time frames, with the largest change during the ∆V3 time frame, where the inclination change causes the peak change in the argument of periapsis. After the end of the rendezvous operations, the argument of periapsis matches that of the PSLV debris, and the de-orbiting variation takes place at the end. The position of the satellite in its orbit is mostly determined by the true anomaly, which varies periodically from 0-360 degrees per orbit. This parameter helps us find the true anomaly separation between the debris chaser satellite and the PSLV debris and perform the rendezvous operations accordingly. The graphical representation of the true anomaly is shown in figure 15.
At the maneuver locations there is a slight shift, and at the end of the close-proximity operation the chaser's true anomaly is in phase with the PSLV debris true anomaly; the de-orbiting at the end causes a further slight shift in the true anomaly. Similarly, we generated the velocity components of the debris chaser satellite and the PSLV debris along the x, y, and z axes; these are shown in figures 16, 17, and 18. All the above results are expressed in orbital elements and velocity vectors, which show the shifts and how each orbital element is controlled, one at a time, to reach the vicinity of the PSLV debris and to perform the controlled de-orbiting operation. However, the RIC frame parameters are important to analyze in order to know where our satellite is located in orbit with respect to the PSLV debris. To do so, we plotted the variation of the radial, in-track, and cross-track components, with the debris target as reference, against time; these plots are shown in figures 19, 20, and 21. From figures 19 and 20, we find that the gaps in the radial and in-track components between the chaser satellite and the PSLV debris decrease continuously, and at the end of the rendezvous operation the radial and in-track components are almost on the same line as the debris orbit, with an in-track separation of 1 m. Once the capturing operation is done at close proximity, the de-orbiting operation follows, during which the gap relative to the original debris orbit increases continuously in both the radial and in-track components. Similarly, the cross-track component of the chaser satellite with reference to the PSLV debris is plotted against time in figure 21. The cross-track component varies sinusoidally during the orbit transfer operation, and at close proximity it aligns with the PSLV debris cross-track component. During the de-orbiting operation, the cross-track remains the same as that of the PSLV debris, since the two act as a combined body. The RIC components, with desired and achieved values and their differences, are tabulated in table 3 for the close-proximity operation. During close proximity, the radial component should match that of the PSLV debris, so the desired value is approximately 0 km; from the iterative operation we achieved a separation of -1.30732e-7 km, which is desirable. Similarly, the in-track separation between the debris chaser and the PSLV debris needs to be 1 m, and we achieved approximately that value; in cross-track we achieved a separation of -3.9958e-5 km, which is highly acceptable. These residual differences can be corrected using the satellite's ADCS system with efficient sensors. In this results section, we graphically represented and analyzed all the orbital elements and orbital velocities, along with the variation of the RIC frame components with respect to time, for all operations. The delta-V budget required for the successful operation of this simulation and the close-proximity RIC component achievements were tabulated, and the accuracy level was analyzed.
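The radial, in-track, and cross-track separations plotted in figures 19-21 are produced by STK, but the underlying frame construction is standard. The following Python sketch shows one common way to express a chaser state in a target's RIC frame; the numerical values are toy inputs, not the simulation data.

```python
import numpy as np

def eci_to_ric(r_tgt, v_tgt, r_chaser):
    """Express the chaser position relative to a target in the target's
    RIC (radial, in-track, cross-track) frame.

    r_tgt, v_tgt : target ECI position/velocity vectors (km, km/s)
    r_chaser     : chaser ECI position vector (km)
    """
    r_tgt, v_tgt, r_chaser = map(np.asarray, (r_tgt, v_tgt, r_chaser))
    r_hat = r_tgt / np.linalg.norm(r_tgt)   # radial unit vector
    h = np.cross(r_tgt, v_tgt)              # orbital angular momentum
    c_hat = h / np.linalg.norm(h)           # cross-track unit vector
    i_hat = np.cross(c_hat, r_hat)          # in-track completes the triad
    rel = r_chaser - r_tgt
    return np.array([rel @ r_hat, rel @ i_hat, rel @ c_hat])

# Toy example: chaser trails the target by ~1 m along-track at 668 km altitude.
r_t = np.array([7046.137, 0.0, 0.0])
v_t = np.array([0.0, 7.52, 0.0])
r_c = r_t + np.array([0.0, -0.001, 0.0])
print(eci_to_ric(r_t, v_t, r_c))  # -> approximately [0, -0.001, 0] km
```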
The designed satellite is based on the CubeSat standard with a 12U form factor, and its components were selected from commercial products available on the market. The debris chaser system architecture was presented as a block diagram with all the important components shown, and it was found that the ADCS sensors play the most vital role while executing the chase and de-orbiting operations. For our case, we used the basic Hohmann transfer concept for the orbit transfer and the V-bar approach for the chase maneuver, along with the DAG guidance algorithm for the controlled process. Most of the orbit transfer, chase maneuver, and de-orbiting operations were executed in the STK interface with a Python pipeline. We also discussed the use and importance of the DAG law in our simulation over the Q-law. In a later section, we explained the simulation procedures and their environment setup, along with the process for executing the operations. Once a simulation is completed, multiple data sets, such as the orbital elements, velocity vector, and RIC frame components, are extracted and plotted against time. The whole delta-V budget was tabulated with the start times required for executing the simulation operations. It was found that the use of the DAG law provides smooth, controlled operation for the chase and de-orbiting maneuvers. All the plotted graphs show the variation of the parameters with respect to time, and their behavior during the execution of the operations shows stable results while chasing and de-orbiting the space debris. The plotted RIC frame components of the chaser satellite, taken with reference to the PSLV debris, provide the radial, in-track, and cross-track variation with respect to the debris. It was found that the radial and cross-track separations during the close-proximity operation show the desired results, which were achieved successfully. Similarly, the desired in-track separation of 1 m between the debris chaser satellite and the PSLV debris was achieved successfully. Hence, we can state that, for long-term simulation of the rendezvous, close-proximity maneuver, and de-orbiting operations, the DAG law is more effective in simulation, as it handles more efficiently the singular behavior of the orbital elements caused by the adjustment of one or more elements. The use of RCS thrusters for our operations successfully provided the required thrust, based on the delta-V budget, in all directions. Future work will address grasping the non-cooperative target debris and arresting its motion by use of the thrusters and the ADCS system.
2022-04-05T01:16:06.433Z
2022-04-01T00:00:00.000
{ "year": 2022, "sha1": "d777b84e255d1a7de4194d30178705bbd9530c4f", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "d777b84e255d1a7de4194d30178705bbd9530c4f", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Computer Science", "Physics" ] }
12207758
pes2o/s2orc
v3-fos-license
Localized bases for finite dimensional homogenization approximations with non-separated scales and high-contrast We construct finite-dimensional approximations of solution spaces of divergence form operators with $L^\infty$-coefficients. Our method does not rely on concepts of ergodicity or scale-separation, but on the property that the solution space of these operators is compactly embedded in $H^1$ if source terms are in the unit ball of $L^2$ instead of the unit ball of $H^{-1}$. Approximation spaces are generated by solving elliptic PDEs on localized sub-domains with source terms corresponding to approximation bases for $H^2$. The $H^1$-error estimates show that $\mathcal{O}(h^{-d})$-dimensional spaces with basis elements localized to sub-domains of diameter $\mathcal{O}(h^\alpha \ln \frac{1}{h})$ (with $\alpha \in [1/2,1)$) result in an $\mathcal{O}(h^{2-2\alpha})$ accuracy for elliptic, parabolic and hyperbolic problems. For high-contrast media, the accuracy of the method is preserved provided that localized sub-domains contain buffer zones of width $\mathcal{O}(h^\alpha \ln \frac{1}{h})$ where the contrast of the medium remains bounded. The proposed method can naturally be generalized to vectorial equations (such as elasto-dynamics).
Introduction
Consider the partial differential equation $-\operatorname{div}(a(x)\nabla u(x)) = g(x)$ for $x \in \Omega$, with $g \in L^2(\Omega)$, $a(x) = \{a_{ij} \in L^\infty(\Omega)\}$, and $u = 0$ on $\partial\Omega$ (1.1), where $\Omega$ is a bounded subset of $\mathbb{R}^d$ with a smooth boundary (e.g., $C^2$) and $a$ is symmetric and uniformly elliptic on $\Omega$. It follows that the eigenvalues of $a$ are uniformly bounded from below and above by two strictly positive constants, denoted by $\lambda_{\min}(a)$ and $\lambda_{\max}(a)$. Precisely, for all $\xi \in \mathbb{R}^d$ and $x \in \Omega$, $\lambda_{\min}(a)|\xi|^2 \le \xi^T a(x)\xi \le \lambda_{\max}(a)|\xi|^2$ (1.2). In this paper, we are interested in the homogenization of (1.1) (and of its parabolic and hyperbolic analogues in Sections 4 and 5), but not in the classical sense, i.e., that of asymptotic analysis [9] or that of G- or H-convergence ([47], [57,32]), in which one considers a sequence of operators $-\operatorname{div}(a_\epsilon\nabla)$ and seeks to characterize limits of solutions. We are interested in the homogenization of (1.1) in the sense of "numerical homogenization," i.e., that of the approximation of the solution space of (1.1) by a finite-dimensional space. This approximation is not based on concepts of scale separation and/or of ergodicity but on compactness properties, i.e., the fact that the unit ball of the solution space is compactly embedded into $H^1_0(\Omega)$ if source terms ($g$) are integrable enough. This higher integrability condition on $g$ is necessary because if $g$ spans $H^{-1}(\Omega)$, then the solution space of (1.1) is $H^1_0(\Omega)$ (and it is not possible to obtain a finite dimensional approximation subspace of $H^1_0(\Omega)$ with arbitrary accuracy in $H^1$-norm). However, if $g$ spans the unit ball of $L^2(\Omega)$, then the solution space of (1.1) shrinks to a compact subset of $H^1_0(\Omega)$ that can be approximated to an arbitrary accuracy in $H^1$-norm by finite-dimensional spaces [10] (observe that if $a = I_d$, then the solution space is a closed bounded subset of $H^2 \cap H^1_0(\Omega)$, which is known to be compactly embedded into $H^1_0(\Omega)$). The identification of localized bases spanning accurate approximation spaces relies on a transfer property obtained in [10]. For the sake of completeness, we will give a short reminder of that property in Section 2. In Section 3, we will construct localized approximation bases with rigorous error estimates (under no further assumptions on $a$ than those given above).
In Sub-section 3.4, we will also address the high-contrast scenario, in which $\lambda_{\max}(a)$ is allowed to be large. In Sections 4 and 5, we will show that the approximation spaces obtained by solving localized elliptic PDEs remain accurate for parabolic and hyperbolic time-dependent problems. We refer to Section 6 for numerical experiments. We refer to Section B of the Appendix for further discussion and a proof of the strong compactness of the solution space when the range of $g$ is a closed bounded subset of $H^{-\nu}(\Omega)$ with $\nu < 1$ (this notion of strong compactness constitutes a simple but fundamental link between classical homogenization, numerical homogenization and reduced order modeling). We call $\|\psi\|_{a\text{-flux}}$ the flux-norm of $\psi$. The following proposition shows that the flux-norm is equivalent to the energy norm if $\lambda_{\min}(a) > 0$ and $\lambda_{\max}(a) < \infty$ (2.2). Motivations behind the flux-norm: There are three main motivations behind the introduction of the flux norm. • The flux-norm allows one to obtain approximation error estimates independent of both the minimum and maximum eigenvalues of $a$. In fact, the flux-norm of the solution of (1.1) is independent of $a$ altogether, since $\|u\|_{a\text{-flux}} = \|\nabla\Delta^{-1}g\|_{(L^2(\Omega))^d}$ (2.3). • The $(\cdot)_{\mathrm{pot}}$ in the $a$-flux-norm is explained by the fact that, in practice, we are interested in fluxes (of heat, stress, oil, pollutant) entering or exiting a given domain. Furthermore, for a vector field $\xi$, $\int_{\partial\Omega} \xi\cdot n\,ds = \int_\Omega \operatorname{div}(\xi_{\mathrm{pot}})\,dx$, which means that the flux entering or exiting is determined by the potential part of the vector field. Theorem 2.1. (Transfer property of the flux norm) [Theorem 2.1 of [10]] Let $V'$ and $V$ be finite-dimensional subspaces of $H^1_0(\Omega)$. For $f \in L^2(\Omega)$, let $u$ be the solution of (1.1) with conductivity $a$ and $u'$ be the solution of (1.1) with conductivity $a'$. If $V$ is obtained from $V'$ via the transfer condition (2.4), then the approximation errors in the respective flux-norms coincide (2.5). The usefulness of (2.5) can be illustrated by considering $a' = I$, so that $\operatorname{div}(a'\nabla) = \Delta$. Then $u' \in H^2$, and therefore $V'$ can be chosen as, e.g., the standard piecewise linear FEM space on a regular triangulation of $\Omega$ of resolution $h$, with nodal basis $\{\varphi_i\}$. The space $V$ is then defined by its basis $\{\theta_i\}$, where $\theta_i \in H^1_0(\Omega)$ solves $-\operatorname{div}(a\nabla\theta_i) = \Delta\varphi_i$ in $\Omega$ (2.6). Equation (2.5) shows that the approximation error estimate associated with the space $V$ and the problem with arbitrarily rough coefficients is (in $a$-flux norm) equal to the approximation error estimate associated with piecewise linear elements and the space $H^2(\Omega)$. More precisely, $\inf_{v \in V}\|u - v\|_{a\text{-flux}} \le C\,h\,\|g\|_{L^2(\Omega)}$ (2.7), where $C$ does not depend on $a$. We refer to [22], [25] and [11] for recent results on finite element methods for high-contrast ($\lambda_{\max}(a)/\lambda_{\min}(a) \gg 1$) but non-degenerate ($\lambda_{\min}(a) = O(1)$) media under specific assumptions on the morphology of the (high-contrast) inclusions (in [22], the mesh has to be adapted to the morphology of the inclusions). Observe that the proposed method remains accurate if the medium is both of high contrast and degenerate ($\lambda_{\min}(a) \ll 1$), without any further limitations on $a$, at the cost of solving the PDEs (2.6) over the whole domain $\Omega$. Remark 2.1. We refer to [10] for the optimal constant $C$ in (2.7). This question of optimal approximation with respect to a linear finite dimensional space is related to the Kolmogorov n-width [54,44], which measures how accurately a given set of functions can be approximated by linear spaces of dimension $n$ in a given norm. A surprising result of the theory of n-widths is the non-uniqueness of the space realizing the optimal approximation [54].
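The display defining the flux norm did not survive extraction above. Based on the surrounding statements, in particular (2.3) and the remark on potential parts, a plausible reconstruction (to be checked against [10]) reads:

$$\|\psi\|_{a\text{-flux}} := \big\|(a\nabla\psi)_{\mathrm{pot}}\big\|_{(L^2(\Omega))^d}, \qquad \psi \in H^1_0(\Omega),$$

where $\xi = \xi_{\mathrm{pot}} + \xi_{\mathrm{curl}}$ denotes the orthogonal (Weyl-Helmholtz) decomposition of a square-integrable vector field into a potential (gradient) part and a divergence-free part. With this definition, $-\operatorname{div}(a\nabla u) = g$ gives $(a\nabla u)_{\mathrm{pot}} = -\nabla\Delta^{-1}g$, which is exactly the identity used in (2.3) and in the compactness proof of Section B.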
Observe also that, as another consequence of the transfer property (2.5), an $h^{k+1}$ rate of convergence can be achieved in (2.7) by replacing the $\varphi_i$ with higher-order basis functions in (2.6), and $\|g\|_{L^2}$ with $\|g\|_{H^k}$ in (2.7). Similarly, an exponential rate of convergence can be achieved if the source terms $g$ are analytic. This is the reason behind the near-exponential rate of convergence observed in [6] for harmonic functions (i.e., with zero source terms, and particular "buffer" solutions computed near the boundary) and bounded (non-high) contrast media.
3 Localization of the transfer property.
The elliptic PDEs (2.6) have to be solved on the whole domain $\Omega$. Is it possible to localize the computation of the basis elements $\theta_i$ to a neighborhood of the support of the elements $\varphi_i$? Observe that the support of each $\varphi_i$ is contained in a ball $B(x_i, Ch)$ of center $x_i$ (the node of the coarse mesh associated with $\varphi_i$) and radius $Ch$. Let $0 < \alpha \le 1$. Solving the PDEs (2.6) on sub-domains of $\Omega$ (containing the support of $\varphi_i$) may, a priori, increase the error estimate in the right hand side of (2.5). This increase can, in fact, be linked to the decay of the Green's function of the operator $-\operatorname{div}(a\nabla)$: the slower the decay, the larger the degradation of those approximation error estimates. Inspired by the strategy used in [35] for controlling cell resonance errors in the computation of the effective conductivity in periodic or stochastic homogenization (see also [36,53,63]), we will replace the operator $-\operatorname{div}(a\nabla)$ by the operator $\frac{1}{T} - \operatorname{div}(a\nabla)$ in the left hand side of (2.6) in order to artificially introduce an exponential decay in the Green's function. A fine tuning of $T$ is required because, although a decrease in $T$ improves the decay of the Green's function, it also deteriorates the accuracy of the transfer property. In order to limit this deterioration, we will transfer a vector space with a higher approximation order than the one associated with piecewise linear elements. Let us now give the main result. Localized basis functions. Let $h \in (0, 1)$. Let $X_h$ be an approximation sub-vector space spanned by basis elements $\varphi_i$, where the $x_i$ are the nodes of a regular triangulation of $\Omega$ of resolution $h$, such that: • $X_h$ satisfies the approximation properties (3.1) and (3.2); • for all $i$, the bound (3.3) holds; • for all coefficients $c_i$, the stability bound (3.4) holds. Remark 3.1. Examples of such spaces can be found in [17] and constructed using piecewise quadratic polynomials. From the first bullet point it follows that $h$ can be thought of as the diameter of the support of the elements $\varphi_i$. The largest parameter $h^d/C$ satisfying (3.4) is the minimal eigenvalue of the stiffness matrix $\big(\int_\Omega (\nabla\varphi_i)^T \nabla\varphi_j\big)_{1\le i,j\le N}$, and Condition (3.4) is obtained from the regularity of the tessellation of $\Omega$. In fact, the proof of Proposition 3.2 shows that Condition (3.4) can be relaxed to the assumption that there exists a constant $d_\varphi > 0$, independent of $h$, such that the corresponding inequality holds for all coefficients $c_i$. Throughout this paper, we will write $C$ for any constant that does not depend on $h$ (but may depend on $d$, $\Omega$, and the essential supremum and infimum of the maximum and minimum eigenvalues of $a$ over $\Omega$). Let $\alpha \in (0, 1)$ and $C_1 > 0$. For each basis element $\varphi_i$ of $X_h$, let $\psi_i$ be the solution of the localized problem (3.6), and let $V_h$ (3.7) be the linear space spanned by the elements $\psi_i$. Theorem 3.1. For $g \in L^2(\Omega)$, let $u$ be the solution of (1.1) in $H^1_0(\Omega)$ and $u_h$ the solution of (1.1) in $V_h$. There exists $C_0 > 0$ such that for $C_1 \ge C_0$ we have $\|u - u_h\|_{H^1_0(\Omega)} \le C\,(h + h^{2-2\alpha})\,\|g\|_{L^2(\Omega)}$ (3.8), where the constants $C$ and $C_0$ depend on $a$, $d$, $\Omega$ but not on $h$. Remark 3.2.
Theorem 3.1 shows that the convergence rate of the approximation error remains optimal (i.e., proportional to $h$) after localization if $0 < \alpha \le 1/2$, and decays to 0 as $h^{2-2\alpha}$ for $\frac{1}{2} \le \alpha < 1$. In particular, choosing localized domains with radii $O(\sqrt{h}\,\ln\frac{1}{h})$ is sufficient to obtain the optimal convergence rate $O(h)$. Observe that the choice of the constant $\alpha$ in equation (3.6) is arbitrary. Remark 3.3. According to Theorem 3.1, the constant $C_1$ in (3.6) needs to be chosen larger than $C_0$ to achieve the convergence rate $h + h^{2-2\alpha}$. The constant $C_0$ depends on $\alpha$, $d$, $\lambda_{\min}(a)$ and $\lambda_{\max}(a)$. The constant $C$ in the right hand side of (3.8) also depends on $\alpha$, $d$, $\lambda_{\min}(a)$ and $\lambda_{\max}(a)$. It is possible to give explicit values for $C_0$ and $C$ by tracking constants in the proof (in particular, as stated in Subsection 3.4, the dependence on $\lambda_{\max}(a)$ can be removed if the elements $\psi_i$ are computed on sub-domains with added buffer zones around high-conductivity inclusions). Remark 3.4. If one uses piecewise linear basis elements instead of the elements $\varphi_i$ (i.e., in the absence of property (3.2)), then the estimate in the right hand side of (3.8) deteriorates to $h^{1-2\alpha}$. The proof of this remark is similar to that of Theorem 3.1; the main modification lies in replacing $h^2/T$ by $h/T$ in equations (3.10) and (3.16). Remark 3.5. One could use piecewise linear basis elements instead of the elements $\varphi_i$, and also remove the term $h^{-2\alpha}\psi_i$ from the transfer property (3.6). In this situation, we numerically observe a rate of convergence of $h$ for periodic, stochastic and low-contrast media after localization of (3.6) to balls of radii $O(h)$. In these particular situations (characterized by short-range correlations in $a$), the term $h^{-2\alpha}\psi_i$ should be avoided to obtain the optimal convergence rate $h$ after localization to sub-domains of size $O(h)$. In that sense, the estimate in the right hand side of (3.8) corresponds to a worst-case scenario with respect to the medium $a$ (characterized by long-range correlations), requiring the introduction of the term $h^{-1}\psi_i$ and a localization to sub-domains of size $O(\sqrt{h}\,\ln\frac{1}{h})$ for the optimal convergence rate $h$. Remark 3.6. For the elliptic problem, computational gains result from localization (the elements $\psi_i$ are computed on sub-domains $\Omega_i$ of $\Omega$), parallelization (the elements $\psi_i$ can be computed independently of each other), and the fact that the same basis can be used for different right-hand sides $g$ in (1.1). Computational gains are even more significant for time-dependent problems because, once an accurate basis has been determined for the elliptic problem, the same basis can be used for the associated (parabolic and hyperbolic) time-dependent problems with the same accuracy (we refer to Sections 4 and 5). For the wave equation with rough bulk modulus and density coefficients, the proposed method (based on pre-computing basis elements as solutions of localized elliptic PDEs) remains accurate, provided that high frequencies are not strongly excited. On Localization. We refer to [22], [25] and [6] for recent localization results for divergence-form elliptic PDEs. The strategy of [22] is to construct triangulations and finite element bases that are adapted to the shape of high-conductivity inclusions via coefficient-dependent boundary conditions for the subgrid problems (assuming $a$ to be piecewise constant and the number of inclusions bounded).
The strategy of [25] is to solve local eigenvalue problems, observing that only a few eigenvectors are sufficient to obtain a good pre-conditioner. Both [22] and [25] require specific assumptions on the morphology and number of inclusions. The idea behind these strategies is that if $a$ is piecewise constant and the number of inclusions bounded, then $u$ is locally $H^2$ away from the interfaces of the inclusions. The inclusions can then be taken care of by adapting the mesh and the boundary values of the localized problems, or by observing that those inclusions affect only a finite number of eigenvectors. The strategy of [6] is to construct Generalized Finite Elements by partitioning the computational domain into a collection of preselected subsets and computing optimal local bases (using the concept of n-widths [55]) for the approximation of harmonic functions. Local bases are constructed by solving local eigenvalue problems (corresponding to computing eigenvectors of $P^*P$, where $P$ is the restriction of $a$-harmonic functions from $\omega^*$ onto $\omega \subset \omega^*$, $P^*$ is the adjoint of $P$, and $\omega$ is a sub-domain of $\Omega$ surrounded by a larger sub-domain $\omega^*$). The method proposed in [6] achieves a near-exponential convergence rate (in the number of pre-computed basis functions) for harmonic functions. Non-zero right-hand sides ($g$) are then taken care of by solving (for each different $g$) particular solutions on preselected subsets with a constant Neumann boundary condition (determined according to the consistency condition). As explained in Remark 2.1, the near-exponential rate of convergence observed in [6] is explained by the fact that the source space considered in [6] is more regular than $L^2$ (since [6] requires the computation of particular (local) solutions for each right-hand side $g$ and each non-zero boundary condition, the basis obtained in [6] is in fact adapted to $a$-harmonic functions away from the boundary). The strategy proposed here can also be used to achieve exponential convergence for analytic source terms $g$ by employing higher-order basis functions $\varphi_i$ in (3.6). Furthermore, as shown in Sections 4, 5 and 3.4, the method proposed here allows for the numerical homogenization of time-dependent problems (because it does not require the computation of particular solutions for different source or boundary terms) and can be extended to high-contrast media. We also note that the basis functions $\psi_i$ are simpler and cheaper to compute (equation (3.6)) than the eigenvectors of $P^*P$ required by [6]; we refer to page 16 of [6] for a discussion of the cost of this added complexity. On Numerical Homogenization. By now, the field of numerical homogenization has become large enough that it is not possible to give an exhaustive review in this short paper. Therefore, we will restrict our attention to works directly related to ours. -The multi-scale finite element method [40,62,41] can be seen as a numerical generalization of the idea of oscillating test functions found in H-convergence. A convergence analysis for periodic media revealed a resonance error introduced by the microscopic boundary condition [40,41]; an over-sampling technique was proposed to reduce the resonance error [40]. -Harmonic coordinates play an important role in various homogenization approaches, both theoretical and numerical. These coordinates were introduced in [42] in the context of random homogenization.
Next, harmonic coordinates were used in one-dimensional and quasi-one-dimensional divergence-form elliptic problems [7,5], allowing for efficient finite-dimensional approximations. The connection of these coordinates with classical homogenization is made explicit in [2] in the context of multi-scale finite element methods. The idea of using particular solutions in numerical homogenization to approximate the solution space of (1.1) appears to have been first proposed in reservoir modeling in the 1980s [16], [61] (in which a global scale-up method was introduced based on generic flow solutions, i.e., flows calculated from generic boundary conditions). Its rigorous mathematical analysis was done only recently [49] and is based on the fact that solutions are in fact $H^2$-regular with respect to harmonic coordinates (recall that they are $H^1$-regular with respect to Euclidean coordinates). The main message here is that if the right-hand side of (1.1) is in $L^2$, then solutions can be approximated at small scales (in $H^1$-norm) by linear combinations of $d$ (linearly independent) particular solutions ($d$ being the dimension of the space). In that sense, harmonic coordinates are simply good candidates for $d$ linearly independent particular solutions. The idea of a global change of coordinates analogous to harmonic coordinates has been implemented numerically in order to up-scale porous media flows [27,26,16]. We refer, in particular, to the recent review article [16] for an overview of some main challenges in reservoir modeling and a description of global scale-up strategies based on generic flows. -In [24,29], the structure of the medium is numerically decomposed into a micro-scale and a macro-scale (meso-scale), and solutions of cell problems are computed on the micro-scale, providing local homogenized matrices that are transferred (up-scaled) to the macro-scale grid. This procedure allows one to obtain rigorous homogenization results with controlled error estimates for non-periodic media of the form $a(x, \frac{x}{\epsilon})$ (where $a(x, y)$ is assumed to be smooth in $x$ and periodic or ergodic with specific mixing properties in $y$). Moreover, it is shown that the numerical algorithms associated with HMM and MsFEM can be implemented for a class of coefficients that is much broader than $a(x, \frac{x}{\epsilon})$. We refer to [34] for convergence results on the Heterogeneous Multiscale Method in the framework of G- and Γ-convergence. -More recent work includes an adaptive projection-based method [48], which is consistent with homogenization when there is scale separation, leading to adaptive algorithms for solving problems with no clear scale separation; fast and sparse chaos approximations of elliptic problems with stochastic coefficients [60,37,23]; finite difference approximations of fully nonlinear, uniformly elliptic PDEs with Lipschitz continuous viscosity solutions [19]; and operator splitting methods [4,3]. -We refer to [13,12] (and references therein) for the most recent results on homogenization of scalar divergence-form elliptic operators with stochastic coefficients; here the stochastic coefficients $a(x/\varepsilon, \omega)$ are obtained from stochastic deformations (using random diffeomorphisms) of the periodic and stationary ergodic setting. For each basis element $\varphi_i$, let $\psi_{i,T}$ be the solution of the regularized problem (3.9). The following Proposition will allow us to control the impact of the introduction of the term $\frac{1}{T}$ in the transfer property. Observe that the domain of PDE (3.9) is still $\Omega$ (our next step will be to localize it to $\Omega_i \subset \Omega$).
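The displays for (3.9) and its localized version (3.20) are missing from the extraction. Given the description above (add the zeroth-order term $\frac{1}{T}$ to the left-hand side of (2.6), then restrict to $\Omega_i$ with Dirichlet conditions), a plausible reconstruction, to be checked against the original, is:

$$\frac{1}{T}\,\psi_{i,T} - \operatorname{div}\big(a\nabla\psi_{i,T}\big) = \Delta\varphi_i \ \text{ in } \Omega, \qquad \psi_{i,T} = 0 \ \text{ on } \partial\Omega \quad \text{(cf. (3.9))},$$

$$\frac{1}{T}\,\psi_{i,T,\Omega_i} - \operatorname{div}\big(a\nabla\psi_{i,T,\Omega_i}\big) = \Delta\varphi_i \ \text{ in } \Omega_i, \qquad \psi_{i,T,\Omega_i} = 0 \ \text{ on } \partial\Omega_i \quad \text{(cf. (3.20))},$$

with $T = h^{2\alpha}$ and $\Omega_i = B(x_i, C_1 h^\alpha \ln\frac{1}{h}) \cap \Omega$ as chosen in the proof of Theorem 3.1; the original right-hand side may carry an additional $\frac{1}{T}$-weighted term (compare the $h^{-2\alpha}\psi_i$ term discussed in Remark 3.5). The zeroth-order term $\frac{1}{T}$ makes the Green's function of the localized operator decay exponentially on the scale $\sqrt{T} = h^{\alpha}$, which is what permits truncation to $\Omega_i$ without destroying the accuracy of the transfer property.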
Define $a[v]$ to be the energy norm $a[v] := \int_\Omega (\nabla v)^T a\,\nabla v$. Multiplying (3.12) by $u - v$ and integrating by parts bounds the energy norm of $u - v$. Write $c_i = c_{i,1} + c_{i,2}$, and let $w_1$ and $w_2$ be the solutions of $\Delta w_1 = g - \sum_i c_{i,1}\Delta\varphi_i$ and $\Delta w_2 = \frac{u}{T} - \sum_i c_{i,2}\Delta\varphi_i$ with Dirichlet boundary conditions on $\partial\Omega$. The resulting terms are then controlled by integration by parts and the Cauchy-Schwarz inequality. Using (3.1), we can choose the $(c_{i,1})$ appropriately, and we conclude the proof of the approximation estimate (3.10) by observing that $\|u\|_{H^1_0(\Omega)} \le C\|g\|_{L^2(\Omega)}$. Let us now prove Equation (3.11). First, observe that Equation (3.4) and the triangle inequality give a first bound. Next, we use (3.15) and the Poincaré inequality, and we conclude by combining equations (3.18) and (3.19) with (3.17). We will now control the error induced by the localization of the elliptic problem (3.9). To this end, for each basis element $\varphi_i$, write $S_i$ for the intersection of the support of $\varphi_i$ with $\Omega$, and let $\Omega_i$ be a subset of $\Omega$ containing $S_i$ such that $\operatorname{dist}(S_i, \Omega\setminus\Omega_i) > 0$. Let also $\psi_{i,T,\Omega_i}$ be the solution of the localized problem (3.20). We refer to Section A of the Appendix for the proof of Proposition 3.2. Taking $\Omega_i := B(x_i, C_1 h^\alpha \ln\frac{1}{h}) \cap \Omega$ (we use the particular notation $C_1$ because our proof of accuracy requires that specific constant to be large enough, i.e., larger than a constant depending on the parameter $C$ appearing in the right hand side of (3.21) and the parameter $C$ describing the balls $B(x_i, Ch)$ containing the supports of the basis functions $(\varphi_i)_{1\le i\le N}$ introduced in Subsection 3.1) and $T = h^{2\alpha}$ in equation (3.21) of Proposition 3.2, we obtain the localization estimate (3.22) for $C_1$ large enough (but independent of $h$). Let $u$ be the solution of (1.1) in $H^1_0(\Omega)$. Using Proposition 3.1, we obtain that there exist coefficients $c_i$ realizing the approximation bound (3.24). Using the triangle inequality and then the Cauchy-Schwarz inequality, we arrive at (3.26). Combining (3.26) with (3.24), we obtain (3.27), and using (3.22) in (3.27) yields the desired estimate. Observe that it is the exponential decay in (3.21) that allows us to compensate for the large term on the right hand side of (3.27) via (3.22). This concludes the proof of Theorem 3.1. On localization with high contrast. The constant $C$ in the approximation error estimate (3.8) depends, a priori, on the contrast of $a$. Is it possible to localize the computation of bases for $V_h$ when the contrast of $a$ is high? The purpose of this subsection is to show that the answer is yes, provided that there is a buffer zone between the boundaries of the localization sub-domains and the supports of the elements $\varphi_i$ in which the contrast of $a$ remains bounded. More precisely, assume that $\Omega$ is the disjoint union of $\Omega_{\mathrm{bounded}}$ and $\Omega_{\mathrm{high}}$. Assume that (1.2) holds only on $\Omega_{\mathrm{bounded}}$, and that on $\Omega_{\mathrm{high}}$ the eigenvalues of $a$ are only bounded above by $\gamma$ (which may be large). For each $i$, define $b_i$ to be the largest number $r$ such that there exists a subset $\Omega'_i$ such that: the closure of $\Omega'_i$ contains the support of $\varphi_i$; $(\Omega'_i)^r$ is a subset of $\Omega_i$ (where $A^r$ is the set of points of $\Omega$ that are at distance at most $r$ from $A$); and $(\Omega'_i)^r \setminus \Omega'_i$ is a subset of $\Omega_{\mathrm{bounded}}$. If no such subset exists, we set $b_i := 0$. $b_i$ can be interpreted as the non-high-contrast buffer distance between the support of $\varphi_i$ and the boundary of $\Omega_i$. We refer to Figure 1 for illustrations of the buffer distance. Theorem 3.2. For $g \in L^2(\Omega)$, let $u$ be the solution of (1.1) in $H^1_0(\Omega)$ and $u_h$ the solution of (1.1) in $V_h$, with basis elements computed from the localized problems (3.30); then the error estimate (3.32) holds, where the constants $C$ and $C_0$ depend on $\lambda_{\min}(a)$, $\lambda_{\max}(a)$ (the bounds on $a$ in $\Omega_{\mathrm{bounded}}$), $d$, $\Omega$, but not on $h$ and $\gamma$ (the upper bound on $a$ on $\Omega_{\mathrm{high}}$). Remark 3.7.
Recall that the global basis computed in (2.6) remains accurate if the medium is both of high contrast ($\lambda_{\max}(a) \gg 1$) and degenerate ($\lambda_{\min}(a) \ll 1$). The basis computed in (3.30) preserves the former property (accuracy for high-contrast media) but loses the latter (accuracy in the degenerate case), since the constant $C$ in (3.32) depends on $\lambda_{\min}(a)$. Remark 3.8. Observe that local solves have to resolve the connected components of high-contrast structures. This is the price to pay for localization with high contrast in the most general case. Recall that in classical homogenization with high contrast the limit of the homogenized operator may be a non-local operator (we refer, for instance, to [21]). A similar phenomenon is observed here (distant points connected by high-conductivity channels are associated with a low resistance metric and a large coupling coefficient in the numerically homogenized stiffness matrix). The proof of Theorem 3.2 is similar to that of Theorem 3.1, but it requires a precise tracking of the constants involved. Because of the close similarity, we will not include the proof in this paper but only give its main lines. First, the proof of Proposition 3.1 remains unchanged, as the constants $C$ in (3.10) and (3.11) do not depend on the maximum eigenvalue of the conductivity $a$. Only the proof of Proposition 3.2 has to be adapted; the part of the proof below Proposition 3.2 remains unchanged. This requires an application of the elements of Lemmas A.2, A.3, A.4 and A.5 to the buffer sub-domains $(\Omega'_i)^r \setminus \Omega'_i$. The main point is to observe that the decay of the Green's function in $(\Omega'_i)^r \setminus \Omega'_i$ can be bounded independently of $\gamma$ (due to the maximum principle). Observe that the sub-domain $\Omega_i$ in (3.30) can be chosen to be the same as in (3.20) if its intersection with high-contrast inclusions is empty (i.e., if the maximum eigenvalue of $a$ over $\Omega_i$ remains bounded independently of $\gamma$); otherwise the choice of $\Omega_i$ in (3.30) has to be enlarged (compared to that associated with (3.20)) to contain the high-contrast inclusion (plus its buffer). The basis remains accurate for parabolic PDEs. The computational gain of the method proposed in this paper is particularly significant for time-dependent problems. One such problem is the parabolic equation associated with the operator $-\operatorname{div}(a\nabla)$. More precisely, consider the time-dependent partial differential equation $\partial_t u - \operatorname{div}(a\nabla u) = g$ on $\Omega_T := \Omega \times (0, T)$, with zero initial and boundary conditions (4.1). Let $V_h$ be the finite-dimensional approximation space defined in (3.7). Let $u_h$ be the finite element solution of (4.1), i.e., $u_h$ can be decomposed over the basis of $V_h$ and solves, for all $j$, the Galerkin formulation (4.3). The resulting error estimate is the content of Theorem 4.1. Proof. The proof is a generalization of the proof found in [50] (in which approximation spaces are constructed via harmonic coordinates). Let $A_T$ be the bilinear form on $L^2(0, T; H^1_0(\Omega))$ defined by (4.6), and observe that (4.8) holds for all $v \in L^2(0, T; H^1_0(\Omega))$. Using $\partial_t u_h$ in (4.3) and integrating, we obtain an energy bound; using Minkowski's inequality, we deduce a corresponding estimate. Similarly, $\|\partial_t u\|^2_{L^2(\Omega_T)} + a[u(\cdot, T), u(\cdot, T)] \le C\|g\|^2_{L^2(\Omega_T)}$ (4.11). Using the Cauchy-Schwarz and Minkowski inequalities in (4.8), we obtain (4.12). Take $v = R_h u$ to be the projection of $u$ onto $L^2(0, T; V_h)$ with respect to the bilinear form $A_T$.
Observing that $-\operatorname{div}(a\nabla u) = g - \partial_t u$ with $(g - \partial_t u) \in L^2(\Omega_T)$, we obtain the approximation estimate (4.13) from Theorem 3.1. Let us now show (using a standard duality argument) that (4.14) holds. Choose $v^*$ to be the solution of the dual linear problem posed for all $w \in L^2(0, T; H^1_0(\Omega))$; hence, by the Cauchy-Schwarz inequality and (4.13), we obtain (4.17). Using Theorem 3.1 again, we obtain (4.18), and combining (4.18) with (4.17) leads to (4.14). Combining (4.12) with $v = R_h u$, (4.14) and (4.13) concludes the proof of Theorem 4.1. Discretization in time. Let $(t_n)$ be a discretization of $[0, T]$ with time steps $|t_{n+1} - t_n| = \Delta t$. Write $Z^h_T$ for the subspace of $L^2(0, T; V_h)$ satisfying (4.20), and write $u_{h,\Delta t}$ for the solution in $Z^h_T$ of the implicit weak formulation (4.21), with $u_{h,\Delta t}(x, 0) \equiv 0$: for each $n$ and $\psi \in V_h$, (4.21) holds. Then we have the following theorem (Theorem 4.2), whose proof is similar to that of Theorem 1.6 of [50] and will not be given here. Observe that homogenization in space allows for a discretization in time with time steps $O(h + h^{2-2\alpha})$ without compromising the accuracy of the method. The basis remains accurate for hyperbolic PDEs. Consider the hyperbolic partial differential equation $\rho\,\partial_t^2 u - \operatorname{div}(a\nabla u) = g$ (5.1), where $a$, $\Omega$, $\Omega_T$ and $\partial\Omega_T$ are defined as in Section 4. In particular, $a$ is assumed to be only uniformly elliptic and bounded ($a_{i,j} \in L^\infty(\Omega)$). We will further assume that $\rho$ is uniformly bounded from below and above ($\rho \in L^\infty(\Omega)$ and $\operatorname{ess\,inf}\rho(x) \ge \rho_{\min} > 0$). It is straightforward to extend the results presented here to non-zero boundary conditions (provided that frequencies larger than $1/h$ remain weakly excited, because the wave equation preserves energy and homogenization schemes cannot recover energies put into high frequencies; see [51]). For the sake of conciseness, we will give those results with zero boundary conditions. PDE (5.1) corresponds to acoustic wave equations in a medium with density $\rho$ and bulk modulus $a^{-1}$. Let $V_h$ be the finite-dimensional approximation space defined in (3.7). Let $u_h$ be the finite element solution of (5.1), i.e., $u_h$ can be decomposed as in (5.2) and solves, for all $j$, the corresponding Galerkin formulation. Remark 5.1. We refer to [59] for an analysis of the sub-optimal rate of convergence associated with finite-difference simulation of wave propagation in discontinuous media (see also [18,56]). We refer to [51] for an alternative upscaling strategy based on harmonic coordinates. If the medium is locally ergodic with long-range correlations [8] and also characterized by scale separation, then we refer to HMM-based methods [28,1]. Homogenization-based methods require that frequencies larger than $1/h$ remain weakly excited. For high frequencies and smooth media (or away from local resonances, e.g., local, nearly resonant cavities), we refer to the sweeping pre-conditioner method [30,31]. Proof. Let $A_T$ be the bilinear form on $L^2(0, T; H^1_0(\Omega))$ defined in (4.6). For $\partial_t v \in L^2(0, T; V_h)$, taking $\partial_t v$ as a test function in (5.6) and integrating in time, we deduce an energy identity, where $(v, w)_{L^2(\rho,\Omega_T)} := \int_0^T \int_\Omega v\,w\,\rho\,dx\,dt$. Taking the time derivative of the hyperbolic equation for $u$, we obtain (5.8); integrating (5.8) against the test function $\partial_t^2 u$ and observing that $\partial_t^2 u(x, 0) = g(x, 0)$, we also obtain a second energy bound. (Table 1: Example 1 of Section 3 of [49] (trigonometric multi-scale, see also [45]) with $\alpha = 1/2$.)
Similarly, we obtain from (5.9) and Theorem 3.1 a corresponding approximation estimate. Furthermore, using the same duality argument as in the parabolic case, and then the Cauchy-Schwarz and Minkowski inequalities together with the above estimates in (5.7), we obtain (5.13); we conclude using Gronwall's lemma. In this example, $a$ is characterized by a fine and long-ranged high-conductivity channel (Figure 4). We choose $a(x) = 100$ if $x$ is in the channel, and $a(x)$ given by the percolation medium if $x$ is not in the channel (the conductivity of each site not in the channel is equal to $\gamma$ or $1/\gamma$ with probability $1/2$, with $\gamma = 4$). Figure 5 shows the $\log_2$ of the numerical error (in $L^2$ and $H^1$ norm) versus $\log_2(h)$. The three cases considered for the localization are: the localization prescribed by Sub-section 3.4 (with $\alpha < 1$), $h_0 = 3h$ with no buffer around the high-conductivity channel, and $h_0 = 3h$ with a buffer $b_i$ of size $3h$ around the high-conductivity channel. The first case shows that the method of Sub-section 3.4 converges as expected. The second case shows that, as expected, taking $\alpha = 1$ does not guarantee convergence. The third case shows that adding a buffer around the high-conductivity channel improves the numerical errors but is not sufficient to guarantee convergence (as expected, we also need $\alpha < 1$). The percolating background medium has been re-sampled for each case; the effect of this re-sampling can be seen for the largest value of $h$ (i.e., $\log_2(h) = -1$). Wave equation. We compute the solutions of (5.1) up to time 1 on the fine mesh and in the finite-dimensional approximation space $V_h$ defined in (3.7). The initial conditions are $u(x, 0) = 0$ and $u_t(x, 0) = 0$. The boundary condition is $u(x, t) = 0$ for $x \in \partial\Omega$. The density is uniformly equal to one, and we choose $g = \sin(\pi x)\sin(\pi y)$. Figure 6 shows the fine mesh solutions $u$ and $u_h$ at time one, for $a$ given by the trigonometric example (6.1), with $h = 0.125$, $h_0 = 3h$ and $T = h$. We refer to [52] for a list of movies on the numerical homogenization of the wave equation with and without high contrast and with and without buffers (extended buffers in the high-contrast case). A Proof of Proposition 3.2. The proof of Proposition 3.2 is a generalization of the proof of the control of the resonance error in periodic media given in [35]. First we need the following lemma, a Caccioppoli-type inequality, which is the cornerstone of the argument. Let $\zeta : D \to \mathbb{R}^+$ be a function of class $C^1$ such that $\zeta$ is identically null on an open neighborhood of the support of $f$. Then the stated bound holds, where $C$ only depends on the essential supremum and infimum of the maximum and minimum eigenvalues of $a$ over $D$. Proof. Multiplying (A.1) by $\zeta^2 v$ and integrating by parts gives the bound, where $C$ only depends on $d$ and the essential supremum and infimum of the maximum and minimum eigenvalues of $a$ over $D$. Proof (of the decay lemma, Lemma A.2). Extending $a$ to $\mathbb{R}^d$ and using the maximum principle, we conclude by using the exponential decay of the Green's function in $\mathbb{R}^d$ (we refer to Lemma 2 of [35]). Lemma A.3. Let $\psi_{i,T}$ be the solution of (3.9) and $\psi_{i,T,\Omega_i}$ the solution of (3.20); then the localization bound (A.7) holds. Proof. For $A \subset \Omega$, write $A^r$ for the set of points of $\Omega$ that are at distance at most $r$ from $A$. Let us now use Caccioppoli's inequality to bound $\int_{\Omega\setminus\Omega'_i} |\nabla\psi_{i,T}|^2$.
Using Lemma A.1 with $\zeta$ identically equal to one on $\Omega\setminus\Omega'_i$, zero on $(\Omega\setminus\Omega'_i)^r$ with $r := \operatorname{dist}(S_i, \Omega\setminus\Omega'_i)/3$, and $|\nabla\zeta| \le C/r$, we obtain a first bound (A.9). Next, a pointwise decay estimate holds for $x \in (\Omega\setminus\Omega'_i)^r$; hence (A.11) follows. Another use of Caccioppoli's inequality leads to (A.12). Combining (A.9) with (A.11) and (A.12), we obtain the desired bound, and we conclude the proof of (A.7) using Lemma A.2. Lemma A.4. Write $S$ for the intersection of the support of $\psi$ with $D$, and let $D_1$ be a sub-domain of $D$ such that $\operatorname{dist}(D_1, S) > 0$; then an interior estimate on $D_1$ holds, where $C$ does not depend on $D$, $D_1$, $S$. Using the Cauchy-Schwarz inequality, we obtain a first estimate. For $A \subset D$, write $A^r$ for the set of points of $D$ that are at distance at most $r$ from $A$. Let us now use Caccioppoli's inequality to bound $\int_{D_1} |\nabla w|^2$: using Lemma A.1 with $\zeta$ identically equal to one on $D_1$, zero on $D\setminus D_1^{r_1}$, and such that $|\nabla\zeta| \le C/r_1$, we obtain the stated bound. Lemma A.5. Let $\psi_{i,T}$ be the solution of (3.9) and $\psi_{i,T,\Omega_i}$ the solution of (3.20). Let $\Omega'_i$ be a sub-domain of $\Omega_i$ such that $\operatorname{dist}(\Omega\setminus\Omega_i, \Omega'_i) > 0$; then the stated bound holds. Proof. Lemma A.5 is a direct consequence of Lemma A.4. To this end, we choose $D := \Omega_i$, $v := \psi_{i,T} - \psi_{i,T,\Omega_i}$ and $D_1 := \Omega'_i$. We also choose $\psi := \eta\psi_{i,T}$, where $\eta : \Omega \to [0, 1]$ is $C^1$, equal to one on $\Omega\setminus\Omega_i$ and 0 on $(\Omega\setminus\Omega_i)^r$ with $r := \operatorname{dist}(\Omega\setminus\Omega_i, \Omega'_i)/3$ ($A^r$ being the set of points in $\Omega$ at distance at most $r$ from $A$), and $|\nabla\eta| \le C/r$. We obtain the conclusion from Lemma A.4, using (3.3) and $\|\psi\|_{H^1(\Omega)} \le \frac{C}{\operatorname{dist}(\Omega\setminus\Omega_i, \Omega'_i)}\|\nabla\varphi_i\|_{(L^2(\Omega))^d}$. B On the compactness of the solution space. Although the foundations of classical homogenization [9] were laid down based on assumptions of periodicity (or ergodicity) and scale separation, numerical homogenization, as described here, is independent of these concepts and relies solely on the strong compactness of the solution space (and the fact that a compact set can be covered with a finite number of balls of arbitrary sizes). Observe that an analogous notion of compactness supports the foundations of G- and H-convergence ([47], [57,32]). The main difference is that G- and H-convergence rely on pre-compactness and weak convergence of fluxes, whereas here we rely on compactness in the (strong) $H^1_0$-norm, i.e., the following theorem. Let $W$ be the range of $g$ in (1.1), and write $V$ for the associated solution space; then $V$ is a compact subset of $H^1_0(\Omega)$. Proof. We have $(a\nabla u)_{\mathrm{pot}} = -\nabla\Delta^{-1}g$. So, using the same notation as in (2.4), we get $(a\nabla V)_{\mathrm{pot}} = -\nabla\Delta^{-1}W$. Let $u_n$ be a sequence in $V$; then there exists a sequence $g_n$ in $W$ such that $-\operatorname{div}(a\nabla u_n) = g_n$. Using the fact that $-\nabla\Delta^{-1}W$ is a compact subset of $(L^2(\Omega))^d$ (we refer, for instance, to the Kondrachov embedding theorem), we get that there exists $g^* \in W$ such that $\|\nabla\Delta^{-1}g_n - \nabla\Delta^{-1}g^*\|_{L^2} \to 0$. Writing $u^*$ for the solution of $-\operatorname{div}(a\nabla u^*) = g^*$ and using $(a\nabla(u_n - u^*))_{\mathrm{pot}} = -\nabla\Delta^{-1}(g_n - g^*)$, we get that $\|(a\nabla(u_n - u^*))_{\mathrm{pot}}\|_{L^2} \to 0$. Using the equivalence between the flux norm and the $H^1_0$ norm, we deduce that $\|u_n - u^*\|_{H^1_0} \to 0$. This finishes the proof. This notion of compactness of the solution space constitutes a simple but fundamental link between classical homogenization, numerical homogenization and reduced order modeling (or reduced basis modeling [20,43]) (we also refer to the discussion in Section 6 of [10]). This notion is also what allows for atomistic-to-continuum up-scaling [64]: the basic idea is that if source (force) terms are integrable enough (for instance in $L^2$ instead of $H^{-1}$), then the solution space is no longer $H^1$ but a sub-space $V$ that is compactly embedded into $H^1$ and, hence, it can be approximated by a finite-dimensional space (in $H^1$-norm).
In other words, if these systems are "excited" by "regular" forces or source terms (think compact, low-dimensional), then the solution space can be approximated by a low-dimensional subspace of the whole space, and the name of the game becomes how to approximate this solution space (which can be done using local, time-independent solutions).
2011-08-04T15:23:46.000Z
2010-11-03T00:00:00.000
{ "year": 2011, "sha1": "736856774133943adaf27e311f5411cbf5e3c42a", "oa_license": null, "oa_url": "https://authors.library.caltech.edu/28984/1/Owhadi2011p16815Multiscale_Model_Sim.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "09d4db1ecf2fe0a64c81bc4dc12826f7f18047f2", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Physics", "Computer Science" ] }
24099306
pes2o/s2orc
v3-fos-license
Different Sources of Dignity-Related Distress in Women Receiving Chemotherapy for Breast Cancer Background: Identification of the different sources of dignity-related distress experienced by people nearing the end of life may help nurses to provide better care services. This study was conducted to determine sources of dignity-related distress from the perspective of women with breast cancer undergoing chemotherapy. Materials and Methods: In this cross-sectional study, the participants comprised 207 women with breast cancer undergoing chemotherapy in chemotherapy clinics in hospitals of Tehran, Iran. The Cronbach's coefficient alpha for the PDI was 0.76. Assessment of the validity of the PDI by confirmatory factor analysis showed that the comparative fit index of this instrument is 0.96, so it is appropriate for application in different settings. Data were analyzed by Stata version 13. Results: Patients were mostly concerned about the distress caused by disease symptoms (mean 2.4061, S.D. 0.96), followed, in order of highest scores, by existential distress (mean 1.8784, S.D. 0.75), peace of mind (mean 1.871, S.D. 0.77), dependence (mean 1.8647, S.D. 0.98), and social support (mean 1.4097, S.D. 0.99). Conclusion: Considering that the patients were mostly concerned about the distress caused by disease symptoms, followed by existential distress, peace of mind, dependency, and social support, it seems necessary to take further measures toward addressing these issues.
Introduction
Advances in treatment have improved survival (Kesson et al., 2012), and consequently patients deal with cancer and its complications and outcomes for a longer time (Darby et al., 2013). About 20%-40% of patients with cancer experience considerable emotional distress (Chochinov et al., 2012). Potential sources of distress include physical symptoms, sorrow at current and future losses, worry about attachment to others and about being a burden on others, and doubt about the future (Hall et al., 2014). The distress caused by a cancer diagnosis and its treatment is closely associated with functional, physical, and cognitive problems; furthermore, the side effects of chemotherapy may reduce patients' quality of life and performance (Smith et al., 2013). Chemotherapy is used in cancer patients to kill cancer cells, and although it increases the survival rate, it causes many physical, sexual, mental, and social side effects (Ewertz and Jensen, 2011). Chemotherapy kills all cells that have a high proliferation rate, including blood cells, epithelial cells of the gastrointestinal tract, and hair follicles, besides cancer cells. Therefore, complications such as infections, fatigue, hair loss, oral lesions, nausea, and diarrhea are observed in most patients undergoing chemotherapy (Beusterien et al., 2014). The correlation of dignity-related problems with lower quality of life and higher levels of depression has been reported in patients with cancer (Hall et al., 2014). Dignity implies a state in which people feel valued and respected. Types of dignity include innate (basic) dignity and social dignity: innate dignity means respect for the basic rights of people in various areas, and social dignity involves feeling worthy in relation to individual objectives and social situations (Albers et al., 2011). Nurses are an integral part of a high-quality care-providing system (Spilsbury et al., 2011); however,
studies have shown that medical and nursing personnel are not very aware of the importance of patients' privacy and dignity and perceive these concepts in different ways. Therefore, health service providers should recognize the aspects of and factors influencing patients' privacy and provide strategies for promoting and supporting dignity in clinical settings (Torabizadeh et al., 2012). Moreover, nurses are professionally obliged to acquire knowledge about the development, maintenance, and promotion of dignity in every patient, given underlying individual differences. Recognizing and emphasizing the influential factors helps nurses to maintain and promote patients' dignity and to provide healthcare while respecting their dignity (Manookian et al., 2014). If patients' dignity is respected, they feel comfortable, confident, and worthy, and can make the necessary decisions pertaining to their healthcare and treatment; otherwise, patients' therapeutic and caring outcomes may be affected, and they may be hospitalized for a longer time, besides feeling uncertain, humiliated, and embarrassed (Baillie et al., 2008). Undermining patients' dignity may affect their body, mind, mood, and spirituality, and make them stressed (Borhani et al., 2014). All the studies performed on the dignity of patients undergoing palliative care have emphasized the need for further studies identifying the different sources of dignity-related distress in this group of patients. This study aimed to determine the sources of dignity-related distress from the perspective of women with breast cancer undergoing chemotherapy in Iran. Design The participants in this cross-sectional study comprised 207 women with breast cancer attending chemotherapy clinics in three hospitals of Shahid Beheshti University of Medical Sciences in Tehran (Tehran is the capital of Iran and the most populous city in Iran and Western Asia). The hospitals were selected through purposive sampling, and the patients were selected through convenience sampling. With regard to the objectives, 3 hospitals were selected as the setting of the study. The selection of participants began in late December 2014 and lasted two months. At baseline, the researcher provided information on the objectives of the study and, after obtaining the participants' verbal consent, explained how to complete the demographics questionnaire and the Patient Dignity Inventory (PDI). The Patient Dignity Inventory is a recent, valid, and reliable instrument designed by Chochinov in 2008 for detecting dignity-related distress experienced by patients undergoing palliative care (Chochinov et al., 2008), and it was used in this study. Eligibility criteria The inclusion criteria were as follows: minimum age of 18 years and maximum age of 70 years, awareness of the breast cancer diagnosis, the ability to speak Persian, willingness to participate in the study, and stage 3 or 4 breast cancer (end-stage patients). Instruments Dignity-related distress was measured using the PDI, which includes 25 items covering the distress caused by disease symptoms, dependency, social support, existential distress, and peace of mind (Di Lorenzo et al., 2017). The demographics questionnaire collected information on age, marital status, occupation status, education level, satisfaction with family monthly income, place of residence, the time passed since diagnosis, and history of mastectomy.
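For readers unfamiliar with the instrument, the subscale scores reported below are simply per-dimension means of the 25 five-point PDI items. The short Python sketch below illustrates this computation; the item-to-subscale mapping shown is hypothetical and stands in for the published PDI item grouping, which is not reproduced in this paper.

```python
import numpy as np

# Hypothetical item-to-subscale mapping: the PDI groups its 25 five-point
# items into the five dimensions below, but the item numbers used here are
# illustrative only.
SUBSCALES = {
    "symptom_distress":     [1, 2, 3, 4, 5, 6],
    "existential_distress": [7, 8, 9, 10, 11, 12],
    "dependency":           [13, 14, 15],
    "peace_of_mind":        [16, 17, 18, 19, 20, 21, 22],
    "social_support":       [23, 24, 25],
}

def pdi_subscale_means(responses):
    """responses: dict item_number -> rating on the 1-5 scale."""
    return {name: float(np.mean([responses[i] for i in items]))
            for name, items in SUBSCALES.items()}

# One made-up patient record:
patient = {i: 2 for i in range(1, 26)}
patient.update({1: 4, 2: 3, 3: 4})   # heavier symptom burden
print(pdi_subscale_means(patient))
```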
The Cronbach's coefficient alpha for the PDI was 0.76. Assessment of the validity of the PDI by confirmatory factor analysis showed that the comparative fit index of this instrument is 0.96, so it is appropriate for application in different settings. Statistical Analysis The data were presented using descriptive statistics (mean, standard deviation (S.D.), and percentage). The Shapiro-Wilk test was used to assess the normality assumption, and a linear regression model was used to examine the relationship between the independent variables and the dimensions of the PDI. The data were analyzed using Stata software version 13 and EQS 6.1, with model fit indices. A p value of less than 0.05 was considered significant. Ethical consideration This project was approved by the Ethics Committee of Shahid Beheshti University of Medical Sciences, and informed verbal consent was obtained from all patients.
Results
The response rate was 100%. Regarding the personal characteristics of the participants, the mean age was 48.86 years (S.D. 10.74, 95% CI: 47.39-50.33 years). As shown in Table 1, 67.6% of participants were not very satisfied with their monthly income; 68.1% were married, 95.2% were city residents, 46.9% had a history of mastectomy, most patients (84.5%) were housewives, and 56.52% had been diagnosed with cancer less than 1 year earlier. The mean number of children was 2.23 (S.D. 1.80, 95% CI: 2.07-2.57). The total mean score of dignity was 1.94. Mean scores of the different dimensions of dignity are shown in Table 2. In the assessment of the relationship between the dimensions of human dignity and the demographic variables, age, marital status, education level, occupation status, income satisfaction, place of residency, time since diagnosis, and history of mastectomy were included in the linear regression model as potential confounders. All regression coefficients in Table 3 were adjusted for the other variables included in the model. The results of this analysis are presented in Table 3, and only significant relationships are reported. Based on this table, marital status had a significant relationship with all dimensions of patients' dignity. Age had a significant relationship with peace of mind and existential distress. The results showed a significant relationship between history of mastectomy and social support, existential distress, and dependency. There was a significant relationship between education level and social support, peace of mind, symptom distress, and dependency. Income satisfaction had a significant relationship with existential distress and symptom distress.
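The adjusted analysis behind Table 3 is one ordinary linear regression per PDI dimension with all demographic variables entered together. The paper's analysis was run in Stata; the Python/statsmodels sketch below is an illustrative equivalent only, and the column names of the assumed data file are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file and column names; the actual Stata variable names are
# not reported in the paper.
df = pd.read_csv("pdi_breast_cancer.csv")

covariates = ("age + C(marital_status) + C(education) + C(occupation) + "
              "C(income_satisfaction) + C(residence) + C(time_since_dx) + "
              "C(mastectomy)")

# One adjusted linear model per PDI dimension, mirroring Table 3: each
# coefficient is adjusted for all other covariates in the model.
for outcome in ["symptom_distress", "existential_distress",
                "peace_of_mind", "dependency", "social_support"]:
    fit = smf.ols(f"{outcome} ~ {covariates}", data=df).fit()
    print(outcome)
    print(fit.summary2().tables[1][["Coef.", "P>|t|"]])
```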
Discussion
Our findings show that the patients were mostly concerned about the distress caused by disease symptoms, with existential distress, peace of mind, dependency, and social support respectively gaining the next highest scores. The mean scores for the distress caused by disease symptoms, existential distress, peace of mind, dependency, and social support were 2.4061, 1.8784, 1.871, 1.8647, and 1.4097, respectively, out of a maximum score of 5. The results also showed that the most frequent dignity-related problem in women with breast cancer undergoing chemotherapy was the distress caused by disease symptoms, and that patients who had undergone mastectomy expressed higher levels of social-support and dependency distress than patients who had not undergone the surgery. In the study by Hall et al. (Hall et al., 2014) of 45 patients with cancer, distress caused by disease symptoms and inability to perform daily tasks were reported by over one third of the patients. These results are consistent with those of the present study and could be attributed to the pain, dyspnea, nausea, vomiting, and diarrhea caused by the disease and by treatment-related complications in patients with cancer. Hall et al. also reported that dignity-related problems correlated with lower quality of life and higher levels of depression in patients with cancer (Hall et al., 2014). In the study by Borhani et al. (Borhani et al., 2014), performed on 280 patients hospitalized in internal and surgical wards, the patients were likewise mostly concerned about the distress caused by disease symptoms (mean ± SD, 2.09 ± 0.92), followed by peace of mind (1.91 ± 0.97), dependency (1.89 ± 1.01), existential distress (1.88 ± 0.96), and social support (1.60 ± 0.89). According to Borhani et al. (Borhani et al., 2014), the distress caused by disease symptoms was the most frequent concern in patients hospitalized in both internal and surgical wards. As mentioned before, nausea, vomiting, and sleep disorders are common symptoms of distress in women with breast cancer undergoing chemotherapy (Yazdani, 2010). Experienced pain, concern about distressing symptoms, and loss of independence due to decreased performance are considered major threats to the feeling of individual dignity, and the experience of severe distressing symptoms may make patients think of death as the only option (Hall et al., 2014). It therefore seems necessary to pay special attention to reducing the distress caused by disease symptoms, especially in patients with cancer who undergo chemotherapy and experience severe symptoms, in order to maintain and promote their dignity. In this study, the total mean dignity score was 1.94; in the study by Borhani et al. (Borhani et al., 2014) of patients hospitalized in internal and surgical wards, it was 1.89. A more detailed examination of the PDI items and comparison of the mean scores of the two groups suggests that the difference can be attributed to the fact that a diagnosis of cancer may cause greater depression and anxiety in the patients of the present study than in patients hospitalized in internal and surgical wards. Indeed, depression and anxiety are the most frequent mental disorders in patients with cancer (Carlson et al., 2013) and considerably reduce patients' dignity.
Chochinov et al. (2011) conducted an analysis of 326 patients receiving palliative end-of-life care and reported that the patients did not bear considerable stress in relation to the different dimensions of dignity (Chochinov et al., 2011). Although most patients in that study (98%) had cancer, only 8.9% had breast cancer. The difference between the Chochinov et al. study and the present study in terms of dignity-related distress might be due to the involvement of different organs and different stages of disease, which directly influenced the level of dignity in the patients studied by Chochinov et al. (Chochinov et al., 2011). We examined the effect of the type of hospital on patients' dignity and found that only existential distress significantly correlated with the type of hospital. The patients studied in Hospital number 2 scored lower on existential distress, indicating a more favorable condition for this dimension of dignity. Moreover, the comparison of the total dignity score of patients undergoing chemotherapy in Hospital number 2 with that of patients in the other hospitals revealed a stronger feeling of being dignified in those patients, which might be due to the free medications provided in that hospital and the presence of a charity center and free consulting services there. According to the participants, the good behavior of most personnel working in Hospital number 2 significantly increased their satisfaction and reduced their distress. In fact, the hospital atmosphere should provide the physical structure for the promotion of human dignity, and every staff member should promote patient dignity through their own behavior toward patients and be aware of the effect of their behavior in every encounter (Tadd et al., 2011; Sharifi et al., 2016). Although the hospital atmosphere and personnel's behavior are factors influencing patients' dignity (Baillie et al., 2008), the present study, as mentioned, did not show any significant correlation between the total dignity score and type of hospital. Borhani et al. (Borhani et al., 2014), however, revealed a significant correlation between type of hospital and the total dignity score, the distress caused by disease symptoms, peace of mind, and social support, which might be due to the diverse physical and geographical settings of the studied hospitals, the larger number of studied centers, and the better psychological conditions in some of those hospitals. In Lam's study (2007), conducted on 50 patients receiving palliative care, two thirds of the patients studied in the hospital reported that their feeling of dignity had been endangered (Lam, 2007). In the present study, the existential distress score of educated women was higher than that of other women, although no significant difference was found between educational levels in the total dignity score. Considering the PDI items, the higher existential distress score in educated women might reflect the fact that educated patients expect themselves to act more efficiently in life given the knowledge they have acquired, and as the disease reduces their efficiency and power, they experience more stressful situations. Chochinov et al. (2009) conducted a study of 253 patients receiving palliative care and reported that educated patients with a sexual partner suffered specific problems, especially in the existential distress dimension.
This result is consistent with that of the present study (Chochinov et al., 2009). In this study, the patients who were more satisfied with their income reported less distress in three dimensions: the distress caused by disease symptoms, existential distress, and social support. In fact, economic status and social welfare are important social aspects of disease (Zavras et al., 2013). Although the patients were at different stages of treatment during the study, and many patient expenses had been reduced by the launch of the health system development plan, the patients' inability to pay for their required medications, especially in hospitals not supported by charity institutes, increased their mental and physical stress. For instance, patients' inability to pay for anti-nausea drugs made them experience physical symptoms both in the chemotherapy ward and at home; as mentioned before, this problem endangered patients' feeling of worth and made them deem themselves inefficient and disabled in their roles as mothers and wives, in addition to the symptoms of distress they experienced. Furthermore, the physical weakness caused by the disease, its treatment, and its complications, together with patients' urgent need for others' support, made the patients feel like a burden. Such a situation made the patients concerned about changes in others' attitudes toward them and fostered a sense of meaninglessness. Moreover, patients with poor financial status may receive only a low level of social support. Borhani et al. also found a significant correlation between the level of satisfaction with family income and the distress caused by disease symptoms, dependency, and existential distress, which conforms to the results of this study. Reduced physical health is both a cause and an effect of disease, poverty, and lifestyle; patients become poorer and poorer because they lose their jobs and income (Tadd et al., 2011). The higher existential distress in such patients may reflect the fact that poverty is accompanied by distress. Feelings of fear, insecurity, dependency, depression, anxiety, shame, disappointment, loneliness, and disability are hard-to-measure states experienced by low-income patients. This group has difficulty obtaining food, accessing health services, and working, which may affect specific dimensions of human dignity (Borhani et al., 2014). The treatment of cancer is very costly and may threaten patients' financial status because it requires frequent hospitalizations, laboratory and advanced diagnostic tests, chemotherapy, and expensive drugs. In this regard, greater cooperation of charity centers in oncology wards and greater government effort to cover the expenses of patients with cancer seem effective in reducing dignity-related distress in these patients. According to the results, there was no significant difference among occupations, whereas peace of mind and social support differed significantly across educational levels. Borhani et al. did not report any significant correlation of occupational status or educational level with the dimensions of dignity. Considering the items of the inventory, the difference might be due to the fact that educated patients expect themselves to act more efficiently in life given the knowledge they have acquired, and as the disease reduces their efficiency and power, they experience more stressful situations.
In this study, significant correlations were found between marital status and both peace of mind and social support. However, Borhani et al. did not find any significant correlation between marital status and the dignity score, and Hall et al. likewise reported no significant correlation between patients' level of dignity and marital status. Although the dignity score of married women differed only partially from that of single women, it should be noted that married patients with cancer find their roles as mothers and wives endangered and experience more stress than single patients because of the physical and mental problems (Mehdipour-Rabori et al., 2016) related to the disease and its treatment, especially the complications caused by chemotherapy and mastectomy. Furthermore, patients undergoing chemotherapy experience many sexual problems (Moradi et al., 2013) that adversely affect their feeling of dignity.

Limitations of the study
Because the present study was performed only on patients attending the cancer clinics of selected hospitals affiliated with Shahid Beheshti University of Medical Sciences, further studies should be performed in other clinical settings before the results can be generalized. The limited duration of the study and problems such as some participants' belief that participation was useless are other limitations. Moreover, few relevant articles were found, as only one similar article had been published in Iran; it is hoped that future studies will further improve knowledge in this area.

In conclusion, the results showed a total patient dignity score of 1.94 out of 5; a lower score implies a more favorable dignity status. With regard to the specific objectives of the study, the highest scores were obtained, in descending order, for the distress caused by disease symptoms, existential distress, peace of mind, dependency, and social support. Among the demographic variables, significant correlations were found between dignity and marital status, education, satisfaction with family income, and mastectomy in patients with breast cancer undergoing chemotherapy in the studied hospitals.

Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and publication of this article.
Epibulbar Nodular Fasciitis

Purpose: To report a case of epibulbar nodular fasciitis in a 32-year-old female and provide context by reviewing the current literature. Results: Using excisional biopsy, the patient was successfully diagnosed with and treated for epibulbar nodular fasciitis. Upon follow-up, there has been no recurrence, consistent with the typical course of nodular fasciitis. Conclusions: Epibulbar nodular fasciitis is a rare process that can be successfully treated by surgical resection. While two cases of trauma-associated epibulbar nodular fasciitis have been reported in the literature, our case had no such history. The etiology of nodular fasciitis remains unclear.

Introduction
Nodular fasciitis is a benign soft-tissue tumor resulting from fibroblast proliferation. Although benign, the tumor can be mistaken for a malignancy, so proper identification is essential to direct appropriate treatment. A histopathologic specimen is required for definitive diagnosis. Although trauma has been suggested as a possible cause, there is no clear etiology. Nodular fasciitis occurs in a variety of anatomical locations in both adult and pediatric populations, without gender predilection. Ocular involvement is rare but documented, especially in the orbit and adnexa [1-4]. Epibulbar nodular fasciitis is extremely rare, with seven reported cases prior to ours [1, 5-9]. We present the case of a 32-year-old female with epibulbar nodular fasciitis.

Case Report
A 32-year-old female was referred to a cornea clinic with a bump on the left eye. After noticing the bump approximately 2 months earlier, she saw an ophthalmologist who referred her for specialty care. The patient denied a history of pain or discharge but did complain of a subjective decrease in the visual acuity of the left eye. She was previously healthy, with unremarkable general medical and ocular histories, and denied any history of trauma or foreign body. Over the 2 months, the patient had noticed the lesion changing shape and color, describing it as initially elevated and white but progressively becoming flatter and yellow, with increased vascularity. On examination, visual acuities were 20/20 (right eye) and 20/40 (left eye). A probable explanation for the decreased visual acuity of the left eye is that the lesion disrupted proper tear film distribution, resulting in blurred vision. Refraction was not performed at this visit but was performed postoperatively. Slit-lamp examination of the right eye was within normal limits. Examination of the left eye revealed a 7 × 5.3 mm oval, yellow, elevated conjunctival lesion inferior to the cornea and extending up to the limbus; the left cornea was clear. The patient was started on a steroid taper (prednisolone 1% drops four times per day for 1 week, then three times per day for 1 week) and instructed to follow up in 2 weeks. At the follow-up, no improvement of the lesion was noted, and the patient was still complaining of decreased visual acuity. Examination of the left eye again revealed an oval, yellow, elevated conjunctival lesion, now measuring 5.8 × 5.8 mm, centered at the inferior limbus at the 5:30 o'clock position (fig. 1a, b). The overlying conjunctiva was injected, with dilated vessels. The mass was nontender and nonmobile. The remainder of the left-eye examination and the right-eye examination were within normal limits.
Prednisolone was stopped, and the patient was scheduled for excisional biopsy. She was taken to the operating room for removal of the lesion, which required excisional biopsy with anterior sclerectomy. During surgery, the lesion was pinkish-yellow in color and quite hard in consistency. It separated easily from the overlying conjunctiva but was very adherent to the sclera. There was no clear plane distinguishing the lesion from normal sclera, so the lesion was debulked while keeping the natural contour of the sclera intact. The conjunctiva was closed primarily over the scleral defect using interrupted 8-0 Vicryl sutures. The patient tolerated the procedure well, and the specimen was sent to the pathology laboratory for evaluation. Postoperatively, the patient was placed on topical moxifloxacin for 1 week and a topical steroid taper for 3 weeks. She was followed during the postoperative period without complication. At approximately 6 months postoperatively, her visual acuity using a Snellen chart was OD 20/25 and OS 20/30. Her refraction was OD: -1.00 +0.50 × 039, 20/20 and OS: -1.75 +1.00 × 116, 20/20. She had no signs of recurrence.

Pathologic Evaluation
The specimen was examined under light microscopy with H&E staining (fig. 2). Normal topography of the epithelium, substantia propria, and superficial sclera was identified. Deep to the substantia propria, irregular fibrous connective tissue was identified, with proliferating spindle fibroblasts within the dense connective tissue. The fibroblasts appeared non-atypical, and no sign of dysplasia or malignancy was noted. Normal sclera was identified, and the lesion did not extend into the sclera. Scant inflammatory cells were noted on high-power fields, and no sign of fluid extravasation was seen. Interestingly, no myxoid tissue was noted in this specimen, which would be typical of nodular fasciitis. Given the clinical history, nodular fasciitis is the diagnosis most consistent with the histological features.

Discussion
While the exact pathophysiologic mechanism of nodular fasciitis is unknown, some have hypothesized that it is a reactive process of the connective tissue when exposed to repeated trauma or inflammation [10]. Evidence to support this theory is questionable in both ocular and nonocular nodular fasciitis. Indeed, several recent case series of pediatric nodular fasciitis have shown minimal to no history of trauma [11-13]: of these 23 reported cases, only 1 had a history of trauma and 1 had a history of localized infection. Of the 7 published cases of epibulbar nodular fasciitis, the most recent 2 have a plausible history of repetitive trauma [8, 9]. In 2005, Stone and Chodosh [8] described a case of epibulbar nodular fasciitis in a patient with floppy eyelid secondary to vigorous rubbing. In 2015, McClintic et al. [9] reported a case of corneal nodular fasciitis in a patient who also had a history of floppy eyelid syndrome, postulating chronic irritation, a form of microtrauma, as a possible predisposing factor. Interestingly, our patient denied any history of 'macro' trauma, such as vigorous rubbing, and has no discernible medical condition, such as floppy eyelid syndrome, to predispose her to such microtrauma. Despite the absence of an obvious source, it remains plausible that the patient experienced subclinical microtrauma. Nodular fasciitis is a benign nodular reactive proliferation of fibroblasts and vascular tissue, usually arising within the fascia.
In the eye, it usually manifests in the orbit, eyelid, or episclera. Grossly, the lesions tend to be round or oval and between 0.5 and 1.5 cm in diameter, and the specimen is not encapsulated. Specimens tend to be sparsely cellular, with scant infiltration of lymphocytes and mononuclear cells, and are composed mostly of non-atypical spindle fibroblasts, myxoid ground substance, and vasculature. Mitotic figures may be identified, and it is important not to confuse this finding with a sarcoma. The clinical history typically describes an isolated, tender, rapidly growing subcutaneous nodule presenting within 1-3 weeks. Our case was initially concerning for episcleritis, a common differential diagnosis. Episcleritis is a benign recurring condition that often presents with hyperemia, edema, and infiltration, all limited to the episcleral tissue. Episcleritis can be classified as simple or nodular. Simple episcleritis is characterized by diffuse edema, redness caused by engorged episcleral vessels, and small gray deposits. Histologically, there is vascular dilatation and perivascular lymphocytic infiltration. Proteinaceous fluid extravasation in the connective tissue appears as a uniformly staining patch under simple H&E staining; these fluid collections are not to be confused with the myxoid ground substance found in nodular fasciitis. The engorged vessels retain their normal radial position. Nodular episcleritis is characterized by localized redness and edema and is associated with systemic diseases such as rheumatoid arthritis and infection. An intraepiscleral nodule can be observed overlying the sclera. Histologically, chronic nongranulomatous inflammation with lymphocytes, plasma cells, and edema can be observed in the episcleral tissue; chronic granulomatous inflammation is rarely seen. The differential diagnosis also includes orbital lymphoma and idiopathic orbital inflammatory disease (formerly inflammatory pseudotumor) [14]. General considerations in differentiating nodular fasciitis from episcleritis are as follows: (1) episcleritis tends to be a more lymphocytic reaction, whereas nodular fasciitis tends to be a more fibrocytic reaction, and (2) the presence of a myxoid background points toward nodular fasciitis. Clinical history is important in making the diagnosis; for example, the presence of a systemic inflammatory illness increases the likelihood of nodular episcleritis. Differentiating between nodular fasciitis and episcleritis matters because the treatments differ. For nodular fasciitis, conservative steroid treatment may be attempted first to shrink the growth, but the standard of care is surgical excision; as in our case, patients do extremely well after excision. In contrast, episcleritis is often managed conservatively. As it is not typically sight-threatening and is often self-limited, symptomatic relief is the goal of therapy; for example, topical lubricants and oral NSAIDs can alleviate the discomfort. An admitted deficiency of the preoperative examination in this patient was the lack of a manifest refraction. Six months postoperatively, the patient's uncorrected visual acuity had improved from 20/40 to 20/30 and was 20/20 with myopic and astigmatic correction. It is likely that refractive error was responsible for at least part of the preoperative decrease in vision, and an uneven tear film distribution may also have contributed.
In summary, we presented a case of epibulbar nodular fasciitis in a 32-year-old female. Diagnosis and treatment were achieved via excision with pathological confirmation. Unlike the two previous reports of epibulbar nodular fasciitis, our case had no clear history of trauma. Admittedly, subclinical microtrauma could still explain the etiology of the mass in our patient; however, this case lends only limited support to a trauma/irritation-based theory of etiology. The pathophysiologic mechanism of nodular fasciitis is poorly understood, and further research is necessary.

Statement of Ethics
Patient care was provided under high ethical standards, with proper informed consent.

Disclosure Statement
Supported in part by an Unrestricted Grant from Research to Prevent Blindness, Inc., New York, N.Y., to the Department of Ophthalmology and Visual Sciences, University of Utah.

Fig. 2. a, b Pathology revealed a proliferation of plump and spindle fibroblasts, consistent with nodular fasciitis. a Dense fibrous tissue, representative of the samples. H&E, ×20. b Higher magnification reveals normal fibroblasts in a field of dense irregular fibrous tissue. H&E, ×40.
Long-term Clinical Outcomes in Synovitis, Acne, Pustulosis, Hyperostosis, and Osteitis Syndrome

Objective: To assess the outcome of empirical therapeutic interventions for synovitis, acne, pustulosis, hyperostosis, and osteitis (SAPHO) syndrome. Methods: The clinical features and treatment outcomes of a cohort of 21 patients diagnosed with SAPHO in Western Australia were reviewed retrospectively. Results: All 21 patients met published diagnostic criteria; 20 (95%) were Caucasian, and the median age was 47 years. The median follow-up was 6 years (range, 2 to 32 years). Three patients (14%) received no treatment; 18 (86%) required conventional synthetic disease-modifying antirheumatic drugs (DMARDs). Thirteen (62%) had an initial good response to methotrexate; 8 relapsed and progressed to biologic DMARDs (bDMARDs) over a period of 14 years. Of the 13 recipients of a tumor necrosis factor inhibitor, 11 (85%) continued treatment for a median of 4 years (range, 1 to 14 years), whereas none of the 3 recipients of interleukin 17/23 inhibitors continued treatment (median, 4 months). Higher Physician Global Assessment scores (better outcomes) were observed in bDMARD recipients (mean, 7.06±2.24 [SD]) compared with non-bDMARD recipients (mean, 5.63±2.50; P=.1672) after a median of 3 years of therapy. Conclusion: This study describes the broad range of clinical manifestations in SAPHO, variable courses over time, and inconsistent outcomes with diverse empirical therapies. Moderately good long-term treatment outcomes were observed in most recipients of tumor necrosis factor inhibitors. Poorer outcomes were observed with bisphosphonates and interleukin 17/23 axis inhibitors; however, low numbers preclude robust comparison. Suboptimal treatment may be associated with poorer clinical outcomes and greater skeletal damage. Trial Registration: Australian and New Zealand Clinical Trials Registry: ACTRN12619000445178.

Synovitis, acne, pustulosis, hyperostosis, and osteitis (SAPHO) syndrome is a rare immunoinflammatory disorder characterized by cutaneous and osteoarticular manifestations. The acronym SAPHO was coined by Chamot et al 1 in 1987. Diagnostic criteria were subsequently proposed and revised by Kahn and Khan 2 in 1994. The SAPHO syndrome has been estimated to affect fewer than 1 in 10,000 adults. It has been defined by Kahn and Khan as multifocal osteitis with or without skin symptoms; sterile acute or chronic joint inflammation with either palmoplantar pustulosis (PPP) or psoriasis, or with acne or hidradenitis; or sterile osteitis with any one of these skin conditions, any one of these sets of criteria being deemed sufficient for diagnosis. As the symptoms and signs of SAPHO syndrome are nonspecific and the osteoarticular and cutaneous manifestations are broad and not always present initially or simultaneously, SAPHO is mainly a clinical diagnosis of exclusion. Many physicians recognize the inclusion and exclusion features of SAPHO as described by Kahn and Khan, 2 but these were proposed more than 30 years ago and last revised in 1994; given significant advances in clinical and microbial science, and particularly in bone and joint imaging, a review of these criteria is long overdue. The precise etiopathologic process of SAPHO is unknown. An autoinflammatory basis for the disorder is favored by some, but no specific genes have yet been implicated, and the possibility remains that it may be incited by an infectious agent.
Given the overlap with the full spectrum of manifestations of cutaneous psoriasis and of psoriatic and other spondyloarthropathies, including infection-triggered reactive arthritis, a postinfectious basis for SAPHO remains plausible. The clinical manifestations vary in frequency. Most common are osteitis and other bone lesions; chest wall pain, likely of mixed origin; and synovitis, mostly monoarticular or oligoarticular, which also has a predilection for the axial skeleton and joints, notably the medial clavicles, manubrium-sternum, mandible, vertebrae, and sacroiliac joints. Also common are diverse pustular skin lesions, including severe acne, solitary pustules, hidradenitis suppurativa, and PPP. 3 Responses, although partial and poorly sustained in many cases, to moderate- or high-dose prednisone/prednisolone as well as to other conventional and biologic immunosuppressive therapies further support an osteoarticular condition in which there may be discrete subsets, such as noncutaneous or minimal cutaneous disease. A spondyloarthropathy subset may also exist, more easily distinguishable on the basis of computed tomography and magnetic resonance imaging. Importantly, with immunosuppressive treatment it is unusual to observe any unequivocal exacerbation of the musculoskeletal features of SAPHO, as might be expected were it due to persistent infection alone. Notwithstanding the low frequency of HLA-B27 in all cohorts studied, the finding of sacroiliitis and vertebral inflammatory lesions in an appreciable subset adds further weight to the notion that SAPHO is a member of the spondyloarthropathy family of rheumatic diseases. Taken together, these observations, in conjunction with the usual absence of a typical quotidian fever, argue against an autoinflammatory syndrome. There has been considerable interest in an infective cause, in part because the pustules and osteitis lesions sometimes contain potentially relevant organisms, such as Cutibacterium acnes (formerly Propionibacterium acnes), and in part because there are similarities between the osteitis lesions and those found in osteomyelitis. A mostly juvenile equivalent of the disorder is referred to as chronic recurrent multifocal osteomyelitis. 4 By definition, the osteomyelitis in chronic recurrent multifocal osteomyelitis must be sterile, and likewise in adults with acne, hidradenitis suppurativa, or PPP, sterility of the osteitis lesions is obligatory. However, tissue biopsy and culture of relevant lesions may overlook infection, as there may be hitherto unrecognized infective agents or a failure to isolate indolent organisms. Microorganisms are not often isolated, despite thorough investigation for sepsis. In a meta-analysis, C. acnes was isolated from 42% of bone lesions. 5 In 1987, Trimble et al 6 observed that intra-articular injection of inactivated "P. acnes" in laboratory animals resulted in joint and bone erosions. Thus, an inciting and possibly perpetuating role for this or perhaps other microbes cannot be entirely excluded. The possibility also exists that persistent low-grade infection may perpetuate the reaction rather than trigger it and then play no further part in sustaining it. Germane to this idea are the observations concerning empirical antibiotic therapy, which has been found to be partially and temporarily effective in 6 independent studies, mostly small in scale. 5
Impressively, azithromycin was reported to improve symptoms and radiologic findings in a short-term study, with relapse after treatment was stopped. 7 The relative rarity of SAPHO has hindered the implementation of therapeutic trials and in turn restricted the development of clear guidelines for treatment. Many therapies have been described in the literature, including antibiotics, methotrexate (MTX), bisphosphonates, and biologic disease-modifying antirheumatic drugs (bDMARDs). To date, most treatment options have been tailored to the individual case. With the advent of tumor necrosis factor (TNF) inhibitors, a potentially useful additional therapeutic option has become available for the management of SAPHO, and it seems likely that other biologic therapies, including interleukin (IL) 17/23 inhibitors and Janus kinase inhibitors, will be tested empirically as they too become increasingly accessible. Two large studies have catalogued responses to bDMARDs. 8,9 Should the concept of an autoinflammatory syndrome gain further traction, there may be impetus to further explore IL-1 antagonists, including longer half-life agents such as rilonacept and canakinumab. A small to intermediate rise in inflammatory markers is often seen in SAPHO. The C-reactive protein (CRP) concentration is usually elevated, but it does not often increase above 40 mg/L (to convert CRP values to nmol/L, multiply by 9.524). Likewise, the erythrocyte sedimentation rate is usually raised, but mostly to less than 60 mm/h. Accordingly, much higher values should heighten concern with respect to infection. Chronic recurrent multifocal osteomyelitis and nonbacterial osteomyelitis, which probably represent the same condition, are mostly encountered in children or adolescents. They are remarkably similar clinically, and chronic recurrent multifocal osteomyelitis and SAPHO may constitute different phenotypic expressions of the same fundamental pathologic process. The aim of this retrospective study was to describe the clinical and treatment outcomes as well as the natural history of SAPHO in a small cohort of patients observed intensively for up to 32 years. We also reviewed published reports concerning SAPHO and related conditions. This report describes SAPHO outcomes over a relatively long time and provides long-term follow-up and outcome data for both treated and untreated or minimally treated patients with SAPHO.

METHODS
Participants were identified from an audit of case records and by consultant recall at Fiona Stanley Hospital and Fremantle Hospital and Health Services Group, Royal Perth Hospital, and Sir Charles Gairdner Hospital during the period 1986 to 2018. All 4 hospital precincts serve metropolitan Perth and the state of Western Australia. Accordingly, the study is a retrospective, single geographic region cohort study. Information was collected concerning sex, age, ethnicity, time of diagnosis, and relevant clinical characteristics, including pertinent negatives such as the absence of psoriasis, inflammatory bowel disease, rheumatoid nodules, tophi, and signs of classic spondyloarthropathies. Laboratory data were also collected, including rheumatoid factor and cyclic citrullinated peptide antibody concentrations. Special note was made of microbiologic findings, positive or negative. The erythrocyte sedimentation rate and CRP concentration at baseline and, wherever possible, over time were collected, but only the CRP concentration is shown for brevity.
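As a side note on the inflammatory-marker figures quoted in the introduction, the following small helper, with function names of our own choosing, encodes the stated mg/L-to-nmol/L conversion factor and the informal thresholds (CRP above about 40 mg/L or ESR above about 60 mm/h) beyond which infection should be suspected. It is a didactic sketch, not a validated clinical rule.

```python
# Illustrative helper for the rules of thumb quoted above: CRP in SAPHO
# rarely exceeds ~40 mg/L and ESR rarely exceeds ~60 mm/h, so much higher
# values should raise concern for infection. Function names are ours.

CRP_MG_L_TO_NMOL_L = 9.524  # conversion factor given in the text

def crp_to_nmol_per_l(crp_mg_l: float) -> float:
    """Convert a CRP concentration from mg/L to nmol/L."""
    return crp_mg_l * CRP_MG_L_TO_NMOL_L

def infection_concern(crp_mg_l: float, esr_mm_h: float) -> bool:
    """Flag marker levels well above those typical of SAPHO."""
    return crp_mg_l > 40 or esr_mm_h > 60

print(crp_to_nmol_per_l(40))       # 380.96 nmol/L
print(infection_concern(25, 35))   # False: within the usual SAPHO range
print(infection_concern(120, 90))  # True: consider a sepsis work-up
```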
Details of exposure to the following therapeutic agents were collected: nonsteroidal anti-inflammatory drugs, conventional synthetic DMARDs, corticosteroids, antibiotics, bisphosphonates, and bDMARDs. None of our patients received targeted synthetic DMARDs. Available imaging data were reviewed. The diagnosis of SAPHO was based on the criteria proposed by Kahn and Khan. 2 Those who met these diagnostic criteria are listed in Table 1.

Ethics
This project was registered and approved as a quality assurance project (QA26150) by Fiona Stanley Hospital on March 19, 2018, and as such did not require formal Human Research Ethics Committee review.

Statistical Analyses
The statistical estimates are mostly descriptive (numerical tallies and medians). Raw data were entered into the GraphPad QuickCalcs online statistical calculator. Physician Global Assessment (PGA) scores were compared by unpaired t-test.

RESULTS
Twenty-one patients were identified, with a median follow-up of 6 years (range, 2 to 32 years). All 21 patients met the diagnostic criteria described by Kahn and Khan 2 (applied retrospectively). Age at diagnosis is shown in Figure 1; the median age at the time of diagnosis was 48 years (range, 25 to 84 years). All patients were Caucasian except 1, who was Asian. The CRP concentration was determined before and after initiation of therapy in 15 participants; patients were included in this analysis only if both baseline and follow-up CRP measurements were available. The pretreatment and post-treatment CRP values represent single assay results for standard CRP concentration, not high-sensitivity CRP. The CRP concentration was determined by enzyme-linked immunosorbent assay in diverse laboratories, as participants underwent pathology testing in community laboratories, mostly in the private health care sector; the upper limit of normal in these laboratories ranged between less than 5 mg/L and less than 10 mg/L. The CRP data are depicted in the plots shown for biologic and nonbiologic therapy in Figure 3. In almost all patients studied, there was a sustained fall in the CRP concentration, which accords with the clinical observation that most patients responded to therapy and achieved stable disease improvement or remission over time. Similar declines were noted irrespective of treatment, which is not surprising because all patients were treated incrementally, and often "to target," in an effort to achieve minimal disease activity. No clinically apparent progression in joint or spinal damage was observed in those treated with bDMARDs, despite more than a decade of follow-up in some patients. In contrast, substantial progression was clearly apparent in 1 untreated patient and in 2 other patients receiving neither a bisphosphonate nor a TNF inhibitor. Thus, there is a suggestion that no or minimal treatment may result in worse structural outcomes. Importantly, however, because the study was retrospective and imaging was not performed systematically, it is not possible to report the exact frequency of skeletal damage over time, either in those receiving no or minimal treatment or in any of the treatment subsets, including those receiving bDMARDs. The PGA scores were used at the most advanced time of follow-up possible to appraise outcomes. Better outcomes were observed in bDMARD recipients (mean, 7.06±2.24 [SD]) compared with non-bDMARD recipients (mean, 5.63±2.50; P=.1672) after a median of 3 years of therapy. A trend toward better outcomes in recipients of bDMARDs, especially TNF inhibitors, was thus observed; however, it was not statistically significant, possibly because of the relatively small number of patients studied.
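As a rough reconstruction of the unpaired t-test on PGA scores reported above, the sketch below uses only the published summary statistics. The group sizes are our own assumptions (they are not broken out in the text), so the computed p-value only approximates the reported P=.1672.

```python
# Unpaired (Student) t-test from summary statistics alone; the group
# sizes nobs1/nobs2 are assumptions made for illustration.
from scipy import stats

res = stats.ttest_ind_from_stats(
    mean1=7.06, std1=2.24, nobs1=13,  # bDMARD recipients (n assumed)
    mean2=5.63, std2=2.50, nobs2=8,   # non-bDMARD recipients (n assumed)
    equal_var=True,                   # classic unpaired t-test
)
print(f"t={res.statistic:.2f}, p={res.pvalue:.4f}")
```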
DISCUSSION
This retrospective case series illustrates the diversity of clinical manifestations in SAPHO and the inherent difficulties that beset diagnosis and management. Only one or a few features may be manifest at initial presentation, and it can sometimes be several or even many years before other well-recognized features of the syndrome emerge. Furthermore, it is rare for any patient to develop all the recognized features, even with long-term follow-up. The SAPHO syndrome is a multisystem immunoinflammatory disorder, and not all of its manifestations are familiar to internal medicine physicians, immunologists, dermatologists, or orthopedists, for example, all of whom are likely to encounter cases from time to time. It evolves over years, sometimes many years, thereby confounding and delaying diagnosis. This report illustrates the broad spectrum of disease presentations and the range of severity in SAPHO; it also illustrates evolution over time and catalogues responses to a variety of treatments. The patients were observed for a relatively long time (median follow-up, 6 years; range, 2 to 32 years), making extended observations of outcomes possible, including the extent to which responses to treatments were sustained. Disease progression with unequivocal skeletal damage was observed in 3 patients: 1 who received no treatment over a period of 26 years and 2 who received MTX alone (no bDMARD) over periods of 3 years and 22 years. Thus, disease progression with more skeletal damage may have been more frequent in those who did not receive a bisphosphonate or a TNF inhibitor; however, the small numbers and the absence of systematically collected imaging data preclude rigorous analysis and definitive conclusions. In our experience, moderate to severe SAPHO at the outset rarely went into spontaneous sustained remission. Patients with mild disease were responsive to minimal intervention and rarely progressed to warrant more aggressive therapy, whereas others were less responsive initially or relapsed. Notably, 13 patients had an initial good response to MTX; however, 8 of these 13 then relapsed and progressed to bDMARDs over time. Historically, bisphosphonates have been used as first-line agents after conventional synthetic DMARDs have failed, and favorable outcomes have been reported. 3 In this study, 3 patients were treated with intravenous bisphosphonates for up to a maximum of 3 years; all 3 progressed to a biologic agent because of inadequate disease control. At the time that bDMARDs were first used in any of the patients in this series, there were only anecdotal reports of bDMARD use in SAPHO; other reports have appeared since. 12,13 More than a decade later, there remain no randomized controlled trials. In our cohort, patients who were intolerant of or unresponsive to conventional synthetic DMARDs, or who had an inadequate response to bisphosphonates, were treated with bDMARDs. The duration of biologic therapy ranged from 2 to 14 years. The small numbers preclude assessment of individual TNF inhibitors; the patients in this cohort received etanercept, adalimumab, and certolizumab. A trend toward better disease responses to treatment was observed for TNF inhibitors.
Responses to TNF inhibitors have been reported previously in multiple studies; these outcomes are summarized together with ours in Table 2. Taken together, these studies and our study describe moderately good responses to TNF inhibitors in 55 of 74 recipients (74%). On the basis of the collective open-treatment data reported from multiple centers and encompassing several races, it can reasonably be considered that both bisphosphonates and TNF inhibitors are probably safe and efficacious for SAPHO. Therapy with TNF inhibitors appears superior on the basis of treatment-survival considerations, but without randomized head-to-head studies these agents cannot be properly compared. Furthermore, it must be acknowledged that without well-designed and adequately powered studies in large cohorts, neither TNF inhibitors nor bisphosphonates can be considered unequivocally proven, nor can their relative efficacies be determined. Spontaneous fluctuations in disease activity, imprecise measures of disease activity, and regression to the mean likely confound empirical assessment. There remains a need to consider further trials of antibiotic therapy. More powerful diagnostic tools to discover microbial infections, including polymerase chain reaction and perhaps shotgun metagenomics, may need to be applied to persons presenting with SAPHO, and, where appropriate, alternative antibiotic treatment strategies, possibly including combinations of agents and cycling/repetitive antibiotic regimens, may need to be examined. One of the important strengths of this study is the length of follow-up and the capacity to demonstrate relative disease stability and mostly good long-term disease and treatment outcomes over time, perhaps greatest in those receiving TNF inhibitor therapy. Furthermore, there is a suggestion that patients who are untreated or minimally treated may sustain more clinical and skeletal damage in the long term. It is important that this prognostic uncertainty be resolved, because with more evidence this consideration has the potential to inform decision-making in management. The study has several limitations. The number of participants is small, in keeping with the relative rarity of the condition and the single-center/region experience. There is no "gold standard" for diagnosis, so we must admit some potential diagnostic heterogeneity, despite accepted criteria having been satisfied. Follow-up, however, was long in comparison with most reports, which reduces the likelihood of diagnostic error. In only 1 patient was revision of the diagnosis to possible psoriatic spondyloarthritis under consideration after more than 20 years, and even here there was still uncertainty with respect to disease classification. Attempts to exclude infection were thorough. Although microbial isolates were uncommon in our series, the possibility still exists that a common organism, or more than one organism, may act to trigger or sustain SAPHO, or to provoke flares, at least in a subset of cases. Treatments were heterogeneous and empirical. All but 1 of the participants in this study were Caucasian; accordingly, our findings may not be generalizable to other racial groups. Nevertheless, the results are consistent with those reported in China and Europe.
Furthermore, there was no long-term structured evaluation of skeletal or joint tissues, so we are unable to determine the full extent of bone and joint damage over time or to correlate poor structural outcomes in joints and bone with disease characteristics at the outset and thereby predict prognosis. Use of tobacco and alcohol, infection history, exposure to ionizing radiation in probands and family members, family history (including a history of psoriasis, acne, or other pustular skin disease), and comorbidities were not consistently elicited. Challenges that will confront treating physicians in the future include refinement of the diagnostic criteria, selection and justification of treatments in the absence of clear guidelines, and development of strategies to identify those most at risk of preventable bone and joint damage, especially among seemingly stable or well-controlled patients. Tofacitinib has been reported to be effective in a patient with SAPHO who had recalcitrant, aggressive, unilateral wrist synovitis refractory to conventional synthetic DMARDs and etanercept. 16 This and other Janus kinase inhibitors represent yet another potentially valuable therapeutic option. Other agents with reasonable prospects but not yet critically evaluated for SAPHO include leflunomide, apremilast, IL-1 antagonists, 17 and the extended family of IL-17/23 inhibitors, although among our patients the 3 treated with IL-17/23 inhibitors (secukinumab, n=2; ixekizumab, n=1) proved refractory. The PGA scores were used to further evaluate outcomes; a trend toward more favorable responses was observed in recipients of biologic therapies, particularly TNF inhibitors. There is a need for the incorporation of patient-reported outcomes in future studies.

CONCLUSION
The SAPHO syndrome remains an exacting condition to diagnose and to manage. There is a need to revisit the diagnostic criteria, because 30 years have elapsed since the concept of SAPHO first emerged and, importantly, because there have been major advances in microbial science, diagnostic imaging, and therapeutics during that interval. Treatment remains empirical and intuitive rather than protocol or guideline directed; but with the advances in antirheumatic therapeutics during the past 2 decades especially, the number of options and the scope for achieving superior disease control and better long-term outcomes have improved considerably. More robust and patient-focused measures of disease and treatment outcome are still much needed. Because SAPHO is relatively rare, aggregated case series and expert consensus, rather than controlled trials, may be required for some time yet to guide contemporary and future therapy.
Research on the Impact of Economic Policy Uncertainty on Enterprises’ Green Innovation—Based on the Perspective of Corporate Investment and Financing Decisions

Improving enterprises' green innovation ability is beneficial for achieving a "win-win" of economic development and environmental protection. As the global economic situation has grown complex and volatile, economic policies have changed frequently. Will rising economic policy uncertainty affect enterprises' green innovation? Taking China's A-share-listed companies from 2008 to 2019 as the research sample, the Baker index, based on news media and network information, is used to measure national economic policy uncertainty, and an official-exchange index based on complex networks is used to measure economic policy uncertainty in prefecture-level cities. We find an inverted U-shaped relationship between economic policy uncertainty and firms' green innovation capability. Moreover, the national macroeconomic policy uncertainty index lies mostly on the left side of the inverted U, so it can promote enterprises' green innovation ability, whereas overly frequent changes in regional economic policies inhibit it. This paper further analyzes, from the perspective of investment and financing decisions, the moderating effects of the financialization of investment behavior and of financing constraints on the impact of economic policy uncertainty on enterprises' green innovation. The impact of economic policy uncertainty on green innovation is found to be more pronounced for firms with low financing constraints and low financialization.

Introduction
China's economy has achieved world-renowned success over the past 40 years, but the long-term extensive growth model has increased the burden on the ecological environment. Breaking the either/or trade-off between the economy and the environment is an important part of building an ecological civilization. To achieve a win-win of economic development and environmental protection, it is necessary to improve the green innovation ability of enterprises. Unlike direct participation in environmental governance and environmental protection investment, green innovation not only reduces enterprises' environmental pollution and improves environmental performance; more importantly, it is the key for enterprises to produce green differentiated products, stimulate new market demand, and effectively improve their green competitiveness. However, relying on the market alone makes it difficult to effectively promote enterprises' green innovation capabilities and to resolve the tension between environmental quality improvement and high-quality economic development. In the practice of a market economy, environmental protection and economic development often stand in contradiction, and economic growth alone cannot solve the problem of environmental degradation. Moreover, because of the negative externalities of environmental problems arising from the public-goods nature of environmental resources, the opportunism of microeconomic agents, and China's long-standing energy-intensive growth model, non-green production and its technology research and development enjoy a first-mover advantage in a free market economy.
Enterprises therefore lack sufficient motivation to independently carry out green production and green technology research and development. It is thus difficult for the market mechanism alone to stimulate enterprises' green innovation, and this market failure needs to be addressed through government intervention. To advance enterprises' green innovation ability and promote high-quality economic development, governments have formulated a large number of economic regulation policies. However, owing to the complex and volatile global economic situation and the slowdown in domestic economic growth, the Chinese government frequently adjusts macroeconomic policies, and economic policy uncertainty has been rising. Drawing on the Economic Policy Uncertainty (EPU) Index constructed by Baker [1], China's EPU Index can be seen to have been elevated in 2001-2003, 2008-2009, 2012-2013, 2016-2018, and 2019-2020 (as shown in Figure 1). The high index in 2001-2003 was caused by the reform of state-owned enterprises, the series of policies introduced by the government in response to the Southeast Asian financial crisis, and the economic downturn caused by SARS. When the world financial crisis broke out in 2008-2009, the Chinese government launched the "Four Trillion Plan" to avoid a severe economic recession. Thereafter, to cope with the superposition of the three phases of the "economic growth speed shift period, structural adjustment pain period, and early stimulus policy digestion period," the government implemented a series of policies such as "mass entrepreneurship, mass innovation" and "three reductions in exchange rate." The frequent introduction of various policies increased the uncertainty of economic policy. The rise of the policy uncertainty index in 2012-2013 was caused by the continuous adjustment of economic policies accompanying the change of government. In 2016-2017, to better cope with the downward pressure of the transition economy and the complicated political and economic situation at home and abroad, the government formulated a series of policies on imports and exports, reducing enterprise costs, and improving manufacturing development, which brought benefits to enterprises but also increased policy uncertainty. In 2019-2020, the COVID-19 outbreak, the disruption of global supply chains, and the stagnation of international trade led to a further series of policy responses; these policies alleviated the economic downturn in China while increasing economic policy uncertainty. Macroeconomic policies affect the risk preferences and the investment and financing decisions of enterprises. This is especially true for green innovation, which involves large investment amounts, long payback periods, and highly uncertain outcomes, making investment risk high; a mismatch between profit and risk leads to a lack of innovation motivation and a decline in innovation ability.
Effective macroeconomic policies can reduce the risk of green investment and increase its return, thus strengthening enterprises' willingness to innovate. However, as economic policy uncertainty increases, the external environmental risks faced by enterprises increase [2,3], affecting their expectations of future earnings and thus their investment choices. The rise of macroeconomic policy uncertainty therefore has an important impact on enterprises' green innovation behavior. Does the rising degree of macroeconomic policy uncertainty affect enterprises' green innovation behavior, and if so, how? Many scholars at home and abroad have discussed this issue and produced a large body of research. Some scholars believe that economic policy uncertainty inhibits enterprises' green innovation [2,4], whereas others believe that it can stimulate enterprises to innovate and improve their innovation ability [5-7]. No consensus has therefore been reached on how macroeconomic policy uncertainty affects enterprises' green innovation behavior, and this is the starting point of the present research. What is the relationship between economic policy uncertainty and enterprise green innovation? What moderating variables affect this relationship? How should the frequency of economic policy adjustment be set to promote the improvement of enterprises' green innovation level? These three questions are discussed, and a research framework is constructed to clarify the substantive impact of economic policy uncertainty on enterprises' green innovation. Unlike previous studies, which focus mainly on the impact of national-level macroeconomic policy changes on enterprise behavior, this paper constructs economic policy uncertainty indices at both the national and regional levels to measure the frequency of economic policy changes. The local governments of provinces and cities have a certain policy-making power and have formulated different local regulations; enterprises are affected not only by national economic policies but must also comply with the economic policies set by provincial and municipal governments. Regional economic policy changes therefore undoubtedly affect the investment behavior of local enterprises, and as economic conditions and industrial structures change, the uncertainty of economic policies also varies across regions. Does regional economic policy uncertainty affect enterprises' green innovation? Does its impact differ from that of national macroeconomic policy uncertainty, and which policy changes have the stronger effect on corporate green innovation? Addressing these questions enriches the research scope of economic policy uncertainty, and measuring economic policy uncertainty from multiple angles deepens the research, which is the main contribution of this paper.
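For readers unfamiliar with the news-based construction, the following schematic sketches an EPU-style index in the spirit of Baker et al. [1]: count articles that mention terms from all three categories (economy, uncertainty, policy), scale by each paper's article volume, standardize across papers, and normalize the average to a mean of 100. The term lists, data layout, and scaling details here are simplified assumptions rather than the exact procedure used in this paper.

```python
# Schematic news-based EPU index; term lists and file layout are
# simplified illustrative assumptions.
import pandas as pd

ECONOMY = {"economy", "economic"}
UNCERTAINTY = {"uncertain", "uncertainty"}
POLICY = {"policy", "regulation", "legislation", "central bank", "deficit"}

def is_epu_article(text: str) -> bool:
    t = text.lower()
    return (any(w in t for w in ECONOMY)
            and any(w in t for w in UNCERTAINTY)
            and any(w in t for w in POLICY))

# articles: columns = ["paper", "month", "text"] (hypothetical layout)
articles = pd.read_csv("articles.csv")
articles["epu_hit"] = articles["text"].map(is_epu_article)

# Share of qualifying articles per paper per month, standardized so each
# paper's series has unit standard deviation, as in Baker et al.
share = articles.groupby(["paper", "month"])["epu_hit"].mean()
z = share.groupby("paper").transform(lambda s: s / s.std())

# Average across papers and normalize the index to a mean of 100.
index = z.groupby("month").mean()
epu_index = index / index.mean() * 100
print(epu_index.head())
```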
Taking China's A-share listed companies from 2008 to 2019 as the research sample, the Baker index, built from news media and network information, is used to measure the uncertainty of national economic policy, and an official-exchange index built from a complex network is used to measure the uncertainty of economic policy in prefecture-level cities. An inverted U-shaped relationship is found between economic policy uncertainty and firms' green innovation capability. Economic policy uncertainty can improve enterprises' green innovation ability by promoting R&D investment, but it also brings fluctuations to enterprises' external environment, which negatively affects their economic environment and inhibits green innovation. Economic policy uncertainty therefore exerts both an "incentive effect" and an "inhibition effect" on enterprises' green innovation. These results enrich the research on the relationship between economic policy uncertainty and green innovation. Comparing the influence of national macroeconomic policy uncertainty and regional economic policy uncertainty on firms' green innovation shows that the national index mostly lies on the left side of the inverted U shape, where it promotes firms' green innovation ability, whereas overly frequent changes in regional economic policies inhibit it. At present, therefore, China's national macroeconomic policy plays a more evident role in promoting enterprises' green innovation, and local governments should improve the stability of their economic policies and reduce frequent changes. This paper measures the impact of economic policy uncertainty on green innovation at both the national and regional levels, making the research on economic policy uncertainty more comprehensive and detailed. Unlike previous studies that focus mainly on the macro perspective, this paper further analyzes, from the micro perspective of firms' investment and financing decisions, how firm behavior shapes the relationship between economic policy uncertainty and green innovation. The impact of economic policy uncertainty on green innovation is found to be more pronounced in enterprises with low financing constraints and low financialization. The paper thus offers a more comprehensive account of the impact of economic policy uncertainty on enterprises' green innovation and provides a theoretical basis and policy space for the government to calibrate the frequency of macroeconomic policy adjustment so as to stimulate enterprises' green innovation vitality and investment.
Literature Review
Economic policy uncertainty refers to the fact that economic agents cannot predict with certainty whether, when, and how the government will change economic policies [8]. Since the outbreak of the global financial crisis in 2008, the world economy has fluctuated repeatedly. To cope with a complex economic, political, and financial environment, governments have begun to introduce economic policies frequently, giving rise to economic policy uncertainty, and the uncertainty caused by frequent policy adjustment has attracted the attention of more and more scholars.
The Theoretical Basis of the Effect of Economic Policy Uncertainty on Enterprises' Green Innovation
Scholars typically study how macroeconomic uncertainty affects the macroeconomy and microenterprises on the basis of three theories: real options, risk compensation, and growth options.
Real Options Theory
Real options theory is the most common framework in discussions of uncertainty and economic growth. Bernanke argued that a firm's investment behavior can be regarded as a series of options: when investment is irreversible, the firm must weigh the costs and benefits of investing "now" against waiting for better opportunities in the future [9]. From the perspective of real options theory, when future uncertainty rises, it becomes more valuable for firms to postpone investment, because during the waiting period they may obtain more information about the future and avoid potentially large losses [10,11]. Economic policy uncertainty will therefore significantly reduce investment and output at both the macro and micro levels. At the macro level, uncertainty reduces investment and output: uncertainty shocks had a significant inhibitory effect on GDP growth and investment in the United States [12,13], and Baker et al., using their economic policy uncertainty index, found that economic policy uncertainty reduces output [1]. For microenterprises, economic policy uncertainty means rising external environmental risks [2,3] and increased bank credit risks [14], resulting in extremely cautious attitudes toward investment; firms may hedge external risks by cutting R&D expenditure [4] and reducing investment [8]. Bloom found, based on real options theory, that because R&D investment is large, long-cycle, and high-risk, firms become more cautious about R&D investment as policy uncertainty increases [15].
Growth Option Theory
Growth option theory is usually used to explain the formation of the Internet bubble in the United States from the end of the 20th century to the beginning of the 21st century [16]. Its core is the comparison of costs and benefits. According to this theory, the economic system is full of uncertainty, but this uncertainty promotes investment and thereby economic growth. For industries such as the Internet, the largest loss on a company's investment is its cost, but once the investment succeeds, the return is several times that cost. The temptation of such high profits increases speculative investment. Since it takes time for investment to be converted into production capacity, this investment can be regarded as a "call option" purchased by the enterprise. Bar-Ilan and Strange found empirical evidence for growth options [17]: for some industries, increased uncertainty greatly raises expected returns. Atanassov used U.S. gubernatorial elections to measure policy uncertainty and found that policy uncertainty has a positive impact on corporate innovation, with a particularly obvious promotion effect in industries that are politically sensitive and hard to innovate in [5]. Kraft et al. found that growth options are very important for explaining the investment behavior of innovation-driven companies [18].
They found that when uncertainty increases, this type of company increases R&D expenditure, enabling it to obtain high returns in the future. Meng and Shi studied the relationship between economic policy uncertainty and enterprise R&D investment within a DSGE model and found that firms' R&D investment is positively related to economic policy uncertainty, and that the promoting effect is more pronounced for firms with a higher risk preference [6]. Gu et al. distinguished the selection effect and the incentive effect of economic policy uncertainty on enterprise innovation and found that economic policy uncertainty affects listed companies' R&D investment and patent application volume, with effects that differ across ownership types and industries [7]. Rao et al. found that in times of high uncertainty, enterprises pay more attention to market factors, thereby improving their investment efficiency [19].
Risk Compensation Theory
In economics, investors require compensation for risk taking in the form of a risk premium. High uncertainty tends to raise the risk premium, which in turn raises the cost of financing. In recent years, a large body of theoretical literature has shown that rising uncertainty increases borrowing costs and intensifies firms' financing constraints, thereby affecting economic growth [20,21]. Ilut and Schneider proposed the "ambiguous business cycle" model: they considered a group of agents highly uncertain about the future and found that when uncertainty increases, these agents reduce investment and consumption, which in turn affects economic growth [22].
The Impact of Economic Policy Uncertainty on Corporate Green Innovation
Scholars hold opposing views on the impact of economic policy uncertainty on corporate green innovation. The first is the suppression view. Some scholars argue that rising economic policy uncertainty reduces enterprises' motivation and capability for green innovation [2,3,14]. Julio and Yook used leadership turnover around national elections as a proxy for economic policy uncertainty and found that it affected business operations and decision making, thereby inhibiting corporate R&D investment [23]. Bhattacharya et al. measured innovation activity by patent volume and periods of policy uncertainty by national election timing [4]; using data from 43 countries, they found that the number of patented inventions declined as policy uncertainty increased, that is, policy uncertainty has an inhibitory effect on innovation. Li, using the economic policy uncertainty index of Baker et al., concluded that rising economic policy uncertainty inhibits corporate R&D investment [24]. Tan and Zhang found empirically that economic policy uncertainty inhibits corporate R&D investment through two transmission mechanisms: real options and financial friction [25]. Other scholars, however, argue that improving innovation capability strengthens a firm's core competitiveness and alleviates the negative impact of economic policy uncertainty; on this view, economic policy uncertainty can encourage companies to innovate and enhance their innovation capabilities.
Atanassov used U.S. gubernatorial elections to measure policy uncertainty and found that it has a positive impact on corporate innovation, with a particularly obvious promotion effect in industries that are politically sensitive and hard to innovate in [5]. Meng and Shi conducted an empirical study based on data of Chinese listed companies from 2009 to 2015 [6]; the results showed that policy uncertainty positively affects corporate innovation and pushes enterprises to seek their own development. Gu et al. studied the incentive effect and selection effect of policy uncertainty based on data of Chinese listed companies and found that policy uncertainty positively affects firms' innovation input and innovation output, with the relationship shaped by factors such as industry characteristics and government subsidies [7]. Some authors hold that the relationship between economic policy uncertainty and enterprise innovation is jointly shaped by a "promotion effect" and an "inhibition effect". Liu and Huang examined panel data of listed companies in China's strategic emerging industries from 2013 to 2018 and found a U-shaped relationship between economic policy uncertainty and corporate innovation capability, with government innovation preferences exerting a negative moderating effect [26].
Literature Summary
Throughout the existing literature, scholars at home and abroad have conducted in-depth research on the factors influencing corporate innovation capability and on the effects of economic policy uncertainty, producing rich results. However, research on how economic policy uncertainty affects enterprises' green innovation capability started relatively late, and the following problems remain.
Inconsistent Research Conclusions
On the relationship between economic policy uncertainty and corporate green innovation, scholars have reached inconsistent conclusions. Those holding a positive view argue that economic policy uncertainty encourages enterprises to carry out green innovation and alleviates the negative impact of changes in the external environment. Those holding a negative view argue that rising economic policy uncertainty makes the macroeconomic situation unpredictable, delaying managers' R&D and innovation decisions and inhibiting enterprises' green innovation. Still others hold that, under the combined effect of promotion and inhibition, the relationship is U-shaped.
There are relatively few studies on the mechanism linking economic policy uncertainty and corporate green innovation. Few scholars have examined, from a micro perspective, the transmission paths through which economic policy uncertainty affects enterprises' green innovation. Enterprise innovation capability is the joint result of micro factors such as investment decision making and financing scale. Studying how increased uncertainty affects firms' investment choices and financing constraints, and through them their innovation ability, is therefore an important way to clarify the mechanism linking economic policy uncertainty and enterprises' green innovation.
Therefore, building on previous studies, this paper investigates in depth the impact of economic policy uncertainty on firms' green innovation capability, explores the underlying mechanism from the two aspects of corporate investment and financing, and enriches the research in this field.
The Impact of Economic Policy Uncertainty on Corporate Green Innovation
First, when economic policy uncertainty is within a reasonable range, it promotes the improvement of enterprise innovation capability. (1) Corporate innovation activities can be regarded as an option. Growth option theory holds that corporate investment focuses not only on the short-term benefits that investment brings but also on the long-term development of the company. Innovation is an important support for enhancing a firm's competitiveness and an important source of excess profits. When economic policy uncertainty rises, the uncertainty of the market environment facing companies increases, corporate value fluctuates more frequently, and corporate value is likely to decline. In this situation, many companies move first to make innovative investments, seize market share, and enhance competitiveness. (2) From the perspective of corporate funds, when policy uncertainty rises, the external operating environment becomes more uncertain. Driven by a precautionary saving motive, companies tend to reduce their holdings of financial assets and hold more cash. At the same time, to enhance competitiveness, companies often devote part of these funds to innovative R&D, so their innovation capability also increases. Rising economic policy uncertainty thus raises the market risks firms face, and firms often respond by increasing innovation investment, enhancing their innovation capability, quickly occupying the market, and strengthening their competitiveness. Within this range, uncertainty in economic policy enhances corporate innovation capability. However, when economic policy uncertainty continues to rise beyond a certain level, it restrains corporate innovation. (1) Based on real options theory, from the perspective of the option to wait, when economic policy uncertainty keeps increasing, the uncertainty of the external environment rises sharply and the waiting value of the option increases. As time passes, more effective information becomes available, and companies decide to delay investment. Because of investment irreversibility and principal-agent problems, when economic policy uncertainty is high, the firm's external environment is opaque and managers find it difficult to predict the firm's future development. To maximize their own interests, managers will reduce innovative investment and direct funds to the projects most beneficial to themselves. (2) Sufficient funds are an important guarantee for innovative activities. When economic policy uncertainty keeps increasing, financial market frictions and the risks borne by the firm's external capital providers increase, which in turn raises corporate financing costs.
When economic policy is uncertain, banks cannot accurately judge the risk and return of credit in the market, so they adopt more conservative credit policies. These unfavorable factors tighten enterprises' financing constraints and cause them to reduce innovation investment, restraining innovation. (3) As economic policy uncertainty increases, market risks rise, and corporate innovation activities themselves carry risk; the superposition of these risks brings further adverse effects, and some companies choose to reduce innovation investment. Therefore, excessively high economic policy uncertainty will also inhibit corporate green innovation. To sum up, economic policy uncertainty not only stimulates corporate R&D investment and thereby promotes innovation capability, but also inhibits corporate green innovation through the risks that policy uncertainty brings. Under the combined effect of the suppression mechanism and the promotion mechanism, the relationship between economic policy uncertainty and enterprises' green innovation may be nonlinear, as shown in Figure 2. When the level of economic policy uncertainty is low, the fluctuations of the business environment and the financing constraints faced by enterprises are relatively small, and the incentive effect of policy uncertainty offsets its adverse effects, so economic policy uncertainty promotes corporate innovation. When economic policy uncertainty rises further, the market risks and financing constraints faced by enterprises increase further, and enterprise innovation is inhibited. There is thus an inverted U-shaped relationship between economic policy uncertainty and enterprise innovation ability: promotion first, then inhibition.
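This hypothesis can be stated compactly as a quadratic specification (a sketch of the reduced form only; the full model with controls appears in the Model Design section below):

$$GI = \beta_0 + \beta_1\,EPU + \beta_2\,EPU^2 + \varepsilon, \qquad \beta_1 > 0,\ \beta_2 < 0,$$

with marginal effect $\partial GI/\partial EPU = \beta_1 + 2\beta_2\,EPU$ and turning point $EPU^{*} = -\beta_1/(2\beta_2)$: uncertainty promotes green innovation for $EPU < EPU^{*}$ and inhibits it beyond that point.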
Sample Selection and Data Sources
This paper selects the financial data of China's A-share listed companies from 2008 to 2019 as the research sample to study the impact of economic policy uncertainty on corporate green innovation. The original data are processed as follows: (1) Exclusion of enterprises under special treatment (ST and *ST) during the observation period. Such enterprises have sustained annual losses and face a risk of delisting, and their abnormal financial data would distort the empirical results. (2) Exclusion of companies designated PT during the observation period. Such companies have reported net losses for three consecutive years and their stock is suspended; their abnormal financial data would likewise distort the empirical results. (3) Exclusion of companies in the financial and insurance industries. Their profit model differs from that of other industries, their innovation capability is reflected differently from ordinary companies, and their financial data would affect the accuracy of the empirical results. (4) Exclusion of companies with missing important financial data, which would also affect the accuracy of the results. In the end, 2356 firms and 12,141 firm-year observations were obtained. Corporate financial data were drawn mainly from the Wind and CSMAR databases and supplemented by websites such as Sina Finance and Juchao Information. To reduce the influence of outliers, all variables are winsorized at the 1% level, and to alleviate endogeneity, the explanatory variables and all control variables are lagged by one period.
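A minimal pandas sketch of this sample construction is shown below. The file name and column names ("status", "industry", "stock_code", and so on) and the winsorization helper are hypothetical; only the filtering rules, the 1% winsorization, and the one-period lag follow the steps described above.

```python
import pandas as pd

def winsorize(s: pd.Series, p: float = 0.01) -> pd.Series:
    """Clip a series at its p-th and (1 - p)-th percentiles (two-sided winsorization)."""
    return s.clip(lower=s.quantile(p), upper=s.quantile(1 - p))

# firm-year panel; file and column names are placeholders
df = pd.read_csv("panel.csv")
df = df[~df["status"].isin(["ST", "*ST", "PT"])]       # (1)-(2) drop ST/*ST/PT firms
df = df[df["industry"] != "Finance"]                   # (3) drop financial/insurance firms
df = df.dropna(subset=["GI", "EPU1", "Size", "ROE",    # (4) drop missing key data
                       "LEV", "Tang", "Cash", "OC"])

num_cols = ["GI", "Size", "ROE", "LEV", "Tang", "Cash", "OC"]
df[num_cols] = df[num_cols].apply(winsorize)           # 1% winsorization

# lag explanatory and control variables by one period within each firm
df = df.sort_values(["stock_code", "year"])
lag_cols = ["EPU1", "Size", "ROE", "LEV", "Tang", "Cash", "OC"]
df[lag_cols] = df.groupby("stock_code")[lag_cols].shift(1)
```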
Explained Variables
Corporate green innovation (GI) is measured as the logarithm of one plus the number of corporate green patent applications. Since firms' R&D activities are highly uncertain, investment does not necessarily translate into output; compared with R&D investment, innovation output reflects a firm's innovation ability more directly. Green patent indicators effectively strip out the impact on enterprise innovation of factors other than environmental regulatory policies, such as innovation subsidies [27]. Moreover, because patent grants are subject to time lags and to human factors such as examiner preferences, the number of patent applications reflects the innovation level of enterprises more faithfully than the number of patents granted. Therefore, this paper takes the logarithm of one plus the number of green patent applications to reflect the level of enterprises' green innovation.
Explanatory Variables
Economic policy uncertainty (EPU) is currently measured in three main ways. The first is based on news media and network information: influential newspapers are selected and, for each paper and month, articles mentioning keywords related to the economy, policy, and uncertainty are counted; these word-frequency statistics, together with indicators such as expected changes to tax provisions and the dispersion of macroeconomic forecasts, are standardized and combined into an economic policy uncertainty index [1,28,29]. The second measures uncertainty by whether top government officials change [2,3]. The third, considering that macroeconomic and financial market uncertainty is likely to have wide-ranging influence, treats the volatility of economic and financial variables, changes in stock market returns, or the cross-sectional dispersion of profit levels as proxy variables that indirectly measure economic policy uncertainty [30,31]. Volatility or cross-sectional dispersion is easy to compute, but it also reflects non-macroeconomic fundamentals such as risk aversion, leverage effects, and inter-firm heterogeneity; even when economic policies do not change, these factors can still move the volatility or dispersion of economic and financial variables. In this sense, conditional volatility or dispersion is not equal to uncertainty. This paper therefore adopts two methods, news media and network information on the one hand and official exchanges on the other, to measure economic policy uncertainty at the national and local levels, respectively: the index constructed from news media and network information measures national economic policy uncertainty, and the index constructed from local government officials' exchanges measures economic policy uncertainty in prefecture-level cities.
1. Measuring Economic Policy Uncertainty by News Media and Network Information
The economic policy uncertainty index constructed by Baker et al. is adopted to measure national economic policy uncertainty [1]. The index, jointly published by Stanford University and the University of Chicago, covers a number of countries around the world; it measures each country's economic policy uncertainty from keyword statistics in representative media and is published on the official Economic Policy Uncertainty website. The index has good continuity and time variability and is widely recognized by scholars at home and abroad. The Chinese index is based on the frequency of certain keywords in articles of the Hong Kong-based newspaper South China Morning Post. Since the index is published monthly, this paper takes the arithmetic average of the monthly data to obtain annual data and divides the annual figure by 100 to keep the order of magnitude consistent, yielding the economic policy uncertainty index EPU1.
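As a minimal sketch, the monthly-to-annual conversion described above takes only a few lines (file and column names are placeholders; the monthly series itself is downloadable from the Economic Policy Uncertainty website):

```python
import pandas as pd

epu = pd.read_csv("china_epu_monthly.csv", parse_dates=["month"])  # placeholder file
epu["year"] = epu["month"].dt.year

# arithmetic mean of the monthly values per year, divided by 100
epu1 = epu.groupby("year")["epu_index"].mean().div(100).rename("EPU1")
print(epu1)
```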
2. Measuring Economic Policy Uncertainty by the Exchange of Local Officials
As disseminators and implementers of national policies, local officials are subtly influenced by their social networks in their behaviors and decisions, which in turn shape the formulation of local economic policies. By the definition of economic policy uncertainty, economic agents cannot predict whether, when, and how the government will change economic policies [8]. Not only do frequent changes of local officials raise economic policy uncertainty; the transfer relationships among local officials also affect it, because officials' cognition and management ideas persist across posts. If an official is transferred from place A to place B, economic agents in place B can judge how the government will change economic policies from the official's management approach and governance ideas in place A, and thereby assess policy uncertainty. This paper therefore builds a network of local officials' exchanges, based on complex network analysis, to measure local economic policy uncertainty. In the official exchange network, a node is a place of office; an edge is the transfer relationship between two places, and its direction is the direction of the official's transfer. For example, if an official is transferred from A to B, the edge points from A to B. The weight of the edge depends on the number of official transfers between the two places and the level of urban development. Because officials play many social roles beyond local government, such as industry association leadership and university professorships, the edges of the network are defined here as transfers between government posts only, excluding other forms of mobility (such as moves between universities). Within a province, the official exchange network tends to form a clique, while exchanges between adjacent provinces may not create an edge. Nevertheless, there are always transfers of officials from one province to another, creating connections between posts that do not share a border; these form weak ties (as shown in Figure 3). Such ties carry substantial information and control advantages across the whole network, which gives the official exchange network its rationale. From Figure 3, the entire network is divided into three community structures; the edges AB and AC are the key links of the whole network and the important bridges between the three communities, as A, B, and C are connected by weak ties.
The network can make the economic policies and experience of one community's internal posts available to posts in other communities that lack direct access. Place A is the critical node connecting the entire network: without it, the network would fall apart. Its weak ties are therefore strong in effect; transfers of officials through A connect the whole network, and the many channels for obtaining information affect official transfers and information transmission across the network. In addition, place E lies at the center of its community structure, with a high clustering coefficient and relatively strong ties; officials passing through it are transferred more frequently, and economic policies there change more often. On the basis of local officials' resumes, this paper compiles the transfer relationships among the posts held by Chinese officials in the course of their duties. First, from the website of the Central People's Government of the People's Republic of China (home page-Overview of China-Personnel change query), a list of the names of the party secretaries of 343 cities at the prefecture level and above from 2008 to 2019 was obtained, and the officials' resumes were collected from the local government officials database of People's Daily Online as basic information. Second, to ensure the authenticity of the exchange relations between officials and their places of office, all officials with the same name were screened to determine whether they are the same person, and each official was given a unique ID code. Third, based on each official's resume, the succession list of each city, and transfers of officials at or above the positions of municipal party secretary and vice mayor, a "place of employment-place of employment" matrix was constructed to capture the coupling relationships created by official flows between different posts: if an official is transferred from post i to post j, element (i, j) of the matrix is 1; otherwise it is 0, and the diagonal elements are 0. Centrality analysis in complex networks quantifies the importance of nodes through measurement indices. The quantitative tools commonly used in complex network analysis are degree centrality, betweenness centrality, closeness centrality, and eigenvector centrality [32]. With these four indicators, the closeness of the entire network and the importance of each prefecture in the network can be calculated: the higher the importance, the more frequent the exchanges of that prefecture-level city's officials, and the higher the level of economic policy uncertainty. The economic meaning and calculation of the four indicators are as follows.
Degree Centrality Index
Degree centrality is the most direct metric of node centrality in network analysis [33]. The higher a node's degree, the higher its degree centrality and the more important the node is in the network.
It also means frequent changes of officials in that city. The degree centrality index is calculated as

$$Degree_i = \frac{\sum_{j \neq i} X_{ji}}{g-1} \quad (1)$$

where i is a given place of office and j any other place of office; $X_{ji}$ equals 1 if there is at least one exchange of officials between i and j, and 0 otherwise; g is the total number of places of office, and dividing by (g − 1) eliminates scale differences.
Betweenness Centrality Index
Betweenness centrality, also known as intermediary centrality, counts how often the current node serves as the shortest-path bridge between two other nodes [34]. Here the number of shortest paths passing through a node is divided by the number of all such paths to standardize the data. The more often a node acts as a "bridge", the greater its betweenness centrality; it also means that the main leaders of that city are frequently transferred to, or from, other cities. The betweenness centrality index is calculated as

$$Betweenness_i = \frac{2\sum_{j<k} g_{jk}(n_i)/g_{jk}}{(g-1)(g-2)} \quad (2)$$

where $g_{jk}$ is the number of shortest paths connecting posts j and k, and $g_{jk}(n_i)$ is the number of those paths that pass through post i.
Closeness Centrality Index
Closeness centrality reflects how close a node is to the other nodes in the network; it is the reciprocal of the sum of the shortest distances between the current node and all other nodes [35]. The closer a node is to all other nodes, the greater its closeness centrality and the closer it is to the core of the network; it also means that the city's officials exchange intensively. The closeness centrality index is calculated as

$$Closeness_i = \frac{g-1}{\sum_{j \neq i} d(i,j)} \quad (3)$$

where d(i, j) is the distance from post i to post j in the network. If a given place is not linked to all other places, closeness cannot be computed directly from the incomplete relation; in that case it is computed over the places reachable from i and multiplied by their proportion in the total number of places in the network.
Eigenvector Centrality Index
Eigenvector centrality is the normalized eigenvector corresponding to the maximum eigenvalue of the adjacency matrix [35]. The greater a node's eigenvector centrality, the more important its neighbors; it measures the importance of a node through the importance of its neighbors. The eigenvector centrality index $E_i$ satisfies

$$\lambda E_i = \sum_{j} b_{ij} E_j \quad (4)$$

where $b_{ij}$ is the adjacency matrix, equal to 1 if there is at least one official exchange between posts i and j and 0 otherwise; λ is the maximum eigenvalue of the matrix B, and $E_j$ is the centrality value of post j.
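The four indices can be computed directly with a network library. The sketch below uses networkx on a hypothetical toy transfer list echoing the A/B/C structure of Figure 3; in the paper the edges come from the 2008-2019 resumes of prefecture-level party secretaries. Note that formulas (1)-(4) are defined on the binary "at least one exchange" relation, so the centralities are computed on an undirected, unweighted graph.

```python
import networkx as nx

# hypothetical toy transfer list; real edges come from officials' resumes
transfers = [("A", "B"), ("A", "C"), ("B", "D"), ("B", "F"), ("C", "E"), ("E", "G")]

G = nx.Graph()                              # binary, undirected "at least one exchange"
G.add_edges_from(transfers)

degree = nx.degree_centrality(G)            # (1): node degree / (g - 1)
betweenness = nx.betweenness_centrality(G)  # (2): normalized shortest-path bridging
closeness = nx.closeness_centrality(G)      # (3): networkx's default applies the
                                            #      component correction described above
eigenvector = nx.eigenvector_centrality(G)  # (4): leading eigenvector of the adjacency

for city in sorted(G.nodes):
    print(f"{city}: deg={degree[city]:.2f} btw={betweenness[city]:.2f} "
          f"clo={closeness[city]:.2f} eig={eigenvector[city]:.2f}")
```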
Control Variables
Based on the existing literature, the following indicators affecting enterprises' green innovation are selected as control variables: firm size (Size); the profitability indicator return on net assets (ROE); the debt structure indicator asset-liability ratio (LEV); the asset structure indicator tangible assets ratio (Tang); the cash flow status indicator cash flow ratio (Cash); and the equity structure indicator ownership concentration (OC). Both individual effects and time effects are controlled. Variable definitions and calculation methods are listed in Table A1 in Appendix A.
Model Design
To test the relationship between economic policy uncertainty and enterprises' green innovation, this paper first uses the Hausman test to verify that fixed effects regression should be adopted; second, a likelihood-ratio test confirms the existence of a time effect; finally, since the panel is a large-N, short-T sample, the White test confirms heteroskedasticity in the data, so robust standard errors are used in the regression. Because enterprises' innovation activities are long-term in nature and to alleviate endogeneity, all explanatory and control variables are lagged one period relative to the explained variable. The fixed effects regression model is as follows:

$$GI_{i,t+1} = \beta_0 + \beta_1 EPU_t + \beta_2 EPU_t^2 + \gamma\, Control_{i,t} + year + \varepsilon_{i,t} \quad (5)$$

where i indexes a listed company and t the corresponding year; $GI_{i,t+1}$ is the innovation capability of company i in year t + 1; $EPU_t$ is the degree of economic policy uncertainty in year t; Control is a set of control variables; year is the year fixed effect; and ε is the random disturbance term of the fixed-effects model. If $\beta_1$ is significantly positive and $\beta_2$ significantly negative, there is an inverted U-shaped relationship between economic policy uncertainty and firms' green innovation; if $\beta_1$ is significantly negative and $\beta_2$ significantly positive, the relationship is U-shaped.
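A sketch of estimating model (5) is shown below, continuing with the cleaned panel df from the earlier sketch and using the linearmodels package (column names remain placeholders). Because the regressors were already lagged one period, regressing current GI on them is equivalent to model (5)'s GI in year t + 1 on period-t regressors; firm and year fixed effects with heteroskedasticity-robust standard errors follow the test sequence above.

```python
from linearmodels.panel import PanelOLS

df = df.set_index(["stock_code", "year"])
df["EPU1_sq"] = df["EPU1"] ** 2        # quadratic term of the (lagged) EPU index

exog = df[["EPU1", "EPU1_sq", "Size", "ROE", "LEV", "Tang", "Cash", "OC"]].dropna()
dep = df.loc[exog.index, "GI"]

mod = PanelOLS(dep, exog, entity_effects=True, time_effects=True)  # firm + year FE
res = mod.fit(cov_type="robust")       # heteroskedasticity-robust standard errors
print(res.summary)
```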
Table 1 reports the descriptive statistics of the main variables. The mean of enterprises' green innovation capability (GI) is 2.80, the median 2.83, the maximum 6.87, and the minimum 0. The distribution is uneven, with a large gap between maximum and minimum, indicating large differences in green innovation capability and effect among enterprises. The standard deviation of the economic policy uncertainty index EPU1 is 1.73, its minimum 0.99, and its maximum 7.92; the standard deviation of EPU2 is 0.867, its minimum 0.006, and its maximum 1.956, indicating that economic policy fluctuates considerably over the sample. Table 1 also reports descriptive statistics for firm size (Size), return on equity (ROE), asset-liability ratio (LEV), tangible asset ratio (Tang), cash flow ratio (Cash), and ownership concentration (OC). A Pearson correlation test and a multicollinearity test were also conducted; firms' green innovation (GI) and economic policy uncertainty (EPU) are significantly positively correlated, while whether the relationship is simply linear or inverted U-shaped remains to be tested. The maximum variance inflation factor (VIF) is 3.03, far below 10, indicating no serious multicollinearity in the model. Table 2 reports the regression results of the impact of economic policy uncertainty (EPU) on enterprises' green innovation (GI) after controlling for year fixed effects. Column (1) reports the result when economic policy uncertainty is measured by the Baker index (EPU1); Column (2) reports the result when it is measured by the local officials' exchange network constructed with the complex network model (EPU2). In both tests, the coefficient on the linear EPU term is significantly positive at the 1% level and the coefficient on the quadratic term significantly negative at the 1% level, indicating an inverted U-shaped relationship between economic policy uncertainty and enterprises' green innovation: in the initial stage, rising economic policy uncertainty promotes green innovation, but high uncertainty inhibits it. A U-test is further conducted to verify the inverted U-shaped relationship and to determine the critical value of the impact of economic policy uncertainty on enterprises' green innovation. The p value is below 0.05, rejecting the null hypothesis and confirming the inverted U shape. The critical value of the national index EPU1 is 6.413; with a maximum of 7.919 and a minimum of 0.989, the critical value 6.413 lies within the range 0.989-7.919. This indicates that China's economic policy uncertainty from 2008 to 2019 is distributed on both sides of the critical value: in some years, economic policies changed too frequently and the resulting high uncertainty inhibited enterprises' green innovation, so the critical value obtained from the U-test has practical significance. When EPU1 is below 6.413, rising economic policy uncertainty promotes green innovation; above 6.413, it inhibits green innovation; at 6.413 the predicted level of green innovation peaks. The mean of EPU1 is 2.730, far below the inflection point of 6.413, indicating that in most years national economic policy uncertainty lies on the left side of the inverted U-shaped curve, where it can improve enterprises' green innovation ability. At the regional level, the inflection point of EPU2 is 0.071, with a maximum of 1.956 and a minimum of 0.006, so the inflection point 0.071 lies within the range 0.006-1.956. The mean of EPU2 is 0.867, above the inflection point 0.071, indicating that in most regions economic policy uncertainty lies on the right side of the inverted U-shaped curve: regional economic policies change too frequently, inhibiting the improvement of enterprises' green innovation ability. At present, therefore, China's national macroeconomic policy plays a more evident role in promoting enterprises' green innovation. Local governments should improve the stability of economic policies and reduce frequent changes to them.
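Continuing the sketch, the critical value reported by the U-test above is simply the vertex of the fitted quadratic; a full Lind-Mehlum U-test additionally checks that the slope is significantly positive at the lower bound of the data and significantly negative at the upper bound. The arithmetic looks like this:

```python
b1, b2 = res.params["EPU1"], res.params["EPU1_sq"]
turning_point = -b1 / (2 * b2)            # vertex of the inverted U

epu_min, epu_max = df["EPU1"].min(), df["EPU1"].max()
slope_low = b1 + 2 * b2 * epu_min         # should be > 0 for an inverted U
slope_high = b1 + 2 * b2 * epu_max        # should be < 0 for an inverted U
print(f"turning point {turning_point:.3f}, "
      f"inside [{epu_min:.3f}, {epu_max:.3f}]: {epu_min < turning_point < epu_max}")
```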
Robustness Test
To ensure the robustness of the empirical results, the following robustness analyses are conducted.
Re-Calculation of the Economic Policy Uncertainty Index
In the main tests, the economic policy uncertainty index EPU1 is converted into annual data as the arithmetic mean of the monthly data. Following Gu [7], this paper instead converts the monthly data into annual data using the geometric mean and repeats the empirical analysis. The regression results, shown in Table 3, indicate that the linear EPU term is significantly positive at the 1% level and the quadratic term significantly negative at the 1% level, again confirming the inverted U-shaped relationship between economic policy uncertainty and enterprises' green innovation.
Re-Measurement of the Enterprise Innovation Capability Index
The explained variable, enterprises' green innovation ability, was measured as the logarithm of listed companies' green patent applications. To avoid instability of the empirical results caused by missing samples, this paper instead takes current innovation input divided by total assets as the proxy for enterprises' green innovation and repeats the empirical analysis. The regression results, shown in Table 4, are consistent with the hypothesis of this paper.
The Moderating Mechanisms in the Relationship between Economic Policy Uncertainty and Enterprise Green Innovation
This paper further analyzes the moderating mechanisms in the relationship between economic policy uncertainty and enterprise green innovation from the perspective of firm behavior. First, from the perspective of financing behavior: according to financing constraint theory, innovation is proprietary, and there is often serious information asymmetry between the two sides of an investment, so corporate innovation frequently faces serious financing constraints; the impact of economic policy uncertainty on green innovation will therefore differ across enterprises with different financing constraints. Second, from the perspective of investment behavior: given limited enterprise resources, changes in investment behavior preferences also affect investment in green innovation and hence the level of green innovation; the impact of economic policy uncertainty on green innovation therefore also differs across firms with different investment preferences.
Innovation requires significant and stable financial support [36]. Internal financing alone can hardly meet the funding needs of innovation, and external financing has become an important source of innovation funding for enterprises [37]. The availability of external financing is therefore an essential constraint on enterprises' innovative R&D activities. Three factors make green innovation financing costly and external funds hard to obtain.
First, innovation activities involve high capital requirements, long payback periods, uncertainty, and high innovation risk, making banks and credit investors reluctant to fund them [38]. Second, to prevent competitors from learning their core secrets, companies are reluctant to disclose detailed information about their innovation activities, creating serious information asymmetry between companies and external investors [39,40]. Finally, innovation outputs are mostly intangible assets that are not easily collateralized, so R&D enterprises cannot easily obtain bank loans [41]. As a result, corporations face a serious funding gap in innovation [38], high external financing costs [39], and pronounced financing constraints [42]. Unstable funding sources easily interrupt firms' innovation activities and constrain their independent R&D [43]. Financing constraints are therefore an important variable affecting enterprises' green innovation. As economic policy uncertainty increases, market fluctuations and market risks grow and enterprises face a more uncertain operating environment. The investment risks of external capital providers such as capital markets and venture investors rise, so external financing costs increase [44]. At the same time, rising economic policy uncertainty aggravates banks' credit risks, and banks and credit departments adopt relatively conservative credit policies [45]; they scrutinize enterprises' loan qualifications and solvency more strictly, making bank loans harder to obtain and reducing their number. Rising economic policy uncertainty therefore increases financing difficulties for enterprises. In this situation, enterprises with low financing constraints and abundant capital can more easily cushion the shortage of green innovation funds caused by economic policy fluctuations, whereas enterprises with tighter financing constraints and capital shortages become more conservative in the face of the increased external uncertainty caused by policy fluctuations [46], slowing external expansion and investment in green innovation. Differences in financing constraints are therefore an important moderating factor in the relationship between economic policy uncertainty and enterprises' green innovation.
The Impact of Corporate Investment Behavior Choices
Capital is profit seeking. To obtain high returns from investment in real estate and financial assets, enterprises devote substantial resources to them; this economic phenomenon, which reflects changes in enterprises' investment behavior, is called the financialization of enterprises. With limited resources, as the degree of financialization increases, financialization "crowds out" real investment [47]. Excessive holdings of financial assets lead firms to drift from their primary business and focus too much on the short-term gains of financial assets.
With limited enterprise resources, investment in financial assets reduces investment in innovation and weakens the foundation of manufacturing development [48], gradually diverting real enterprises from their main business and producing the phenomenon of the "hollowing out of manufacturing" [49]. Enterprises then lack sufficient funds for equipment renovation and product R&D, which weakens their innovation capacity [50,51]. Financialization is therefore also an important variable affecting enterprises' green innovation. As economic policy uncertainty increases, enterprises' future income, costs, and cash flow become highly uncertain, making fundraising more difficult [45] and exacerbating the scarcity of enterprise resources. Against the background of a possible liquidity shortage, enterprises with more financialized investment behavior devote more resources to investments such as real estate and financial assets, crowding out resources for green innovation to a greater degree; this weakens the promoting effect of economic policy uncertainty on green innovation and strengthens its inhibiting effect. By contrast, enterprises with less financialized investment behavior focus more on their main business [49] and devote fewer resources to real estate and financial assets, occupying fewer resources relevant to green innovation. As economic policy uncertainty rises, the inhibiting effect of resource shortage on their green innovation is weaker, and the negative impact of a broken capital chain on production and operation is smaller [52]. Differences in the financialization of investment behavior are therefore also an important moderating factor in the relationship between economic policy uncertainty and firms' green innovation. The moderating mechanisms affecting the relationship between economic policy uncertainty and enterprises' green innovation are shown in Figure 4.
The main measures of financing constraints are the KZ index [53], the WW index [54], and the SA index [55]. Since the KZ and WW indices contain endogenous financial variables of firms, they generate measurement bias; this paper therefore uses the SA index as the proxy for financing constraints. The SA index, proposed by Hadlock and Pierce, consists of two indicators, firm size and firm age [55], and is calculated as

$$SA = -0.737 \times Size + 0.043 \times Size^2 - 0.040 \times Age \quad (6)$$

The SA index is usually negative, and its absolute value is generally used to measure the degree of corporate financing constraints: the larger the absolute value of SA, the more serious the financing constraints. Drawing on the studies of Song et al.
and Xiao [56,57], this paper selects the sum of trading financial assets, available-for-sale financial assets, derivative financial assets, held-to-maturity investments, investment properties, and long-term equity investments on the balance sheet as corporate financial assets. The degree of financialization (FIN) of an enterprise is measured by the ratio of its financial assets to its total assets at the end of the period; the larger the ratio, the higher the degree of financialization.
Model Construction
To test how the financialization of firm investment behavior and financing constraints shape the effect of economic policy uncertainty on firms' green innovation, this paper divides the sample into groups. By observing the shift of the inflection point and the flattening or steepening of the influence curve, one can judge how the impact of economic policy uncertainty on green innovation changes across firms that differ in financialization and financing constraints [58]. The yearly mean of the financing constraint index SA is calculated; samples with SA above the current year's mean form the High FC group and those below it the Low FC group. Using the fixed effects regression model (5) constructed in Section 4.3, the two groups are tested empirically to compare the influence curves of economic policy uncertainty on enterprises' green innovation and to judge whether financing constraints moderate the relationship. Similarly, the yearly mean of the FIN index is computed; samples with FIN above the current year's mean form the High FIN group and those below it the Low FIN group.
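A sketch of these variable constructions and the group split, continuing with the panel df from the earlier sketches (column names such as "Age" and the balance-sheet items are placeholders):

```python
# SA index (Hadlock-Pierce); Size = ln(total assets), Age = listing age in years
df["SA"] = -0.737 * df["Size"] + 0.043 * df["Size"] ** 2 - 0.040 * df["Age"]
df["SA_abs"] = df["SA"].abs()          # larger |SA| = tighter financing constraints

# financialization: financial assets / total assets at period end
fin_items = ["trading_fin_assets", "afs_fin_assets", "derivative_assets",
             "htm_investments", "investment_property", "lt_equity_investments"]
df["FIN"] = df[fin_items].sum(axis=1) / df["total_assets"]

# split each index by its yearly cross-sectional mean ("year" may be a column
# or an index level; groupby accepts either)
for var, label in [("SA_abs", "FC"), ("FIN", "FIN")]:
    year_mean = df.groupby("year")[var].transform("mean")
    df["high_" + label] = (df[var] > year_mean).astype(int)   # 1 = High group
```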
Using the fixed effect regression model (5) constructed in Section 4.3, the two groups of samples are tested empirically to compare the change in the influence curve of economic policy uncertainty on enterprises' green innovation and to judge whether financing constraints and financialization affect this relationship.

Table 5 reports the results of the tests of the moderating effects of financing constraints and financialization on the relationship between economic policy uncertainty and enterprise green innovation, controlling for year fixed effects. Column (1) reports the regression results for the impact of economic policy uncertainty on green innovation in the sample group with high financing constraints; Column (2) reports the results for the group with low financing constraints. In both models, the regression coefficients of economic policy uncertainty EPU1 are significantly positive at the 1% level, and the coefficients of the quadratic term of EPU1 are significantly negative at the 1% level. The results indicate an inverted U-shaped relationship between economic policy uncertainty and green innovation in enterprises with both high and low financing constraints. The regression models of the influence of economic policy uncertainty on green innovation were obtained for the two groups of samples: Formula (7) is the regression model for enterprises with high financing constraints, and Formula (8) is the model for enterprises with low financing constraints.

Analysis of Empirical Test Results

From the regression models, the curves of the impact of economic policy uncertainty on the green innovation of enterprises with high and low financing constraints can be obtained, as shown in Figure 5. The left side of the inverted U-shaped curve is steeper for the low financing constraint firms than for the high financing constraint firms, indicating that economic policy uncertainty has a more pronounced promoting effect on green innovation in enterprises with low financing constraints [58]. Therefore, financing constraints effectively moderate the impact of economic policy uncertainty on green innovation.
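A minimal sketch of the kind of quadratic fixed-effect regression behind Table 5, run on synthetic data: an inverted U is indicated by a positive coefficient on EPU1 and a negative coefficient on its square, and the turning point lies at −b1/(2·b2). The variable names and data are illustrative, and the firm-level controls of model (5) are omitted for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic firm-year panel built to display an inverted U (illustration only).
rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({"year": rng.integers(2010, 2020, n),
                   "EPU1": rng.uniform(0.0, 2.0, n)})
df["GI"] = 1.2 * df["EPU1"] - 0.4 * df["EPU1"] ** 2 + rng.normal(0.0, 0.3, n)
df["EPU1_sq"] = df["EPU1"] ** 2

# Year fixed effects enter as dummies via C(year).
fit = smf.ols("GI ~ EPU1 + EPU1_sq + C(year)", data=df).fit()
b1, b2 = fit.params["EPU1"], fit.params["EPU1_sq"]

# Inverted U requires b1 > 0 and b2 < 0; the turning point is -b1 / (2 * b2).
print(f"b1={b1:.3f}, b2={b2:.3f}, turning point EPU1* = {-b1 / (2 * b2):.3f}")
```

Fitting this model separately to the High and Low groups and comparing the slopes to the left of each turning point is one way to read off the "steeper left side" reported for the low-constraint and low-financialization samples.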
Column (3) in Table 5 reports the regression results for the impact of economic policy uncertainty on the green innovation of enterprises in the high financialization group; Column (4) reports the results for the low financialization group. In both models, the regression coefficients of economic policy uncertainty EPU1 are significantly positive at the 1% level, and the coefficients of the quadratic term of EPU1 are significantly negative at the 1% level. The results indicate an inverted U-shaped relationship between economic policy uncertainty and green innovation in enterprises with both high and low financialization. The regression models of the influence of economic policy uncertainty on green innovation were obtained for the two groups of samples: Formula (9) is the regression model for enterprises with high financialization, and Formula (10) is the model for enterprises with low financialization.

From the regression models, the curves of the impact of economic policy uncertainty on the green innovation of enterprises with high and low financialization can be obtained, as shown in Figure 6. The left side of the inverted U-shaped curve is steeper for the low financialization firms than for the high financialization firms, indicating that economic policy uncertainty has a more pronounced promoting effect on green innovation in enterprises with low financialization [58]. Therefore, financialization effectively moderates the impact of economic policy uncertainty on green innovation.

Conclusions and Recommendations

This paper examined the impact of economic policy uncertainty on green innovation at both the national and regional levels. The Baker index, based on news media and network information, was used to measure the uncertainty of national economic policy, and the official exchange index, based on the complex network, was used to measure the uncertainty of economic policy in prefecture-level cities.
It was found that there is an inverted U-shaped relationship between economic policy uncertainty and firms' green innovation capability. Moreover, the uncertainty index of national macroeconomic policy lies mostly on the left side of the inverted U, where it can promote the improvement of enterprises' green innovation capability; however, overly frequent changes in regional economic policies inhibit enterprises' green innovation capability. This paper further analyzed the moderating effects of the financialization of investment behavior and of financing constraints on the impact of economic policy uncertainty on enterprises' green innovation, from the perspective of investment and financing behavior choices. It was found that the impact of economic policy uncertainty on green innovation is more pronounced for firms with low financing constraints and low financialization. The research content of this article still has some limitations. In the future, the authors intend to further explore the causes and mechanisms of the impact of economic policy uncertainty on green innovation. This paper finds that the impact of economic policy uncertainty on green innovation takes an inverted U shape but does not further analyze why this inverted U-shaped relationship holds. Macroeconomic policy changes affect enterprises' choice of green innovation behavior in many ways, yet it is very difficult to find instruments that affect one of many endogenous variables but not the others. This also makes it difficult to analyze the reasons and channels through which economic policy uncertainty affects enterprises' green innovation; this is the direction and breakthrough point of the authors' future research. Based on the above findings, this paper makes the following recommendations to governments and enterprises.
Suggestions for the Government to Formulate Economic Policies

Government policies are an important tool for macroeconomic regulation and control, and the innovation activities of enterprises are subject to both the "facilitating effect" and the "inhibiting effect" of economic policies, resulting in a non-linear relationship between policy uncertainty and the innovation capability of enterprises. Therefore, when formulating policies, policymakers should fully weigh gains against losses and costs against benefits. Furthermore, policymakers should consider both the incentive effects and the negative impacts of policy uncertainty on enterprises' micro behavior, and reasonably calibrate the frequency and magnitude of policy adjustments. Additional recommendations include increasing efforts to promote the reform and development of the financial market, reducing financing costs, and broadening financing channels. Enterprise innovation activities require the injection of a large amount of capital, and some enterprises still face expensive and difficult financing. The government should focus on optimizing the financing environment of the financial market, widening enterprise financing channels, and reducing enterprise financing costs, thereby alleviating enterprise financing constraints and solving the problem of expensive and difficult financing. Establishing a sound market mechanism further improves the investment environment for enterprises, encourages enterprises to profit from the real economy, and reduces the financialization of enterprises. A sound market mechanism can promote the rational allocation of resources, improve the price mechanism, stimulate the innovation vitality of enterprises through efficient market allocation, prompt enterprises to apply funds to innovation projects on their own initiative, and give full play to the role of government macro control.

Suggestions for Improving the Green Innovation Capacity of Enterprises

Enterprises should view policy fluctuations rationally. An increase in economic policy uncertainty presents enterprises with both risks and opportunities. If enterprises regard policy uncertainty solely as a risk, it will dampen their innovation activities and hinder their development; if they regard it solely as an opportunity and blindly pursue profits through ill-considered innovation activities, they will be hit hard. Enterprises need to analyze rationally the opportunities and risks brought by economic policy uncertainty to gain an advantageous position in competition. Finally, strengthening the control of enterprise capital is important.
Adequate cash flow is the basis for maintaining the daily operation of an enterprise; it helps the enterprise seize opportunities, make flexible decisions, and develop sustainably. It is particularly important for enterprises to strengthen financial control in daily operations, tighten the management of cash flow, and improve capital management, especially when economic policy uncertainty is high.

Conflicts of Interest: The authors declare no conflict of interest.
Radiomics based on biparametric MRI for the detection of significant residual prostate cancer after androgen deprivation therapy: using whole-mount histopathology as reference standard

We aimed to study a radiomics approach based on biparametric magnetic resonance imaging (MRI) for determining significant residual cancer after androgen deprivation therapy (ADT). Ninety-two post-ADT prostate cancer patients underwent MRI before prostatectomy (62 with significant residual disease and 30 with complete response or minimum residual disease [CR/MRD]). In total, 100 significant residual lesions, 52 CR/MRD lesions, and 70 benign tissues were selected according to pathology. First, 381 radiomics features were extracted from T2-weighted imaging, diffusion-weighted imaging (DWI), and apparent diffusion coefficient (ADC) maps. Optimal features were selected using a support vector machine with a recursive feature elimination algorithm (SVM-RFE). Then, ADC values of significant residual lesions, CR/MRD lesions, and benign tissues were compared by one-way analysis of variance. Logistic regression was used to construct models with the SVM-selected features to differentiate between each pair of tissues. Third, the efficiencies of the ADC value and the radiomics models for differentiating the three tissues were assessed by the area under the receiver operating characteristic curve (AUC). The ADC value (mean ± standard deviation [s.d.]) of significant residual lesions ([1.10 ± 0.02] × 10⁻³ mm² s⁻¹) was significantly lower than that of CR/MRD ([1.17 ± 0.02] × 10⁻³ mm² s⁻¹), which was significantly lower than that of benign tissues ([1.30 ± 0.02] × 10⁻³ mm² s⁻¹; both P < 0.05). The SVM feature models were comparable to the ADC value in distinguishing CR/MRD from benign tissue (AUC: 0.766 vs 0.792) and distinguishing residual disease from benign tissue (AUC: 0.825 vs 0.835) (both P > 0.05) but superior to the ADC value in differentiating significant residual disease from CR/MRD (AUC: 0.748 vs 0.558; P = 0.041). A radiomics approach with biparametric MRI could promote the detection of significant residual prostate cancer after ADT.

INTRODUCTION

The global incidence of prostate cancer has been increasing in most countries;1 in addition, 20% of newly diagnosed prostate cancer cases in Northern and Western Europe are advanced or metastatic disease, and in China, this proportion is 68%.2,3 Androgen deprivation therapy (ADT) is a key primary treatment for advanced and metastatic prostate cancer4 and is an important neoadjuvant therapy before radiotherapy and surgery. Currently, the assessment of the effect of ADT in prostate cancer is mainly based on the serum prostate-specific antigen (PSA) test, but this test has shortcomings. First, it cannot discriminately assess the changes in primary and metastatic lesions.5 Second, neuroendocrine differentiation often occurs in hormonally treated prostate cancer,6 leading to a low PSA even when cancer has progressed, thus nullifying PSA monitoring of such tumors. Conventional magnetic resonance imaging (MRI) was considered unsuitable for the assessment of prostate cancer after ADT because the contrast between tumor and surrounding tissue diminishes after treatment. Previous studies assessed the response to ADT by comparing pre- and post-ADT MRI images without full pathological correlation; the selection and delineation of lesions were based on pre-ADT MRI and focused on significantly visible cancer, leading to bias in the results.
Even in a study based on whole-mount pathology, the change in lesion volume between pre- and post-ADT images, rather than lesion appearance on the post-ADT images, was used to assess the treatment effect.14 Since pre-ADT data might not be available for every patient, the exact efficiency of detecting residual lesions based mainly on post-ADT images needs to be investigated. Moreover, after ADT, T2-weighted imaging (T2WI) is considered unsuitable for the detection of residual disease in clinical applications, and the change in ADC value in prostate cancer was controversial in previous studies: some studies suggested a significant increase in ADC value after ADT,11,13 but one study showed an inconspicuous change.15 Therefore, a new method is needed to analyze post-ADT images of prostate cancer. With the rise of radiomics, many studies have investigated the use of radiomics features, especially texture features, in prostate cancer patients; these features have shown a promising ability to differentiate between tumor and benign tissues.16-18 Texture features for prostate cancer19 can be derived using gray-level co-occurrence matrices (GLCMs) and aim to separate or classify different tissue types; different statistical features can be extracted from GLCMs, such as Haralick features. Evaluating prostate cancer MRI data with the GLCM approach might improve tumor detection after ADT. In two recent studies, textural features could distinguish tumors from benign tissues after ADT even in cases with low contrast between the tumor and surrounding tissue,12,19 but these studies were not based on whole-mount pathology and did not assess whether the lesions responded to ADT. The exact value of radiomics methods for detecting prostate cancer after ADT remains unclear. Therefore, the aim of this study was to use the radiomics method to analyze post-ADT biparametric MRI (bpMRI) images to assess prostate cancer and benign peripheral zone (PZ) tissues and to investigate the ability of this approach to detect significant residual prostate cancer; in addition, to ensure that all the analyses were based on whole-mount histopathology, only patients who underwent prostatectomy after neoadjuvant ADT were enrolled in the study.

PATIENTS AND METHODS

This retrospective study was approved by the Ethics Review Board of Fudan University Shanghai Cancer Center (Shanghai, China; No. 2005217-2). The requirement for informed consent was waived because it was a retrospective study and we used noninvasive methods.

Patients

From January 2015 to May 2021, 92 patients who underwent prostatectomy after neoadjuvant ADT and had preoperative MRI scans were retrieved from our hospital information system according to the following criteria: (1) had clinically significant prostate cancer (Gleason score >6, greatest percentage of cancer >50%, and more than two positive cores) confirmed by biopsy before ADT, with post-ADT pathology confirmed by radical prostatectomy; (2) underwent pre-ADT bpMRI examinations within 4 weeks before or after biopsy, with bpMRI images of identified prostate cancer; (3) underwent post-ADT bpMRI examinations in our hospital within 2 weeks before surgery; (4) were treated for more than 3 months with complete androgen blockade with bicalutamide plus ADT with goserelin, leuprolide, or abiraterone (at the discretion of the treating physician); and (5) underwent ADT only.
Among these patients, a total of 60 patients with significant residual lesions confirmed by pathology and 32 patients with pathologic complete response or minimum residual disease (CR/MRD) were enrolled. The clinical data of these patients are listed in Table 1.

Histopathology analysis

After radical retropubic prostatectomy, the intact specimens were inked for laterality and fixed in formalin overnight at room temperature. Care was taken in each case to maintain the orientation of each slice of the prostate so that the same side was routinely cut (i.e., the superior or inferior edges for each prostate cross-section), thus allowing for relatively equal spaces between hematoxylin and eosin (HE) sections. Subsequently, 5-μm tissue sections were cut, mounted on glass slides, and stained with HE. The lesions were assessed and recorded from whole prostate samples by a dedicated genitourinary pathologist with more than 16 years of experience in genitourinary pathology. All of the lesions were identified by whole-mount pathology in the positive region confirmed by pre-ADT biopsy. The lesions were divided into two categories, significant residual disease and CR/MRD, because patients with MRD and CR have similar prognoses that are much better than the prognosis of patients with significant residual disease.20 Pathological CR was defined according to the previous literature21 based on features such as reduction in gland size with decreased glandular density and increased periglandular density, as well as almost complete degeneration of cancerous cells. MRD was considered when the largest cross-sectional bidimension of the residual lesions was shorter than 5 mm. Significant residual lesions were identified as lesions larger than 5 mm. The outlines of the lesions (residual disease and CR/MRD) and benign PZ tissues were drawn on HE slices for further analysis; if MRD lesions were intermixed with CR tissue, they were delineated together. For each patient, one side of the noncancerous PZ confirmed by pathology was also delineated on all slices as the benign tissue control; if the volume of the PZ was reduced due to involvement of the cancer and was difficult to assess, it was excluded.

Region of interest (ROI) delineation

ROIs of significant residual disease, CR/MRD, and benign tissue in the peripheral zone were contoured using ITK-SNAP software, version 3.4.0 (www.itksnap.org), an open-source image processing software program. ROIs were confirmed on all bpMRI maps and contoured together by two radiologists with 5 years (ZZC) and 13 years (XHL) of experience in prostate MR imaging. The ROIs were drawn on the MRI images in ITK-SNAP according to the labeled HE slices under the direction of the pathologist (HLG), as shown in Figures 1 and 2. The location and border of the lesions or benign tissues were identified based on the location of the ejaculatory ducts, the dimensions of the prostate, any identifiable benign prostate hyperplasia (BPH) nodules, and the approximate distance from the base or apex. For each lesion, all positive MRI slices confirmed by pathology were included in the ROIs. If a patient had suspicious lesions or lesions with unclear borders, the pre-ADT MRI images were referenced for confirmation. Any differences in measurement were resolved by consensus.

Radiomics

Radiomics analysis was performed using Artificial Intelligence Kit software (version 3.0.0.R, GE Healthcare, Shanghai, China).
In total, 381 features were extracted for each ROI, including 39 histogram features, 54 texture features, 119 GLCM features, and 169 run-length matrix (RLM) features. The mean ADC value was also extracted as one of the histogram features from the ROI on the ADC map. Since the intensity range of T2WI images was not always consistent among different patients, before radiomics feature extraction, an image normalization process was used to normalize the gray values of T2WI images with Python programming software (version 3.6, Python Software Foundation; www.python.org). In this process, the image was normalized by centering its intensity at the mean value and scaling by the standard deviation (s.d.), with a scale of 100. The equation for image normalization was Inormalized = (I − μ1)/σ1 × 100, where I denotes the original intensity of the T2WI image, Inormalized is the normalized intensity, μ1 is the mean value of the image intensity, and σ1 is the standard deviation of the image intensity. Next, the ROI data of the significant residual disease and benign tissue in each subgroup were randomly divided into training and validation sets, and radiomics features were extracted for each patient in Artificial Intelligence Kit software. The support vector machine-based recursive feature elimination (SVM-RFE) algorithm was applied to rank the features and select the optimal features according to their importance, as described in a previous study.22 The selected features were then used to build prediction models (training and validation models) for significant residual cancer with the following classifier methods successively using the software: decision tree, naive Bayes, K-nearest neighbor, logistic regression, support vector machine (SVM), bagging, random forest, extremely randomized trees, AdaBoost, and gradient boosting tree. Receiver operating characteristic (ROC) curve analysis was used to assess the efficiency of each model, and the logistic regression method showed a slightly larger area under the ROC curve (AUC) for the validation models than the other methods. Theoretically, the cost function of logistic regression diverges faster than those of the other classifier methods, so logistic regression is more sensitive to outliers, which might remedy the relatively lower sensitivity and higher variability of MRI for prostate cancer to some extent. Therefore, logistic regression was selected as the final method for differentiation. A similar process was performed on the significant residual disease and CR/MRD lesion data to construct models for predicting significant residual disease, and on the data of CR/MRD and benign tissue to construct models for predicting CR/MRD tissue. Logistic regression also showed the highest AUCs and was selected as the final method for differentiation.
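The sketch below illustrates the modeling pipeline just described (T2WI normalization, SVM-RFE feature selection, and a logistic regression classifier evaluated by AUC) using scikit-learn on a synthetic stand-in for the 381-feature matrix. It is not the study's actual code, which used the Artificial Intelligence Kit and Stata; feature counts and labels here are invented.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def normalize_t2wi(img: np.ndarray) -> np.ndarray:
    """Center the image at its mean and scale by its s.d., with a scale of 100."""
    return (img - img.mean()) / img.std() * 100.0

# Stand-in feature matrix: 170 ROIs x 381 radiomics features, binary tissue label.
rng = np.random.default_rng(42)
X = rng.normal(size=(170, 381))
y = rng.integers(0, 2, size=170)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

scaler = StandardScaler().fit(X_tr)
X_tr_s, X_va_s = scaler.transform(X_tr), scaler.transform(X_va)

# SVM-RFE: rank features with a linear SVM, recursively dropping the weakest.
rfe = RFE(SVC(kernel="linear"), n_features_to_select=5).fit(X_tr_s, y_tr)

# Fit logistic regression on the selected features and validate by AUC.
clf = LogisticRegression(max_iter=1000).fit(X_tr_s[:, rfe.support_], y_tr)
auc = roc_auc_score(y_va, clf.predict_proba(X_va_s[:, rfe.support_])[:, 1])
print(f"validation AUC = {auc:.3f}")
```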
Statistical analyses

All of the statistical analyses were performed with dedicated software (Stata Statistical Software, version 10; Stata Corp LP, College Station, TX, USA), and P < 0.05 was considered statistically significant. Independent t-tests and the Chi-square test were applied to determine significant differences in the patients' clinical characteristics. The ADC values of residual disease, CR/MRD, and benign tissue were compared with a one-way analysis of variance (ANOVA) with Bonferroni's correction, which requires a P value of 0.05/3 = 0.0167 or less to be considered significant; if P < 0.0167, a series of paired t-tests was performed between each pair of tissues in the set. For each pair of tissues with significantly different ADC values, ROC analysis was used to differentiate the two tissues in the validation dataset of the radiomics analysis. The radiomics signatures were entered into the Stata system for logistic regression analysis. In the training set of significant residual cancer and benign tissue data, univariate logistic regression analysis was performed for each potential predictive factor for residual disease. Next, the features found to be statistically significant in univariate logistic regression analysis were analyzed with multivariate logistic regression analysis for model construction. Similar processes were performed on the significant residual disease and CR/MRD lesion data to construct the logistic regression model for predicting significant residual disease, and a model for predicting CR/MRD was also constructed with the CR/MRD and benign tissue data. ROC analysis was used to evaluate the discriminative ability of the models between significant residual disease and CR/MRD tissues, between CR/MRD and benign tissues, and between significant residual disease and benign tissue, and to compare the models with the ADC values. The differentiation efficiencies of the radiomics features and ADC values were compared based on the AUC value.

RESULTS

Comparison of clinical data for patients with significant residual cancer and CR/MRD

No significant differences were found between the significant residual cancer and CR/MRD groups in age, initial PSA, M stage, or Gleason score (GS) (all P > 0.05). The post-ADT PSA was higher in patients with significant residual disease (P = 0.019). The median duration of ADT was longer in the CR/MRD patients than in the patients with residual disease (P = 0.003).

Differentiating among significant residual disease, CR/MRD lesions, and benign tissue using the ADC value

In total, 100 significant residual lesions (mean diameter: 1.8 cm, range: 0.5-3.5 cm), 60 CR/MRD lesions (mean diameter: 1.4 cm, range: 0.5-2.7 cm), and 70 benign tissues were included in the final analysis. The distribution of samples in the training and validation sets is listed in Table 2. The ADC value (mean ± s.d.) of significant residual lesions ([1.10 ± 0.02] × 10⁻³ mm² s⁻¹) was significantly lower than that of CR/MRD lesions ([1.17 ± 0.02] × 10⁻³ mm² s⁻¹), which was, in turn, significantly lower than that of benign tissue ([1.30 ± 0.02] × 10⁻³ mm² s⁻¹), with P = 0.021 and 0.001, respectively (Figures 1 and 2). In the validation set, the AUC of the ADC value was 0.792 for differentiating between CR/MRD and benign lesions, 0.835 for differentiating between significant residual disease and benign lesions, and 0.558 for differentiating between CR/MRD and significant residual lesions (Figure 3).
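As an illustration of the statistical workflow reported above (one-way ANOVA with Bonferroni-corrected pairwise tests, followed by ROC analysis of the ADC value), the snippet below uses synthetic ADC values centered on the reported group means; the spreads and sample draws are invented, and independent-samples t-tests stand in for the pairwise comparisons.

```python
import numpy as np
from scipy import stats
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
# Illustrative ADC values (x10^-3 mm^2/s) for the three tissue types.
residual = rng.normal(1.10, 0.15, 100)
cr_mrd = rng.normal(1.17, 0.15, 60)
benign = rng.normal(1.30, 0.15, 70)

# One-way ANOVA across the three groups.
F, p = stats.f_oneway(residual, cr_mrd, benign)
print(f"ANOVA: F={F:.2f}, p={p:.4f}")

# If significant at the Bonferroni-corrected level 0.05/3, run pairwise t-tests.
if p < 0.05 / 3:
    pairs = {"residual vs CR/MRD": (residual, cr_mrd),
             "CR/MRD vs benign": (cr_mrd, benign),
             "residual vs benign": (residual, benign)}
    for name, (a, b) in pairs.items():
        print(f"{name}: p={stats.ttest_ind(a, b).pvalue:.4f}")

# ROC: how well does the ADC value separate CR/MRD from benign tissue?
labels = np.r_[np.ones_like(cr_mrd), np.zeros_like(benign)]
auc = roc_auc_score(labels, -np.r_[cr_mrd, benign])  # lower ADC suggests CR/MRD
print(f"AUC = {auc:.3f}")
```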
Differentiating among significant residual disease, CR/MRD lesions, and benign tissue using the radiomics method

In the assessment of the significant residual disease and benign tissue radiomics data in the training set, the values of five radiomics features (one GLCM feature and one histogram feature from the ADC map, one GLCM feature from DWI, and two GLCM features from DWI) were positively correlated with the risk of significant residual tissue (all P < 0.05; Figure 4). The prediction model for residual lesions based on these features showed an AUC of 0.921 for the detection of significant residual lesions in the training set. In the validation set, this model showed an AUC of 0.825 for the detection of significant residual lesions, which was similar to that of the ADC value (P = 0.917; Figure 3). In the assessment of the CR/MRD and benign tissue radiomics data in the training set, the values of two radiomics features (one histogram feature from the ADC map and one GLCM feature from DWI) were positively correlated with CR (both P < 0.05), and the values of three radiomics features (one RLM feature and two GLCM features from the ADC map) were negatively correlated with CR (all P < 0.05; Figure 4). The prediction model showed an AUC of 0.853 for the detection of CR/MRD. In the validation set, this model showed an AUC of 0.766 for the detection of CR/MRD, which was similar to that of the ADC value (P = 0.672; Figure 3). In the assessment of the CR/MRD and significant residual disease radiomics data in the training set, the values of four radiomics features (two GLCM features from DWI and two GLCM features from the ADC map) were positively correlated with CR (all P < 0.05), and the values of five radiomics features (two GLCM features from DWI, two GLCM features from the ADC map, and one RLM feature from DWI) were negatively correlated with the risk of significant residual disease (all P < 0.05; Figure 4). The prediction model showed an AUC of 0.854 for the detection of significant residual tissue. In the validation set, this model showed an AUC of 0.748 for the detection of significant residual cancer, which was significantly higher than that of the ADC value (P = 0.041; Figure 3).

DISCUSSION

Our study proved that radiomics methods combined with bpMRI could assess the appearance of prostate cancer after ADT and differentiate significant residual prostate cancer from benign and CR/MRD tissues. In our study, the ADC values of significant residual disease and CR/MRD cancer were significantly lower than those of benign tissue, similar to previous studies.11-13 After ADT, the ADC of benign prostate tissue declines due to glandular atrophy, fibrosis, basal cell hyperplasia, and stromal hypercellularity, as well as reduced overall glandular stromal tissue and gland volume. In prostate cancer responding to ADT, the ADC value increases due to the net decrease in glandular ducts (i.e., a net decrease in cellular size or number in tumors) within atrophic prostate cancer tissue as a result of apoptosis. However, the gap between the two tissues remains remarkable. The ADC of significant residual cancer remained low; thus, the differentiation of significant residual disease or CR/MRD tissue from benign tissue was reliable.13,14 Such a result could further enable the possibility of delineating cancer in patients receiving hormonal treatment, which is important for subsequent radiotherapy and other therapies.
The application of radiomics revealed new parameters for the detection of CR/MRD cancer but did not achieve a significantly improved result. On the one hand, this study proved that the ADC value can still play a dominant role in the delineation of prostate cancer due to its reliability and convenience. On the other hand, the radiomics results demonstrate the possibility of using machine learning based on these features to detect cancer in the future, an approach that would be faster than human diagnosis. In the differentiation of significant residual lesions from CR/MRD tissue, the ADC value showed low efficiency. The ADC of significant residual cancer was significantly lower than that of CR/MRD, but the gap was minimal, limiting the differentiation of the two tissues. This outcome might be explained by how we drew the ROIs based on the whole-mount pathological data, which included different percentages of benign tissue. Moreover, during ADT, cell death and atrophy of the gland occur simultaneously, and histological changes vary in the initial months of ADT, rendering the change in ADC more uncertain. Thus, the overall change in ADC value might not be sufficient for the assessment of cancer after ADT. No features from T2WI added to the efficiency of distinguishing significant residual disease from CR/MRD tissue, consistent with the previous view that a change in T2WI signal in prostate cancer after ADT could nullify the detection of residual lesions.23 The textural features of the ADC map and DWI were more sensitive to such complex changes. Significant residual tissue is associated with lower short-run emphasis from DWI and lower inverse difference moments from DWI and ADC maps. Short-run emphasis measures the distribution of short homogeneous runs. The inverse difference moment (IDM) measures local homogeneity and is high when the local gray level is uniform. Regions with significant residual tissue always contain more complex tissue, such as glandular tumors with different responses to ADT alongside atrophy of the gland, thus increasing local heterogeneity and reducing these parameters. Interestingly, for the texture parameters correlation and GLCM entropy at various offsets, the correlations between pathology and these parameters on DWI were all opposite to those for the ADC map. Image-based correlation measures the similarity of the gray levels in neighboring pixels, and entropy measures the randomness of intensity in images. Although the mechanism of such differences remains unclear, these measures contributed 8 of the 11 effective parameters. These textural features are sensitive to minimal changes and to the homogeneity of the whole tissue and thus show more detail than the overall ADC value. The results point to the importance of radiomics features on DWI and ADC maps for identifying the status of cancer after ADT.
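For readers unfamiliar with these texture features, the sketch below computes GLCM-based homogeneity (akin to the inverse difference moment), correlation, and energy at two offsets and four directions with scikit-image (assuming a recent version exposing graycomatrix/graycoprops), plus GLCM entropy computed by hand; the ROI patch is random stand-in data, not study imagery.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(3)
roi = rng.integers(0, 64, size=(32, 32)).astype(np.uint8)  # quantized ROI patch

# GLCMs at pixel offsets 1 and 4 in four directions (the "offset" variants).
glcm = graycomatrix(roi, distances=[1, 4],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=64, symmetric=True, normed=True)

homogeneity = graycoprops(glcm, "homogeneity")  # akin to the IDM
correlation = graycoprops(glcm, "correlation")
energy = graycoprops(glcm, "energy")

# GLCM entropy (not built into graycoprops): -sum p*log(p) per matrix.
p = glcm + 1e-12  # avoid log(0)
entropy = -(p * np.log(p)).sum(axis=(0, 1))

print(homogeneity.shape, entropy.shape)  # (n_distances, n_angles) per property
```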
The assessment of prostate cancer and detection of residual disease after ADT are important to the planning and prognosis prediction of subsequent therapy, even in patients who will undergo surgery.20 In recent years, prostate-specific membrane antigen-positron emission tomography/computed tomography (PSMA-PET/CT) imaging has been applied to patients treated with ADT to assess treatment response and for the early detection of castration-resistant lesions, and some studies have demonstrated that PSMA-PET/CT might be a suitable quantitative imaging modality for patients after neoadjuvant ADT.24,25 However, PSMA-PET also has disadvantages, such as false-negative findings in tumors with no or faint PSMA expression,26 lower resolution for visualizing the prostate structure, and higher cost than MRI. Therefore, we believe that in clinical practice, radiomics methods combined with bpMRI remain a potential method for assessing the response to ADT in prostate cancer. To the best of our knowledge, studies of radiomics based on bpMRI for the detection of residual lesions have not been reported before, but there have been a few radiomics studies comparing lesions and normal tissue after ADT. Our study was partly consistent with previous studies in which radiomics provided useful efficiency for the differentiation of cancer and benign tissue, even after ADT; moreover, the number of textural features included in our study was similar to that in the study by Hedgire et al.12 but much smaller than that in the study by Daniel et al.19 The reason might be that the ROIs in that study were delineated on T2WI with reference to the ADC value after ADT,19 which might have caused bias because only lesions with remarkable residual disease and dense structures were included, thus increasing the difference between tumors and benign tissues. Meanwhile, in our study, the volume of the PZ shrank after ADT, which reduced the number of voxels; thus, there were fewer effective features associated with the tissues in the PZ. There were some limitations to our study. First, the study design was retrospective, which might have led to selection bias. For instance, most of the patients who underwent surgery after ADT completely or partly responded to ADT, and patients with progressing disease were seldom included because of the lack of surgery. The ADT protocol and duration also varied, and the patients with CR/MRD had a much longer treatment duration than those with significant residual disease. Second, some small lesions might have been missed on MRI after ADT. Third, several patients had more than one lesion, which might have influenced the uniqueness of individual lesions. Fourth, although our results suggest the potential of radiomics methods for the detection of residual prostate cancer after ADT, no definitive conclusions regarding the use of this method in clinical practice can be reached until a larger number of patients are prospectively evaluated.

CONCLUSIONS

Our study proved that a radiomics method based on bpMRI could differentiate significant residual prostate cancer after ADT from CR/MRD lesions and benign tissue, suggesting a new method for the assessment of prostate cancer after ADT.

Figure 4: Distribution and ranking of optimal radiomics features for distinguishing (a) between significant residual and CR/MRD lesions, (b) between significant residual disease and benign tissue, and (c) between CR/MRD and benign tissue. The name of each feature is listed on the left side of the corresponding column. Inverse difference moment, GLCM energy, and GLCM entropy measure the local homogeneity, overall homogeneity, and randomness of gray levels of the image, respectively, in one or more directions. The offset number represents the number of interval pixels between the neighboring points of measurement. Short-run emphasis measures the distribution of short homogeneous runs (small batches of pixels) in an image, and correlation measures the similarity of the gray levels in neighboring pixels. The parameters are listed in the format "name_direction_offset number".
The height of each column represents the contribution of each feature to the differentiation task; the higher the column, the greater the contribution. The color of each column represents the MRI modality. CR/MRD: complete response or minimum residual disease; MRI: magnetic resonance imaging; DWI: diffusion-weighted imaging; ADC: apparent diffusion coefficient; GLCM: gray-level co-occurrence matrix; s.d.: standard deviation.

AUTHOR CONTRIBUTIONS

XHL and LPZ conceived and designed the study. XHL, ZZC, and WJG collected the data. HLG performed the pathological analyses. WL and YZ performed the radiomics analyses of images. BNZ performed the statistical analyses. XHL and ZZC wrote the manuscript with input from all coauthors, and LPZ reviewed and made amendments to the manuscript. All authors read and approved the final manuscript.

COMPETING INTERESTS

Yong Zhang, PhD, is a research scientist in the MR Research department of GE Healthcare, a US company that manufactures medical equipment, especially diagnostic imaging systems; his position in the company is in the leading professional band, he does not hold any shares of the company, and he declares no competing interests. The other authors declare no competing interests.
Novel Approaches to Postnatal Prophylaxis to Eliminate Vertical Transmission of HIV

Despite progress in providing antiretroviral therapy to pregnant women living with HIV, a substantial number of vertical transmissions continue to occur. Novel approaches leveraging modern potent, safe, and well-tolerated antiretroviral drugs are urgently needed.

INTRODUCTION

With new advances in antiretroviral drugs for HIV prevention and treatment, as well as increasing coverage globally, the landscape around vertical transmission of HIV is rapidly changing. It is estimated that 82% of pregnant women globally received antiretroviral therapy (ART) in 2021.1 Increasingly, women in sub-Saharan Africa are transitioning to optimized dolutegravir-based antiretroviral regimens, offering greater potency and durability. Women with HIV also have access to better care, including integrated antenatal and HIV services, viral load monitoring, multi-month ART dispensing, and patient-centered differentiated service delivery models with postnatal follow-up of the mother-infant dyad.2 However, a substantial number of vertical transmissions continue to occur perinatally and during breastfeeding, driven by shortfalls in testing, treatment coverage, adherence, and retention in care among mothers. Among an estimated 160,000 children with new HIV infections globally in 2021, 48% had mothers who had not started ART, 22% had mothers who discontinued their treatment, and 8% had mothers who were on treatment but not able to maintain virologic suppression.1 Furthermore, incident HIV infections among pregnant and breastfeeding women contribute to an increasing proportion of new pediatric infections.3

Postnatal prophylaxis (the provision of antiretroviral drugs to HIV-exposed infants) remains a key tool to reduce vertical transmission. In the absence of effective maternal ART, postnatal prophylaxis has been demonstrated effective in the prevention of vertical transmission around the time of delivery and during breastfeeding. Current World Health Organization (WHO) guidelines for postnatal prophylaxis are designed primarily to reduce the risk of transmission around the time of delivery, with regimens that are risk-stratified depending on maternal treatment status, timing of ART initiation, and virological suppression (Figure 1).4 Infants born to women who start ART late in pregnancy or at the time of delivery are at high risk of HIV acquisition. For these high-risk situations, the WHO guidelines recommend giving postnatal prophylaxis to the infant for 12 weeks (Figure 2).5
These guidelines have been adopted widely, with countries customizing them to their contexts. But their effective implementation has been limited for several reasons. Risk stratification, which identifies infants requiring more complex postnatal prophylactic regimens at birth, can be difficult to implement, especially in resource-limited settings with limited access to viral load monitoring. Furthermore, there are challenges to administering antiretroviral drugs to newborns. Those currently in use, nevirapine and zidovudine, require daily or twice-daily oral administration and are delivered as syrups dispensed in bulky bottles with the potential for unintentional disclosure of the maternal HIV status. Because these drugs are no longer considered optimal for treating HIV infection, the decreasing global market for them has led to challenges in supply security. Finally, postnatal prophylaxis is not currently routinely recommended by the WHO for infants aged older than 12 weeks. This leaves breastfeeding infants vulnerable to transmission if their mothers experience viremia secondary to lapses in adherence or treatment failure or if there are drug stock-outs. Some countries have adapted the guidelines to include prophylaxis during breastfeeding.

To further reduce the risk of HIV acquisition during breastfeeding, there is a need for more effective approaches to infant prophylaxis that are easier to implement, safe when used for prolonged durations, take advantage of modern drugs, and capitalize on novel delivery platforms. In 2021, the WHO and the International Maternal Pediatric Adolescent AIDS Clinical Trials Network convened a workshop series to accelerate the research and development of new agents and new approaches to postnatal prophylaxis. The second meeting of the series was held June 8-10, 2021. In this article, we present the data reviewed and the ideas generated by workshop participants to identify potential innovative strategies for postnatal prophylaxis to prevent HIV vertical transmission.

WHAT IS THE EVIDENCE BASE FOR CURRENT APPROACHES TO POSTNATAL PROPHYLAXIS?

The first evidence that antiretroviral drugs reduced the risk of vertical transmission, and one of the earliest success stories in the HIV epidemic response, came from the Pediatric AIDS Clinical Trials Group 076 study.6 Zidovudine was given to pregnant women from 14 weeks of gestation through labor and delivery and to their infants for the first 6 weeks after birth; this strategy reduced transmission risk by more than two-thirds. Subsequent trials demonstrated benefit from shorter courses of maternal/infant antiretroviral drugs,6-8 from the use of infant prophylaxis alone,9 and from early initiation of infant prophylaxis after birth.10 Studies in the late 1990s and early 2000s demonstrated the benefit of single-dose nevirapine prophylaxis but also identified the first concerns about the development of HIV drug resistance among infants acquiring HIV infection despite postnatal prophylaxis.11-16 In 2010, the HIV Prevention Trials Network 040/Pediatric AIDS Clinical Trials Group 1043 study demonstrated that for infants of mothers who delivered before starting treatment, 2-drug or 3-drug short-course regimens to the infant provided equivalent protection against infection, with peripartum transmission rates of 2%-3%.17 Trials of extended daily infant nevirapine demonstrated protection against HIV transmission during breastfeeding.18-21 Similar results have been reported for both daily lamivudine and lopinavir/ritonavir used for prophylaxis during breastfeeding.22
Of note, no additional benefit was reported when zidovudine was added to nevirapine for infants during breastfeeding in 1 trial.18 In the era of universal ART and with increasingly effective antiretroviral regimens for pregnant and breastfeeding women, the low rate of vertical transmission has made it more difficult to conduct studies to evaluate new agents and approaches to infant prophylaxis. Novel study approaches, such as the Bayesian trial design used to analyze data from the PHPT5 study in Thailand, are being considered to allow new strategies for postnatal prophylaxis to be studied efficiently.23,24

WHAT ARE THE CHALLENGES SPECIFIC TO STUDYING PHARMACOKINETICS AND DETERMINING DOSING FOR NEONATES?

Many currently approved antiretroviral drugs, like dolutegravir, used in the treatment of children with HIV have untested potential as potent agents for postnatal prophylaxis. A new generation of long-acting antiretroviral drugs, such as injectable cabotegravir, has been approved by the U.S. Food and Drug Administration for treatment and prophylaxis in adolescents and adults. Recent studies suggest that combinations of broadly neutralizing antibodies could potentially be effective for prophylaxis in adults. But the pharmacokinetics of all of these products must be established in neonates and infants before they can be trialed for efficacy. Drug development and dose-finding for neonates is particularly challenging not only because of difficulties in performing clinical research in this vulnerable population but also because of highly variable pharmacokinetics during the first months of life. Additionally, the necessity for multiple blood draws, safety procedures, and frequent study visits soon after birth can be a burden for families. Drug dosing for children older than 2 years can be relatively accurately estimated by extrapolating adult doses using allometric scaling according to body weight, which empirically describes the nonlinear relationship between drug elimination and body size. However, this is not applicable for neonates, as the rapid maturation of the physiological processes underlying drug absorption, distribution, metabolism, and excretion immediately after birth must also be considered. Rapid increases in antiretroviral drug metabolism and/or elimination can occur during the first few weeks of life, necessitating very low doses at birth followed by frequent dose increases during the first weeks of life.25,26 The challenges are even greater for preterm neonates, in whom the maturation timelines of gestational and postnatal age overlap.27 Thus, the pharmacokinetics of antiretroviral drugs cannot be simply extrapolated from studies in older children, and the optimal dose in neonates must be confirmed through dedicated clinical trials.28

Studies of neonatal "washout" are an efficient way to gain insight into the neonatal metabolism of drugs before formal neonatal pharmacokinetic dosing trials. Washout studies assess a drug taken by the mother that crosses the placenta; through repeated infant blood sampling after birth, these washout data can help provide an estimate of drug clearance over the first days and weeks of life. For newer antiretroviral drugs, drug exposure targets for postnatal prophylaxis are similar to those for treatment.28
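To illustrate the two pharmacokinetic ideas above, the sketch below applies weight-based allometric scaling with the standard 0.75 exponent (a textbook approximation, not a dosing recommendation, and explicitly not valid for neonates) and then estimates an apparent elimination half-life from hypothetical washout-style serial concentrations via a log-linear fit.

```python
import numpy as np

def allometric_dose(adult_dose_mg: float, weight_kg: float,
                    adult_weight_kg: float = 70.0, exponent: float = 0.75) -> float:
    """Scale an adult dose by body weight with the standard allometric exponent.

    A reasonable approximation for children older than ~2 years; NOT valid for
    neonates, whose maturing clearance pathways break the size-only relationship.
    """
    return adult_dose_mg * (weight_kg / adult_weight_kg) ** exponent

print(f"{allometric_dose(50.0, 12.0):.1f} mg")  # ~13.3 mg for a 12-kg child

# Washout sketch: estimate an apparent elimination half-life from serial infant
# samples of a transplacentally acquired drug (concentrations are illustrative).
t = np.array([12.0, 24.0, 48.0, 96.0, 168.0])     # hours after delivery
c = np.array([900.0, 700.0, 420.0, 160.0, 40.0])  # plasma concentration, ng/mL
k = -np.polyfit(t, np.log(c), 1)[0]               # log-linear elimination rate
print(f"apparent t1/2 = {np.log(2) / k:.1f} h")
```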
Several ongoing studies aim to expand antiretroviral drug options for neonates. The PETITE study has recently assessed the pediatric solid "4-in-1" granule formulation of abacavir, lamivudine, and ritonavir-boosted lopinavir (Cipla, Ltd) in term neonates exposed to HIV.29 Unfortunately, early data on the 4-in-1 formulation revealed low lopinavir plasma concentrations,29 and administering higher doses of the fixed-dose combination is not possible without risk of overexposure to the abacavir and lamivudine components. Subsequently, the PETITE study is now assessing the separate solid formulations of abacavir/lamivudine pediatric dispersible tablets and lopinavir/ritonavir granules in neonates. Dolutegravir-based treatment is currently recommended for the treatment of children aged at least 4 weeks and weighing at least 3.0 kg; studies of the pharmacokinetics and safety of dolutegravir in neonates are planned. The long dosing intervals offered by new long-acting antiretroviral drugs and broadly neutralizing antibodies would address many barriers to the implementation of neonatal prophylaxis, but these products will also require pharmacokinetic study in neonates and infants. For example, data about drug release following intramuscular injections of long-acting drugs in neonates are needed to ensure therapeutic levels are rapidly achieved after birth. Initial pharmacokinetic and safety studies of subcutaneous injections of VRC01 and VRC01LS in infants are encouraging; pharmacokinetic characteristics suggest the agents could be dosed infrequently (e.g., every 12 weeks) during breastfeeding.30,31

WHAT CAN WE LEARN FROM STUDIES OF HIV ANTIRETROVIRAL PROPHYLAXIS IN ADULTS ABOUT TARGET PRODUCT PROFILES?

Within the context of HIV prevention in adults, the ideal product profile for an agent includes efficacy, safety, high partitioning into genital tissue compartments, prolonged activity with convenient dosing, a high barrier to resistance, a unique resistance profile (i.e., one that would not compromise other drugs, especially first-line regimens), no significant drug-drug interactions with commonly coadministered medications, and low cost.32 The product must also be easy to implement, encompassing issues of acceptability, discretion, low likelihood of perpetuating stigma, and congruence with sexual practices. Tenofovir-based preexposure prophylaxis possesses many of these attributes but lacks discretion (oral administration is visible) and has suboptimal acceptability secondary to the requirement for daily dosing. These deficiencies, despite extraordinary efficacy, have limited its population-level benefit. The dapivirine ring has demonstrated a modest 30% reduction in HIV incidence, has high acceptability in some cisgender female populations, and is now recommended by the WHO.33,34 Long-acting injectable cabotegravir has been shown to be superior to daily oral tenofovir-based preexposure prophylaxis and has recently obtained U.S. Food and Drug Administration approval.35-37 With a long half-life, islatravir promised the possibility of monthly oral dosing or implants lasting 6-12 months; unfortunately, signs of lymphocyte toxicity placed the development of this agent on hold.38 A recent decision was made to continue the development of islatravir for treatment, but the prevention program was discontinued.39
Subcutaneous lenacapavir dosed every 6 months is in advanced-stage clinical trials. One concerning feature of all long-acting formulations is the "tail" of subtherapeutic concentrations that occurs when subsequent doses are missed. This tail of low concentrations could not only lower efficacy as prophylaxis but also select for resistance in the case of HIV acquisition during this period. For these reasons, clinical outcomes in the context of missed doses must be evaluated for all long-acting products. The role of broadly neutralizing antibodies in adult preexposure prophylaxis remains unclear, given the successes of small-molecule agents.40 It is as yet untested whether a combination of multiple antibodies can effectively prevent HIV transmission in human populations.

WHAT IS THE POTENTIAL ROLE OF PASSIVE IMMUNIZATION IN POSTNATAL PROPHYLAXIS?

While current approaches to prophylaxis against HIV infection in adults and children are based primarily on small-molecule drugs, it is important to remember that antibodies can provide powerful protection against viral infections. The gold standard for both prevention of vertical transmission and the generation of life-long immunity is the hepatitis B prevention program, which is highly effective when implemented during prenatal, delivery, and neonatal care visits. This maternal-infant targeted program employs both risk-based and universal prevention strategies: (1) passive immunization with hyperimmune globulin at delivery in infants of mothers with evidence of active hepatitis B infection; (2) antiviral treatment of mothers with high viral replication; and (3) universal active immunization at delivery and throughout infancy that can generate life-long protective immunity.41 Theoretically, this framework could be applied to the prevention of perinatal/postnatal HIV transmission. The addition of passive immunization of the infant with a combination of broadly neutralizing antibodies at birth to maternal and infant antiretroviral drugs is an attractive option, particularly in high-risk situations, such as detectable maternal viral load, acute maternal infection, or areas of high HIV prevalence where acute infections are more likely.42,43 Safety and pharmacokinetic studies in infants demonstrated that the subcutaneous administration of a CD4 binding site-directed antibody, VRC01, and its long-acting version, VRC01LS, in the first few days after birth had good tolerability, was safe, and persisted above a protective level for 8 weeks in more than 95% of infants.44 Moreover, studies in a nonhuman primate model of infant HIV infection revealed that broadly neutralizing antibody-based interventions could both provide prophylaxis and act as treatment, most effectively when administered within hours after birth.45 Of note, a broad array of broadly neutralizing antibodies is in the pipeline, and discussion is ongoing about how best to test different combinations of products for preventing postnatal infection.46 Finally, as active immunization strategies for HIV improve in their ability to generate broadly neutralizing antibody responses, and with the understanding that the infant immune system generates broadly neutralizing responses more frequently during infection than that of adults, children aged younger than 5 years may be the ideal population in which to initiate a multidose immunization schedule to achieve broad neutralization responses before adolescence.47
The combination approach of ART during pregnancy and breastfeeding, short-term antiretroviral prophylaxis to the infant around delivery, paired with broadly neutralizing antibody-based passive immunization for high-risk infants, and universal active HIV vaccination for all infants, mirroring hepatitis B prevention programs, could be the formula to eliminate pediatric HIV infections and generate long-term immunity. DO POSTNATAL PROPHYLAXIS STRATEGIES NEED TO BE RISK STRATIFIED? One of the central challenges in designing new approaches to postnatal prophylaxis is how to accommodate infants at different levels of risk during each period of exposure: in utero, intrapartum, and during breastfeeding. Traditionally, guidelines have recommended different regimens for the immediate postnatal period based on maternal risk factors (documented viremia and/or the recent start of ART). Furthermore, there is little evidence and hence limited guidance to address new or escalating risk if a mother becomes viremic during breastfeeding. A "1 regimen for all" approach would be ideal to overcome the challenges of implementing a risk-stratified approach to postnatal prophylaxis, depending on the feasibility and safety of the regimen. One strategy would be to use potent 3-drug antiretroviral regimens that could serve dual roles as "presumptive treatment" for infants with in-utero HIV infection while providing a high level of protection for those at high risk of intrapartum/early postnatal acquisition. However, using 3-drug antiretroviral regimens as prophylaxis for infants at low risk would be resource intensive and would place those infants at potential additional risk of toxicity. It is possible that some agents in the pipeline, such as broadly neutralizing antibodies, could be both highly potent and safe enough to be used as routine postnatal prophylaxis for all infants at birth and throughout breastfeeding, independent of the risk of HIV acquisition. Arguments in favor of and against maintaining risk-based approaches to postnatal prophylaxis are summarized in Table 1. SHOULD ALL INFANTS RECEIVE POSTNATAL PROPHYLAXIS WHILE BREASTFEEDING? There are also many questions about the added value of providing prophylaxis to infants during breastfeeding. While current WHO guidelines recommend breastfeeding without any prophylaxis after completion of the perinatal regimen if a mother is on ART and virally suppressed, sustaining complete adherence throughout breastfeeding can be difficult, and episodes of viremia are not uncommon. 5,48,49
Prophylaxis to the infant could be protective if maternal viral suppression is not sustained, but, to date, there is no evidence of the added value of infant prophylaxis when the mother is on treatment, nor is it clear how high adherence to this strategy would be when mothers are poorly adherent to their own regimen. Particularly with increasing interest in breastfeeding in high-income settings, the role of infant prophylaxis during breastfeeding is being questioned. In many settings, infants are being prescribed prolonged antiretroviral drug regimens during breastfeeding using either a risk-based approach (when maternal viremia is detected) or a "treat all" approach of prophylaxis to all breastfeeding infants throughout the period of exposure. Arguments for and against infant prophylaxis during breastfeeding are summarized in Table 2.
TABLE 1. Arguments in Favor of and Against Maintaining Risk-Based Approaches to Postnatal Prophylaxis
In favor: Risk of HIV acquisition is not uniform among exposed infants; different approaches are needed to address different scenarios; a patient-centered approach tailors the response to the individual infant. Against: Risk may become more uniform (and low) in the near future, with rapid scale-up of more potent, efficacious, and tolerable maternal treatment with dolutegravir.
In favor: Low-risk infants avoid unnecessary antiretroviral drug exposure and the associated potential toxicities. Against: A low rate of transmission persists even among low-risk infants, suggesting potential benefit from additional agents for all exposed infants.
In favor: High-risk infants benefit from more aggressive management with multiple drugs/agents. Against: No evidence exists to support the efficacy of multiple-drug perinatal prophylaxis when mothers are on effective treatment, with studies performed in the era of dolutegravir-based treatment.
In favor: Stratification aligns the risk (toxicity):benefit (prophylaxis efficacy) of approaches with the transmission risk. Against: Risk is difficult to assess and dynamic. Perinatal risk assessment depends on testing and medical records that are not always available. Over the duration of breastfeeding, individual maternal risk can change and can be difficult to assess without frequent visits and viral load testing.
In favor: Stratified approaches optimize health system resource use, aligning the cost of more intensive regimens with the target population that will derive the most benefit. Against: Risk assessment adds complexity and is itself resource intensive, requiring testing and visits for mothers. It can be challenging for health systems and clinics to stock and implement multiple regimens for infants.
WHAT ARE POTENTIAL APPROACHES TO POSTNATAL PROPHYLAXIS IN THE NEAR AND FAR FUTURE? There are several antiretroviral drugs currently available for the treatment of infants with HIV and others in the pipeline that hold potential for postnatal prophylaxis. 50 We summarize these agents in Table 3 and depict potential strategies for employing them in Figure 3. Currently Available Options Dolutegravir is a highly potent antiretroviral drug that may play an important role in postnatal prevention. 51 While dosing in neonates is under study, the dolutegravir once-daily dispersible tablet is easy to administer and well tolerated in children aged 4 weeks and older. 52
It could be used as single-drug prophylaxis in the perinatal period for infants at low risk of perinatal transmission and throughout breastfeeding, but such a strategy would require assessing the risk of selecting for dolutegravir resistance in the event of breakthrough infections. 54,55 Lamivudine could be paired with dolutegravir for a higher degree of antiviral activity and to potentially protect against selection of resistance mutations in those who acquire HIV. Abacavir is another agent that has a long record for treatment and safety in children; recent data support dosing in full-term neonates aged younger than 4 weeks. 56 The combination of lamivudine and abacavir with dolutegravir could be used perinatally for high-risk situations. This combination is already used for treating children globally, increasing provider comfort and easing supply chain issues. Dolutegravir alone or paired with lamivudine could be used for prolonged postnatal prophylaxis during breastfeeding. Lopinavir/ritonavir has well-established efficacy for both treatment and prevention of HIV during infancy 54,57 and could also be utilized in combination regimens as presumptive treatment/postnatal prophylaxis in the perinatal period for high-risk situations. 58 Given the challenging pharmacokinetics of lopinavir/ritonavir and dolutegravir in premature infants, it is likely that nevirapine, lamivudine, and zidovudine will remain the best current option for premature neonates until the pharmacokinetics and safety of newer drugs are defined for that population. Furthermore, all of the currently available options require daily oral administration throughout the period of exposure, posing substantial adherence challenges as well as the potential for inadvertent disclosure of maternal HIV status.
TABLE 2. Rationale for and Against Whether All Infants Should Receive Postnatal Prophylaxis While Breastfeeding
In favor: A large portion (50%) of vertical transmission currently occurs during the breastfeeding period. Against: Studies have not shown that adding infant prophylaxis to effective maternal treatment further reduces transmission risk.
In favor: Risk of transmission throughout breastfeeding is dynamic, with maternal viremia difficult to monitor or predict; maternal viremia during breastfeeding is common even among mothers who maintain suppression during pregnancy. Against: More effective oral treatment with dolutegravir and new long-acting formulations offer the prospect of unprecedented coverage and durability of virologic suppression in breastfeeding women.
In favor: Maternal adherence to treatment is difficult to sustain throughout the breastfeeding period; approaches to support nonadherent women to achieve viral suppression and to predict lapses in adherence are inadequate. Against: Predictors of maternal nonadherence have been identified (including younger age, new HIV diagnosis, late presentation to care, and nondisclosure) and can be used to target additional prevention measures.
In favor: Infants deserve resources and interventions that offer direct protection and do not rely on maternal treatment. Against: Limited resources should focus on optimizing maternal adherence and access to good care.
In favor: Routine infant care can serve as a platform to maintain infants on prophylaxis throughout breastfeeding. Against: It is difficult to maintain infant prophylaxis over long periods of time; there is significant loss to follow-up by 1 year of life.
In favor: New injectable and long-acting formulations limit the visibility of infants receiving prophylaxis and could reduce concerns about stigma.
Against: Providing prophylaxis to infants raises issues of disclosure of maternal infection status.
In favor: Many mothers fall out of care; thus, interventions that do not depend on maternal clinic attendance are needed. Against: New point-of-care viral load testing will make monitoring of mothers easier.
In favor: Simplified, safer options that have potential for greater efficacy for postnatal prophylaxis are in development. Against: Addressing underlying drivers of maternal treatment failure will benefit both the infant and the mother.
Options in Development In the development pipeline for treating HIV, there are several long-acting agents and formulations that are very appealing as prophylaxis for infants. Agents that are injectable, either as intramuscular (e.g., cabotegravir) or subcutaneous (e.g., lenacapavir [GS6207]) formulations, could surmount the challenges of adherence that arise from administering daily oral medications to infants. However, it is likely that the dosing intervals obtained in adults will be difficult to achieve in the neonatal period, given the changes in clearance and growth that occur over the first weeks and months of life. Given their distinct clearance mechanisms and safety, broadly neutralizing antibodies could be ideal and potentially cost-effective if effective combination regimens were to be identified that could be used universally or in high-risk settings. 59 In future studies, all of these novel agents and approaches would need to be compared to the standard of care and to the approach of no postnatal prophylaxis at all for infants at low risk of transmission. CONCLUSION Until all the "leaks" in coverage and treatment failures among mothers with HIV are addressed, postnatal prophylaxis will continue to play an essential role in global efforts to eliminate new pediatric HIV infections and maximize HIV-free survival. Several postnatal prophylaxis strategies have the potential to provide more effective and feasible options for infants and their mothers. In the short term, evaluating universal and risk-stratified approaches that combine legacy and novel antiretroviral drugs that are also used for treatment and passive immunization approaches appears to be the most feasible and expeditious route to advance beyond legacy postnatal prophylaxis regimens. These new strategies will require rapid investigation of the most appropriate dosing in preterm and term neonates with pharmacokinetic modeling and studies. In the long term, pipeline products that allow for less frequent administration open the door to a more integrated approach where a single well-tolerated and effective strategy could be administered across the risk spectrum perinatally and postnatally. These options will require building on the evidence generated in the adult population, clearly defining preferred product characteristics, and actively investigating novel molecules in neonates and infants. 32 Close collaboration between researchers, community representatives, industry, regulators, and policymakers will be the critical ingredient to ensure HIV-free survival for all infants with HIV exposure.
a For premature babies, zidovudine and nevirapine should be used.
FIGURE 1. Current Algorithm for HIV Transmission Risk Stratification a
FIGURE 2. Current Postnatal Prophylaxis Guidance for Infants at High Risk of HIV Transmission a
FIGURE 3. Novel Approaches to Infant Postnatal Prophylaxis
TABLE 3. Agents With Potential Novel Roles as Postnatal Prophylaxis
2023-04-02T15:20:19.074Z
2023-03-31T00:00:00.000
{ "year": 2023, "sha1": "16fc41a555d810445970a9cfe16fed160a77102b", "oa_license": "CCBY", "oa_url": "https://www.ghspjournal.org/content/ghsp/early/2023/03/31/GHSP-D-22-00401.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8f4b83385d0fa3f3c8cfaa9b00dfcb1c3c9aa31b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
218680100
pes2o/s2orc
v3-fos-license
Performance of Sesamia nonagrioides on cultivated and wild host plants: Implications for Bt maize resistance management Abstract BACKGROUND Sesamia nonagrioides is an important maize pest in the Mediterranean basin that is effectively controlled by Cry1Ab-expressing maize (Bt maize). The continued cultivation of Bt maize in Spain exerts high selection pressure on the target pests, which could lead to the development of resistance. Provision of refuges of non-Bt plants is an essential component in the high-dose/refuge (HDR) strategy to delay resistance evolution. Here we analyze the suitability of cultivated (rice and sorghum) and wild (Johnsongrass, cattail, common reed and giant reed) plants, reported as hosts of S. nonagrioides, for larval development and oviposition of this pest compared to maize, and we evaluate their potential role in delaying resistance development to Bt maize. RESULTS Bioassays conducted with plant pieces or whole plants showed that the larval cycle could only be completed in the three cultivated plants and in Johnsongrass. Females showed a strong preference for ovipositing on maize in comparison with sorghum or rice. Although young larvae consumed more sorghum than maize in two-choice bioassays, both larvae and adults had a better performance (shorter larval period and higher pupal weight, fecundity and fertility) when larvae fed on maize throughout their larval stage than when they fed on sorghum or rice. CONCLUSION None of the alternative hosts of S. nonagrioides tested here should be considered as natural unstructured refuges within the HDR strategy for Bt maize and this pest in Spain, as some of the necessary requirements to fulfill this strategy would not be met. © 2020 The Authors. Pest Management Science published by John Wiley & Sons Ltd on behalf of Society of Chemical Industry. INTRODUCTION The Mediterranean corn borer, Sesamia nonagrioides Lefèbvre (Lepidoptera: Noctuidae), is the most damaging pest of maize in the Mediterranean basin. 1 First instar larvae of this species bore into maize plants shortly after the eggs hatch and feed inside the stalks for the rest of the larval stage, 2 which greatly limits the efficacy of chemical sprays to control them. 3 The introduction in Spain of genetically modified (GM) maize varieties that express the Bacillus thuringiensis toxin Cry1Ab (Bt maize) in the late 1990s marked a breakthrough in the management of this pest, given their high effectiveness in its control. This led to the rapid implementation of this technology in Spain, especially in areas of high infestation like the Ebro Valley (northeast of Spain), where yield losses were greatest, 4,5 so that about 60% of all the maize grown in this area in the last 5 years is Bt. 6,7 The intensive and continuous cultivation of Bt maize exerts high selective pressure on the pests it targets and could lead to resistance evolution, considered one of the main threats to its long-term sustainability. 8,9 The strategy known as high-dose/refuge (HDR), adopted in the European Union (EU) to delay resistance evolution to Bt crops, involves (i) using crop varieties that produce high concentrations of the Bt toxin, capable of killing all or nearly all individuals heterozygous for resistance, 10 and (ii) setting refuges close to the Bt fields.
In principle, refuges can be non-Bt varieties of the Bt crop or any other plant species, as long as they produce large enough pest populations consisting of viable, susceptible insects that could mate with the potentially resistant homozygous individuals that might survive in the Bt crop. 11-13 Structured refuges, i.e. allotting an area near the Bt field to growing varieties of the same crop not expressing the Bt trait, are the most commonly used type of refuge. For instance, their adoption is mandatory in the EU, where they have proved to successfully delay resistance development to Bt maize in the target pests S. nonagrioides and Ostrinia nubilalis Hübner, 14,15 and also in the USA for the cultivation of Bt maize expressing one insecticidal protein and targeting Lepidopteran pests. 16 Unstructured refuges, on the other hand, consider alternative host plants of the target pest, such as other crops and weeds that grow close to the Bt crop, as the source of susceptible individuals. 17 These unstructured refuges have been shown to be effective in delaying the evolution of resistance to Bt cotton in different polyphagous pests in China and the USA, 18,19 and their use is approved as part of the resistance management program of pyramided Bt cotton in some areas of the USA. 16 Moreover, some authors have argued that in some cases using natural refuges could reduce and sometimes even suppress the need to plant structured refuges composed of non-Bt varieties. 20 The potential of alternative plant species used by S. nonagrioides to serve as unstructured refuges in Bt maize resistance management has not been studied to date. To evaluate whether alternative hosts could be useful in insect resistance management (IRM) plans it is key to evaluate the performance of this noctuid pest on these plant species, as well as its preference between them and its main host, maize, to learn whether large and healthy populations of the pest could build up in the alternative hosts. In this context, determining the free amino acid and soluble sugars content in the tested plant species could provide important information, since these parameters have been reported to influence oviposition preference and to stimulate larval feeding in a range of insect species, including noctuids. 21,22 The inadequate or poor implementation of IRM strategies to delay resistance evolution has led several insect pests to develop resistance to a range of Bt crops expressing different Bt toxins. 23 Sesamia nonagrioides, however, has remained susceptible to Cry1Ab-expressing maize varieties, which have been cultivated in Spain for over 15 years, 15 even though a resistance allele to Bt maize has been reported in a population from the Ebro Valley. 24 A resistance evolution model for S. nonagrioides was recently developed considering more than 20 variables known to affect the rate of resistance development, including aspects of the pest biology and genetics as well as agronomic practices in that area. 4 This model considered S. nonagrioides a functionally monophagous species on maize, its primary host, in the Ebro Valley. Nevertheless, despite its high level of specialization in maize, S. nonagrioides has shown a certain degree of polyphagy, since it has been recorded in a wide range of cultivated and wild host species of the Poaceae, Cyperaceae and Typhaceae families across its distribution range. 25,26 Regarding cultivated plants, S. nonagrioides is known to be a major pest of sorghum 27,28 and rice. 29,30
The degree of polyphagy has usually been considered an important factor for resistance development, although some studies have not found a clear causal relationship between both evolutionary phenomena. 31 In the case of Bt crops, resistance evolution is expected to occur faster in monophagous pests that feed on the modified plant species, which are subjected to a higher selective pressure in comparison with polyphagous pests. 32,33 Gaining knowledge about the range of hosts that can be used by S. nonagrioides and its preference for them is essential for the management of this corn borer in Bt maize. Furthermore, the use of different types of refuges (crop vs non-crop) is also important in the context of IRM strategies, as changes in agronomic practices affecting the availability of these hosts may influence the effectiveness of IRM programs. 34 Thus, the aim of this study was to investigate whether a range of cultivated and wild plants, reported to be potential hosts of S. nonagrioides, are suitable for the larval development and oviposition of this pest, as compared to maize, its main host. The results obtained will shed light on the possibility of considering these plants as unstructured refuges for Bt maize within the HDR strategy, which in turn will help to improve the management of resistance of S. nonagrioides to this GM crop and contribute to fine-tune the S. nonagrioides resistance evolution model. MATERIALS AND METHODS Plant material We tested the suitability for oviposition and larval performance of S. nonagrioides of three cultivated plants [Zea mays (maize), Oryza sativa (rice) and Sorghum bicolor (sorghum)] and four wild host plants that are frequently found within maize fields [the weed Sorghum halepense (Johnsongrass)] or close to them [Typha domingensis (cattail), Phragmites australis (common reed) and Arundo donax (giant reed)]. All species belong to the family Poaceae, except cattail, which belongs to the family Typhaceae. Maize (var. DKC4795) and sorghum (var. Express Rojo) plants were grown in potting soil (Compo Sana Universal, CompoAgricultura S.L., Barcelona, Spain), whereas rice plants (var. Gleva) were grown on a mixture of 63.5% peat, 36.5% vermiculite and 0.63 g of CaCO3 per liter of soil. Johnsongrass (collected in San Fernando de Henares, Madrid, Spain) and giant reed (Instituto Nacional de Tecnología Agraria y Alimentaria, Madrid, Spain) were also grown in potting soil, whereas a mixture of 50% potting soil and 50% river sand was used to grow cattail (collected in the stream Pantueña, Madrid, Spain) and common reed (Ecodena S.L., Sevilla, Spain). All plants were grown in 25 cm diameter × 24 cm high pots, and maintained in a greenhouse at 25 ± 3°C, relative humidity of 75 ± 10% and 16:8 (L:D) photoperiod. Insect rearing Insects used in the assays came from a laboratory colony of S. nonagrioides of the Centro de Investigaciones Biológicas (Madrid, Spain) reared on a meridic diet, as described in González-Núñez et al. 3 The oviposition cages consisted of a 12.5 cm diameter × 12 cm high pot with 8-10 V3 maize seedlings, enclosed by a 12 cm diameter × 30 cm high see-through and colorless Plexiglas cylinder covered on top by a mesh that allowed ventilation, and 8-10 pairs of adults were placed inside. After 7 days, egg clusters were collected from the plants and placed on moistened filter paper for egg hatching. The whole rearing process took place in growth chambers (Sanyo MLR-350 H, Sanyo, Japan) at 25 ± 0.3°C and 16:8 (L:D) photoperiod.
Performance of S. nonagrioides on different host plants 2.3.1 Preliminary oviposition bioassay A preliminary no-choice oviposition test using 10-20 replicates per plant species, each of them consisting of three confined pairs of S. nonagrioides per arena, was performed to confirm that the seven hosts selected according to the available bibliography were suitable for oviposition in the conditions used for this study. The results showed that females laid a significant number of fertile eggs in all the plants (on average, more than 400 eggs were recovered per replicate, data not shown). Therefore, larval performance was assessed on all seven hosts in two ways: by using excised parts of leaves and stems and by using whole plants. Performance on excised parts of plants Individualized neonates (<24 h) were confined in 4 cm diameter × 2 cm high plastic boxes and fed ad libitum with fresh pieces of leaves and stems of each plant species. All boxes were examined daily, and the dates of molting, pupation and adult emergence were recorded. Between 60 and 144 larvae were used for each plant species. A control to ensure that the population was in optimal condition was set up with 102 larvae fed with the same meridic diet used to rear the laboratory population. The length of the larval cycle was determined by counting the number of days it took each larva to reach the pupal stage from the start of the experiment, and the longevity of the adults was calculated as the number of days between emergence and death of the adult. Pupae were weighed 24 h after pupation. To evaluate dietary effects on adult performance, individual pairs of adults were placed in arenas consisting of 6 cm diameter × 6 cm high pots with three V3 maize seedlings confined by a ventilated, see-through plastic cylinder (5.4 cm diameter × 15.5 cm high) for mating and oviposition. Egg clusters were collected 7 days later and the number of eggs determined using a stereomicroscope (Leica M125, Leica Microsystems, Germany). Eggs were placed on top of moistened filter paper in plastic boxes for hatching and their viability recorded. These assays were carried out in growth chambers at 25 ± 0.3°C and 16:8 (L:D) photoperiod. Additionally, the standardized growth index (SGI) was estimated for each larva of each host species tested 35: SGI = pupal weight (mg)/length of larval period (days). Performance on whole plants Six neonates (<24 h) of S. nonagrioides were placed on the leaf sheaths of leaves 3, 4 and 5 (two larvae per leaf), with the exception of P. australis, in which three neonates were used (one per leaf) due to the narrow diameter of the stem in this species. The main stem of rice plants was considered for infestation. Plants were then confined within a ventilated methacrylate cylinder and watered regularly during the running time of the experiment. Between 25 and 27 days after infestation the plants were dissected and the larval recovery rate, measured as the percentage of initial larvae that were recovered at the end of the bioassay, was recorded in each plant, as well as the larval weight and larval stage (L1-L6 for first to sixth instars, respectively) of the recovered larvae. The assays were performed with V6-V8 plants of all plant species except rice, which was used when the plants reached the panicle formation phase, and they took place in a greenhouse, using 7-22 plants per species, at 25 ± 3°C and 16:8 (L:D) photoperiod. Larval feeding preference Feeding preference was evaluated by two-choice and no-choice bioassays.
Since maize is the primary host of S. nonagrioides in Spain, this species was used as the reference host for comparison with the other two cultivated hosts, rice and sorghum. The three species were planted at the same time and offered to S. nonagrioides females when maize plants reached the V8 phenological stage. These experiments were performed using leaf disks as an appropriate proxy to assess feeding preferences of S. nonagrioides larvae. Two-choice assays Two-choice assays were conducted to examine feeding preferences of S. nonagrioides larvae between maize and rice or sorghum. Given that this corn borer is known to feed on the three tested species, feeding preference was considered as a significantly higher consumption of one of the two species used in the assay. The choice arena consisted of a Petri dish (60 mm diameter × 5 mm high) coated on its bottom with a 2.5% agar solution. Leaf disks (8 mm diameter) containing the mid-rib were excised from rice, maize and sorghum plants with a cork borer and fitted into the holes punched in the agar layer, alternating leaf disks of the two species (maize-rice or maize-sorghum) in the agar arena. A recently molted (<24 h) second instar larva weighing 0.50-1.25 mg (mean ± 1SD) was placed in the center of the dish after a 6-h starvation period. All dishes were sealed and placed in a growth chamber at 25 ± 0.3°C and complete darkness for the duration of the assay. Twenty replicates of each combination were evaluated. The experiment concluded when larvae in an external control that only contained maize disks had consumed approximately 50% of the plant material. Both the initial and final fresh weights of larvae and leaf disks, measured separately for each plant species, were recorded. Larvae were then frozen at −20°C and afterwards dried in an oven at 60°C for 48 h to estimate their dry weight. Uneaten leaf disks were cleaned of frass and oven-dried following the same procedure. The preference index proposed by Kogan and Goeden (1970) 36 was calculated as a measure of larval preference on a dry weight (DW) basis: preference index (C) = 2A/(M + A), where A is the consumption of the alternative host (%DW) and M is the consumption of the primary host (%DW). This index can range between 0 and 2, so that C = 1 indicates that larvae do not feed preferentially on either plant, whereas values lower than 1 denote a preference for the primary host and values higher than 1 indicate larvae feed preferentially on the alternative host. Additionally, the nutritional indices described by Farrar et al. 37 were calculated. The relative consumption rate (RCR) was estimated separately for the two plant species in each two-choice assay as RCR = (DWi − DWf)/(LWi × D), where DWi is the initial dry weight of leaf disks (mg), DWf is the final dry weight of leaf disks (mg), LWi is the initial larval dry weight (mg) and D is the duration of the assay (days). The initial dry weight of leaf disks was calculated from their fresh weight using an equation that relates both parameters, obtained for each plant species by weighing 10 batches of six freshly excised leaf disks and weighing them again after 48 h at 60°C. Similarly, LWi was calculated using an equation obtained by measuring the fresh and dry weights of 373 L2 larvae in the same weight range as those used in the assays. All weights were determined using an analytical balance (Mettler-Toledo AX205, Mettler-Toledo International Inc., Columbus, OH, USA).
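Since the preference and consumption indices above, together with the no-choice indices RGR and ECI defined in the next subsection, are simple ratios of dry weights, they can be computed in a few lines. The following Python sketch consolidates them for illustration only: all input values are hypothetical, and the linear fresh-to-dry-weight conversion is an assumption standing in for the unspecified per-species regressions fitted in the study.

```python
# Illustrative calculation of the feeding-assay indices described above.
# All numbers are hypothetical; the linear fresh-to-dry conversion is an
# assumed stand-in for the per-species regressions fitted in the study.

def dry_from_fresh(fresh_mg, slope, intercept):
    """Estimate dry weight (mg) from fresh weight via a fitted equation."""
    return slope * fresh_mg + intercept

def preference_index(pct_alt, pct_primary):
    """Kogan & Goeden index C = 2A/(M + A), with consumption as %DW.
    C < 1: preference for the primary host (maize); C > 1: preference
    for the alternative host; C = 1: no preference."""
    return 2.0 * pct_alt / (pct_primary + pct_alt)

def rcr(dw_i, dw_f, lw_i, days):
    """Relative consumption rate: RCR = (DWi - DWf) / (LWi x D)."""
    return (dw_i - dw_f) / (lw_i * days)

def rgr(lw_i, lw_f, days):
    """Relative growth rate (no-choice assays): RGR = (LWf - LWi) / (LWi x D)."""
    return (lw_f - lw_i) / (lw_i * days)

def eci(rgr_val, rcr_val):
    """Efficiency of conversion of ingested food: ECI (%) = RGR / RCR x 100."""
    return 100.0 * rgr_val / rcr_val

# Hypothetical maize-sorghum two-choice replicate
dw_i_maize = dry_from_fresh(14.0, slope=0.18, intercept=0.05)
dw_i_sorghum = dry_from_fresh(13.5, slope=0.21, intercept=0.04)
dw_f_maize, dw_f_sorghum = 1.9, 1.6      # dry weight of uneaten disks (mg)
lw_i, days = 0.15, 2.0                   # initial larval dry weight (mg), assay days

pct_maize = 100.0 * (dw_i_maize - dw_f_maize) / dw_i_maize
pct_sorghum = 100.0 * (dw_i_sorghum - dw_f_sorghum) / dw_i_sorghum
print(f"C = {preference_index(pct_sorghum, pct_maize):.2f}")  # > 1: sorghum preferred
print(f"RCR maize = {rcr(dw_i_maize, dw_f_maize, lw_i, days):.2f} mg/mg/day")
print(f"RCR sorghum = {rcr(dw_i_sorghum, dw_f_sorghum, lw_i, days):.2f} mg/mg/day")

# Hypothetical no-choice replicate on sorghum
growth = rgr(lw_i=0.15, lw_f=0.55, days=3.0)
consumption = rcr(dw_i=2.9, dw_f=1.7, lw_i=0.15, days=3.0)
print(f"RGR = {growth:.2f}, RCR = {consumption:.2f}, ECI = {eci(growth, consumption):.1f}%")
```

On the hypothetical replicate above, C comes out greater than 1, which merely illustrates the direction of interpretation, not the study's actual measurements.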
No-choice assays These tests were performed similarly to two-choice assays, but all disks in the agar arena corresponded to the same plant species. Seventeen replicates were tested for maize, 16 for sorghum and eight for rice. In this case, the experiment concluded when larvae in the maize assay had consumed approximately 75% of leaf disks. Three nutritional indices were estimated: the RCR (described above), the relative growth rate (RGR) and the efficiency of conversion index (ECI), 37 so that RGR = (LWf − LWi)/(LWi × D), where LWf is the final larval dry weight (mg), LWi is the initial larval dry weight (mg) and D is the duration of the assay (days), and ECI (%) = (RGR/RCR) × 100. Oviposition preference on cultivated hosts Two-choice assays were carried out to determine the oviposition preference of females between the primary cultivated host (maize) and rice or sorghum. For each replicate, two males and a female were confined in a choice arena consisting of a 25 cm diameter × 24 cm high pot with one maize plant and either a rice or a sorghum plant, covered with a mosquito net with a diameter of 56 cm and a height of 230 cm, attached by a cable to the roof of the greenhouse, which gave the moths enough space to move freely. All host species were sown at the same time and exposed to S. nonagrioides adults when maize plants reached the V8 phenological stage. After 7 days, all the adults were recovered and the plants were examined. Fecundity was estimated as the total number of eggs laid per female and plant, and egg viability was estimated a week later as described in section 2.3. Female moths were dissected to check their mating status, and only replicates in which a mated female was recovered were considered valid. Twenty replicates of each option (maize-rice or maize-sorghum) were set up. These assays took place in the greenhouse at 25 ± 3°C and 16:8 (L:D) photoperiod. Free amino acid and free sugar content of the cultivated hosts The free amino acid and free sugar content was assessed in the maize, sorghum and rice leaves used in the feeding assays to determine whether they could have an effect on the choice of plants, since these compounds have been shown to influence insect preference and performance between hosts in some lepidopteran species. 38,39 The extraction method followed to estimate the quantity of free amino acids was that used in Ximenez-Embún et al., 40 based on the technique described in Hacham et al. 41 Three samples (20-30 leaf disks/sample) of each plant species were frozen at −80°C and ground using a mortar to obtain approximately 100 mg of leaf material per sample, whereupon 600 μL of water:chloroform:methanol (3:5:12 v/v/v) extraction buffer was added to each sample. This was followed by a 4-min centrifugation at 4°C and 14 000 rpm, after which the supernatant was transferred to a new tube and the pellet was resuspended in 600 μL of extraction buffer and centrifuged again. The supernatant was pooled with that obtained in the previous centrifugation and 300 μL of chloroform and 450 μL of double-distilled water were added to each sample. After a final 2-min centrifugation of the samples at the same temperature and speed, the top layer of the solution containing the amino acids was transferred to a new tube and placed in a SpeedVac Concentrator Savant SVC-100H (ThermoFisher Scientific, Wilmington, DE, USA) overnight.
When all the solvent was evaporated, the samples were taken to the Protein Chemistry Service at the CIB (CSIC, Madrid), where their amino acid content was determined using a Biochrom 30 Amino Acid Analyser (Biochrom, Cambridge, UK). For this purpose, the samples were first resuspended in 100 μL of sodium citrate loading buffer at pH 2.2 and 10 μL of each sample was injected into the analyzer. The free amino acid content in each sample was estimated on a dry weight basis. Determination of the plants' free sugar content was performed on dry plant material. Leaf disks excised from the leaves used in the assays were dried at 75°C for 48 h and then ground to obtain a fine powder. Three samples (≈3 mg of leaf powder per sample) were considered per plant species. Each sample was homogenized in 650 μL of 95% ethanol and heated at 80°C for 20 min, followed by centrifugation at 10 000 rpm for 10 min and collection of the supernatant in a tube. This process was repeated two more times. The supernatants of each sample were pooled and divided into two 750 μL replicates, which were dried in a SpeedVac Concentrator Savant SVC-100H for approximately 12 h. Each sample was then resuspended in 500 μL of double-distilled water and 1 mL of 0.2% anthrone in 95% sulfuric acid (v/v) was added to each of them. After 15 min of incubation at 90°C, the absorbance of each sample at 630 nm was measured in a VERSAmax microplate reader (Molecular Devices Corp., Sunnyvale, USA). Statistical analysis Prior to the statistical analysis of the results, normality (Kolmogorov-Smirnov test) and homoscedasticity (Levene test) were checked for all variables, and those that did not comply with these requirements were transformed to arcsin√x or log(x + 1) for percentages or continuous variables, respectively. A significance level of α = 0.05 was considered, and all analyses were performed using the statistical software SPSS (SPSS Statistics 24.0, IBM, USA). In larval development assays using parts of plants, one-way ANOVA followed by either a Dunnett's t-test (when variances were homogeneous) or a Dunnett's T3 test (when variances were not homogeneous) 42 was carried out to study whether the length of the larval cycle, pupal weight, adult longevity and SGI in the different plant species differed significantly from the values recorded in maize. A Student's t-test was carried out to check for differences between host species in fecundity and fertility for adult pairs resulting from larvae fed on maize and sorghum, the only two species in which adults could be set up for mating and oviposition. In assays that considered whole plants, one-way ANOVA followed by Dunnett's tests was used as described above to study whether the larval recovery rate and mean weight per instar differed in the alternative hosts in comparison with maize. Differences in fecundity and fertility between maize and sorghum or rice were analyzed by paired Student's t-tests, comparing the values resulting from subtracting the value of the variable measured in the alternative host from the value measured in maize. Likewise, differences in RCR between species in larval feeding two-choice assays were analyzed with paired Student's t-tests following the same procedure. In no-choice larval feeding assays, one-way ANOVA followed by Dunnett's t- or T3 tests was performed to determine if host species had a significant effect on RCR, RGR and ECI.
Differences between alternative hosts and maize in their free sugar and total free amino acid content were analyzed with one-way ANOVA followed by Dunnett's t-tests. RESULTS 3.1 Performance of S. nonagrioides on different host plants 3.1.1 Performance on excised parts of plants Sesamia nonagrioides only reached the adult stage when larvae were fed with pieces of three out of the seven tested plant species: maize, sorghum and Johnsongrass, with survival rates to the adult stage of 63.7%, 23.6% and 9.9%, respectively. Larvae fed on common reed and cattail died mostly during the early larval stages, while larvae fed on rice and giant reed died at more advanced larval stages, so that no pupae were obtained in any of these four species (Fig. 1). Larvae feeding on maize usually underwent five molts before they completed their larval stage, whereas supernumerary molts were common in individuals fed on the alternative hosts. A high survival rate to adulthood was recorded in larvae fed on a meridic diet (91.2%), indicating that the population was in optimal condition. Focusing on the three plant species where S. nonagrioides completed its development (maize, sorghum and Johnsongrass), it had a better performance when fed with maize than with any one of the others, as indicated by the significantly shorter duration of larval development and the higher adult longevity, SGI and pupal weight observed in larvae fed on maize (Table 1). Average fecundity and fertility were also higher in adult pairs derived from larvae fed with maize. Since females and males emerged at different times, no couples could be set up for mating and oviposition in adults resulting from larvae fed with Johnsongrass (Table 1). Performance on whole plants Individuals of S. nonagrioides were recovered 25-27 days post infestation in six out of the seven plant species tested, whereas no larvae were recovered from giant reed. The recovery rate of larvae at the end of the bioassay was very low in common reed and cattail (5.6% and 5.0%, respectively), so these plants were excluded from the statistical analyses. When the experiment was stopped, nearly 4 weeks post infestation, all larvae were expected to be at least L5, since the average duration of the larval stage in S. nonagrioides larvae reared at 25°C and a 16:8 photoperiod was estimated at 32.5 days. 43 However, only in maize were all the recovered larvae L5 or larger (22.5% L5, 57.5% L6 and 20.0% pupae). Nearly 69% of the larvae recovered from sorghum were L2-L4, and around 31% were L5-L6 larvae. Likewise, 83.1% of the larvae recovered from Johnsongrass were L2-L4 larvae, and 16.9% were L5-L6 larvae. Finally, 42.5% of the individuals recovered from rice plants were L3-L4 larvae, and 57.5% were L5-pupae (Fig. 2). Average larval weight was significantly higher in L5 larvae recovered from maize than in those recovered from sorghum, rice and Johnsongrass. Sixth instar larvae and pupae were also heavier when recovered from maize plants than those from rice plants, whereas the difference was not significant in the case of L6 larvae recovered from sorghum (P = 0.054) (Fig. 3). Larval feeding preference 3.2.1 Two-choice assays When larvae of S. nonagrioides could choose between maize and rice or sorghum, they exhibited different feeding preferences. Larvae chose to feed on maize rather than on rice when they were provided with both species, as shown by the significantly higher value of the relative consumption rate in maize (t = 3.78, P = 0.001).
However, this index had a significantly higher value in sorghum when larvae could choose between this species and maize (t = −2.33, P = 0.031) (Table 2). These results are supported by the values of the feeding preference index (C), which indicated that S. nonagrioides larvae preferred maize to rice in rice-maize assays (0.80 ± 0.09), whereas sorghum was the preferred foliar tissue in maize-sorghum assays (1.20 ± 0.10). (Table 1 notes: thirty-four adults emerged from larvae fed on sorghum, of which 10 couples were set up for mating and oviposition; eleven adults emerged from larvae fed on Johnsongrass, but female and male adults emerged at different times, so no couples could be set up; SGI, standardized growth index.) No-choice assays The RCR of leaf disks was significantly higher in maize compared with sorghum and rice. However, the RGR of larvae fed on sorghum was higher than in those fed on maize. This indicates that larvae convert ingested sorghum to biomass more efficiently than maize, resulting in a significantly higher ECI in sorghum (51.6 ± 8.3%) in comparison with maize (18.7 ± 1.0%). On the other hand, both RGR and ECI were higher in maize in comparison with rice (Table 3). Oviposition preference on cultivated hosts Females of S. nonagrioides showed a preference for ovipositing on maize plants in comparison with sorghum or rice, so that the average fecundity on maize was significantly higher than that recorded in sorghum or rice (15- and 14-fold, respectively). However, no significant differences were observed between hosts regarding fertility in maize-sorghum or maize-rice assays (Table 4). The average adult recovery rate per replicate was 80 ± 7% in maize-sorghum assays and 93 ± 3% in maize-rice assays. Figure 3. Larval and pupal weight per host species (mean ± SE) in S. nonagrioides recovered from infested plants. Common reed and cattail have been excluded because the recovery rate of larvae at the end of the bioassay was below 10%. No larvae were recovered from giant reed. *Significantly different from values obtained in maize (one-way ANOVA followed by Dunnett's t-test, P < 0.05). Free amino acid and free sugar content of the cultivated hosts Free sugar content was significantly higher in rice leaf tissue compared to maize, whereas no differences were observed between maize and sorghum. On the other hand, the total free amino acid content was significantly higher in both sorghum and rice plants than in maize (Table 5). DISCUSSION Sesamia nonagrioides was able to lay eggs on all seven plants studied under no-choice conditions. However, based on the results of the bioassays in which larvae were reared either on pieces of the plants or on whole plants, its larval cycle could only be completed in the three cultivated species tested (maize, sorghum and rice) as well as on the weed Johnsongrass, which shares similarities with sorghum, given that it is a hybrid of S. bicolor and S. propinquum. 44 Common reed, giant reed and cattail were low-quality hosts for larval development. Even though S. nonagrioides has been reported to feed on these species, all the studies reporting this behavior concerned African and Macaronesian populations, 45-48 so the differences observed between them and the Spanish one might be due to host specialization in S. nonagrioides populations from different areas in which host availability differs.
Different studies in noctuids have reported the important role of sensory and physical cues, such as surface texture or stem thickness, in the acceptability of a host species or plant part for oviposition. 49,50 In this vein, two other noctuid pests of maize, Busseola fusca and Mythimna unipuncta, have been observed to lay eggs on man-made structures that resemble the narrow slit used for ovipositor insertion in maize plants, emphasizing the important role of this kind of cue in eliciting oviposition in both species. 51,52 Similarly, under laboratory conditions females of S. nonagrioides have been observed to lay viable eggs in artificial structures that mimic the tight gap between the stem and the leaf sheath (personal observation), which is used by the females of this species to insert the ovipositor. 53 This suggests that, in the absence of its main host, S. nonagrioides females will lay eggs on a wide range of plant species, even when they are unsuitable for larval development, as already observed in this and other noctuid species. 54-56 Our results agree with the narrower larval feeding range often observed in lepidopteran species in comparison with their wider host acceptance for oviposition. 57 When females could choose between the primary cultivated host (maize) and rice or sorghum in two-choice bioassays, they showed a strong oviposition preference for maize, as evidenced by the more than 10 times higher fecundity recorded in this host compared with the other two species. Even though sorghum and maize plants are phylogenetically very close and they share remarkable similarities in their architecture, 58 the preference of S. nonagrioides females for laying eggs on maize rather than sorghum has been previously reported by Dimotsiou et al. 56 This is in line with the better performance of S. nonagrioides observed on maize, since larvae generally showed higher mortality, delayed developmental time, reduced growth, smaller larval and pupal size, shorter adult life span and reduced fecundity and fertility on the alternative hosts in comparison with maize in both larval performance assays. These results are consistent with the 'preference-performance' hypothesis that has been observed in several lepidopteran species, 59,60 which predicts that, generally, adults lay their eggs preferentially in hosts that are optimal for the development of their offspring. 61 The differential development observed depending on the host could lead to asynchronies in the biological cycles, which in turn would result in different times of emergence of the adults from different plant species in the field. This could have important implications in terms of resistance management, as the main function of the refuges is to provide susceptible adults that will emerge at the same time as the resistant adults that may emerge in Bt maize fields. Therefore, the developmental delay, together with the poorer quality of the adults produced on alternative hosts, would make these hosts a poor option as refuges for use within the HDR strategy. Sugars and amino acids detected by chemoreceptors have been reported to play a major role in discrimination between plants for oviposition in some lepidopteran species. 21,62,63 However, results are inconclusive when it comes to noctuid species. 22,64 Our study does not reveal an oviposition preference of S. nonagrioides
for any of the plants based on their free sugar content, since, even though no differences in this parameter were found between maize and sorghum leaves, adults laid significantly more eggs on maize. However, maize had a significantly lower total content of free amino acids in comparison with sorghum and rice, which could partially explain the differential oviposition preference between maize and these species. The preference for laying eggs on maize shown by S. nonagrioides did not match the feeding preference for sorghum over maize shown by second instar larvae in two-choice bioassays, as expressed by RCR values. This contradiction has also been observed in S. exigua, which preferred maize for oviposition and other plant species for larval feeding, 65 but it differs from results reported in other species, where female oviposition preference for different host plants was positively correlated with larval feeding preference. 21,66,67 A significantly higher content of free amino acids, which have been shown to stimulate feeding in other noctuid pests, 38,68 was detected in sorghum leaf tissue compared to maize. However, this would not be a major driver of S. nonagrioides larval preference and performance, since, even though rice contained higher levels of free amino acids than maize, larvae consumed more maize in maize-rice choice assays and showed higher rates of relative consumption, growth and efficiency of conversion in maize than in rice under no-choice conditions. Along the same lines, the results of choice and no-choice assays show that the higher content of free sugars recorded in rice leaves in comparison with maize or sorghum did not stimulate feeding in S. nonagrioides, in contrast to the phagostimulant effect of sugars reported in other insect species. 39,69 On the contrary, the low relative growth and efficiency of conversion rates observed in larvae fed with rice leaf disks suggest a low nutritional quality and digestibility of this tissue for S. nonagrioides. This might not prove applicable to rice stems, given that a small percentage of S. nonagrioides larvae that were placed as neonates in rice plants completed the larval cycle. Nevertheless, even though a high proportion of larvae was recovered from infested rice plants, these larvae were developmentally delayed and their growth had been restricted in comparison with larvae recovered from maize plants. Altogether, our results indicate that only three out of the six tested species were suitable hosts for S. nonagrioides, apart from maize. Nevertheless, none of the potential alternative hosts tested here should be used as natural unstructured refuges for Bt maize or considered within the HDR strategy. This is because even if these plant species are present and abundant near Bt maize fields in Spain or, in the case of Johnsongrass, within them, and they coincide with maize in both space and time, they do not comply with other requirements that must be met by unstructured refuges, that is, that they are able to host a large population of high-quality moths, and that no asynchrony exists between these moths and those emerging from Bt fields. 70-72 For some polyphagous target pests, natural refuges can be very effective if the host plants in the refuge can produce sufficient numbers of high-quality moths. 71 Thus, natural hosts have been shown to help delay resistance development to Bt cotton in China and the USA in the polyphagous and highly dispersive pests Helicoverpa armigera, H. zea and Heliothis virescens. 18,19,73
However, they have not been considered as effective refuges for Bt maize for Ostrinia nubilalis in France and the USA, or for several stem-boring pests of maize in Africa. 72,74,75 Based on our findings, the tested hosts would also not be suitable for S. nonagrioides, which in Spain is considered an oligophagous or even facultatively monophagous species on Z. mays. Nonetheless, it must be kept in mind that S. nonagrioides may use these plants as refuges for short periods of time under adverse circumstances, e.g. to escape from Bt maize. 76 The results reported in this study confirm the premises assumed in the resistance evolution model of S. nonagrioides to Bt maize in the Ebro Valley regarding the low number of wild or cultivated alternative host plants for this noctuid pest. 4 Furthermore, they suggest that unstructured refuges composed of the tested plant species would not help delay the development of resistance to Bt maize. Therefore, refuges for susceptible individuals should continue to be composed of non-Bt maize plants, and it is desirable that compliance with refuge requirements in maize-based agro-ecosystems increases from the 92% reported in 2017. 77
2020-05-19T13:02:29.557Z
2020-05-17T00:00:00.000
{ "year": 2020, "sha1": "197a877a3c5d0be32eae14775a627f074be370b0", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ps.5913", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "67233601912ccc741b2761e176e4e71cc1044673", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
55611868
pes2o/s2orc
v3-fos-license
Determining the knowledge of food safety and purchasing behavior of the consumers living in Turkey and Kazakhstan Complete and balanced nutrition with reliable food constitutes the basis of health and protective health services. Therefore, the current study was carried out to determine the food safety knowledge level and purchasing behavior of 668 consumers living in Turkey (n=348) and in Kazakhstan (n=320) and to compare the results. Consumers who volunteered for the research were given a face-to-face interview between March and September 2010. It was found that the purchasing behavior scores (14.43±2.56) and food safety knowledge scores (20.82±4.20) of the consumers living in Turkey were higher than those of the consumers living in Kazakhstan (11.84±2.92 and 14.74±3.86, respectively) and that the difference between the two countries was statistically significant (p<0.01). In addition, positive correlations were found between food safety knowledge and purchasing behavior (r=0.541, p<0.01), between age and purchasing behavior (r=0.325, p<0.01) and between age and food safety knowledge score (r=0.148, p<0.01). INTRODUCTION Food-borne diseases constitute a common public health problem at a global scale. Every year, millions of people worldwide die and many are hospitalized from foodborne diseases and illnesses as a result of consumption of contaminated food (Knight et al., 2003). World governments concentrate their efforts on improving food safety in order to promptly and properly respond to the increasing types and incidents of foodborne diseases. Food-borne infections are placed at the core of primary community health concerns by both advanced and developing countries of the world (Baş, 2004; Eren, 2007). While it is hard to predict the actual number of incidents of food-borne diseases, it is a known fact that many lives were lost to diarrhea caused by food- and water-borne microbiological agents, claiming around 1.8 million children during 1998 and 2.1 million people during 2000 in the developing world (except China). In industrial states of the world, on the other hand, it is stated that one individual in every group of three is affected by food-borne diseases each year, and almost 30% of the population in advanced countries present with food-borne diseases (Baş, 2004). In the US, approximately 76 million incidents of food-borne diseases are reported to take place on average each year, in which 325,000 people are hospitalized and 5,000 end up dead (Mead et al., 1999; WHO, 2002). There were 29,901 cases of Salmonella paratyphii infection, 21,068 cases of dysentery and 8,824 cases of Hepatitis-A infection in Turkey in 2004, according to the data supplied by the Ministry of Health. Data available on food-borne diseases and food poisoning fail to reflect the actual situation, as there is no statutory requirement in effect for the reporting of food-borne or related diseases in Turkey (Sanlier, 2009). A search for recorded incidents of food poisoning among the consumer public in Kazakhstan revealed no relevant data. *Corresponding author. E-mail: ntekgul@gazi.edu.tr or nevintekgul@gmail.com. Tel: +90 312 2162604.
The economic outcomes of food contamination and food-borne diseases are substantial: the 3.3 to 12 million cases attributable to pathogens in the US were estimated to generate some 6.5 to 35 billion dollars in costs for the central government on an annual basis during 1995. The five major food-borne epidemics that occurred in England and Wales in 1996 were predicted to cost 300 to 700 million pounds sterling, including medical treatment costs and claims associated with deaths throughout these disasters. Predictions state that 1 out of every 10 persons in the UK and 1 out of every 12 people in the US suffer from food-borne diseases each year, entailing dramatic financial burdens (Redmond and Griffith, 2003). The predicted annual cost of the estimated 11,500 daily cases of food poisoning in Australia has been calculated at 2.6 billion Australian dollars (WHO, 2002). Consumers represent the final link in the food safety chain. The purchasing power and level of awareness of consumers are important factors for ensuring food safety (Alpuğuz et al., 2009). Poor hygienic treatment of food during storage, processing and preparation may help create an environment suitable for bacterial growth, including fast- and easily-spreading species such as Campylobacter, Salmonella and other infectious agents (Baş et al., 2006). Many people are poisoned from day to day by consuming food produced in nonhygienic environments, prepared by people lacking sufficient knowledge or training on hygiene, made using unclean water, stored under inefficient conditions, insufficiently cleaned, or mixed with chemicals (Sanlier, 2009). Food can be mishandled at many points during food preparation, handling and storage, and several studies indicate that consumers have inadequate knowledge about the procedures needed to prevent foodborne illnesses at home (Medeiros et al., 2001; Meer and Misner, 2000; Redmond and Griffith, 2003; Woodburn and Raab, 1997). The prevention of foodborne illnesses requires educating food consumers on safe food handling practices (Jevsnik, Hlebec and Raspor, 2008). However, prior to education, it is important to assess food safety issues relevant to consumers. It has been demonstrated that level of education affects the level of knowledge or awareness of any casual consumer, in combination with age, sex and level of income (Angelillo et al., 2000; Redmond and Griffith, 2003; Bermudez-Millan et al., 2004; Mitkakis et al., 2004; Röhr et al., 2005; Sanlier, 2009; Sanlier, 2010). A majority of consumers in the Netherlands were found to perceive the expiry dates marked on product labeling as the storage time for food, unaware that such dates become invalid once the product's package is actually opened. It has also been observed that respondents with children aged four or younger were more careful and attentive to food product inserts than older consumers, who preferred to follow their experience when storing food and presented little or no knowledge about the storage conditions of newly emerging products. There was a great gap in knowledge among respondents on methods for storing food (Terpstra et al., 2005). Varying demographics and lifestyles, in combination with extraordinarily dangerous species of microorganisms and highly resistant bacteria, create situations in which foodborne diseases can become life-threatening, large-scale epidemics (Haapala and Probart, 2004).
The increasing need for education on food safety has only recently been noticed in the US and EU, with the first national initiatives aimed at effectively educating young consumers and especially the potential food preparers of the future. Consequently, the need for education becomes evident in this conjuncture. There is benefit in expanding the outreach of consumer education to cover wider communities through mass media, common public education, and formal education starting in early childhood. It is among the fundamental duties of the government to safeguard social wealth, improve and maintain high levels of health, ensure full public access to healthier and high-quality foodstuffs, and retain comprehensive control of food from the production stage to consumption by the end user, in order to ensure physically sound and mentally healthy newer generations (Anonymous, 2001). Besides, in both Turkey and Kazakhstan there is no single public authority vested with the power and responsibility to carry out controls regarding food safety, and legislative arrangements to govern the issue are lacking, which is alarming. Therefore, this study intends to demonstrate what attitudes are adopted by consumers living in Turkey and Kazakhstan, from different cultural and educational backgrounds, at the time of purchasing, as well as their levels of knowledge on and practical use of food safety.

MATERIALS AND METHODS

This study was performed between March and September 2010 on a total of 668 individuals from Turkey (348) and Kazakhstan (320), consisting of 310 males and 358 females, who had given full consent to participate on a voluntary basis, to compare the purchasing behaviors and levels of knowledge on food safety in both countries. The respondents were given a short briefing on the subject and purpose of this study and the general rules to follow at the beginning. Survey forms prepared for the purpose were administered by the authors themselves through face-to-face interviews. The average ages of respondents from Turkey and Kazakhstan were 32.87 ± 9.60 years and 27.72 ± 10.96 years, respectively.

Instrumentation

There are 30 questions aimed at determining the respondents' level of knowledge on food safety and 20 statements intended to identify their purchasing behaviors, on a scale developed by the researchers utilizing related articles (Haapala and Probart, 2003; Unusan, 2007; Sanlier, 2009). A pilot study was performed on a group of 50 consumers to check whether the questions on the scale were understood, and the forms were then reviewed and revised, with minor changes made to unclear questions. The answers given to questions relating to food safety and purchasing behaviors were evaluated as true or false. Scoring was made so that a "True" answer would yield one point while a "False" one would return zero points. The knowledge questions about food safety were evaluated in a score range of 0-30, while statements concerning purchasing behaviors covered a range of 0-20. Furthermore, the survey form was checked for reliability, as a result of which the Cronbach's alpha values were found to be 0.73 on the purchasing behavior scale and 0.79 on the food safety knowledge scale.
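For readers wishing to reproduce the reliability check, the following is a minimal sketch in Python of the Cronbach's alpha computation for dichotomously scored items; the pilot data here are randomly generated placeholders, not the study's responses.

import numpy as np

def cronbach_alpha(scores):
    # scores: (respondents x items) matrix of 0/1 points
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_var = scores.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the total scores
    return (k / (k - 1)) * (1.0 - item_var / total_var)

rng = np.random.default_rng(0)
pilot_knowledge = rng.integers(0, 2, size=(50, 30))   # 30 food safety items
pilot_behavior = rng.integers(0, 2, size=(50, 20))    # 20 purchasing items
print(cronbach_alpha(pilot_knowledge), cronbach_alpha(pilot_behavior))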
Data analysis

The data obtained were evaluated using the SPSS 13.0 statistical software package. For each answer provided to the food safety knowledge and purchasing behavior inquiries, the responses given by the consumers are broken down in a table both in numbers and percentages, and comparisons between countries employed the χ² test. The total scores were then calculated on both the food safety knowledge and purchasing behavior scales, which were subsequently compared between the two countries using the Student t test, with arithmetic means (x̄) and standard deviation (SD) values given. Also, the food safety knowledge scores, purchasing behavior scores and ages of consumers were correlated to study the relationships between them, with the Pearson correlation coefficient (r) used to determine the direction and strength of the relations. The evaluations took statistical significance levels of 0.05 and 0.01.
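As an illustration of the analysis pipeline described above, the sketch below applies the same three tests with SciPy; the arrays are hypothetical stand-ins for the per-respondent totals and for one item's country-by-answer counts, not the study's data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical per-respondent scale totals (the real values come from the forms)
knowledge = rng.normal(18.0, 4.0, 668)                   # 0-30 knowledge scale
behavior = 0.3 * knowledge + rng.normal(8.0, 2.0, 668)   # 0-20 behavior scale
country = np.repeat(["TR", "KZ"], [348, 320])

# Student t test comparing total scores between the two countries
t_stat, p_t = stats.ttest_ind(knowledge[country == "TR"], knowledge[country == "KZ"])

# Chi-square test on a 2x2 country-by-answer table for a single statement
table = np.array([[332, 16],    # Turkey: true, false counts (illustrative)
                  [301, 19]])   # Kazakhstan: true, false counts (illustrative)
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

# Pearson correlation between the two scale scores
r, p_r = stats.pearsonr(knowledge, behavior)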
RESULTS

Of the total of 348 Turkish respondents, 48.0% were male, 62.9% were married and 33.6% were high school graduates, while the Kazakh respondents were 55.3% female, 56.6% single and 34.4% high school graduates (Table 1). Generally speaking, the rates of true answers provided by Turkish consumers to questions on purchasing behaviors were higher than those provided by their Kazakh peers, and there was a statistically significant difference between the true answering rates of the two countries (p<0.01, p<0.05). An investigation of the true answering rates for statements on purchasing behavior revealed that 98.3% of the Turkish consumers said "I check the product package for soundness", 98.0% said "I look at the expiry dates on labels when purchasing products", 97.7% said "I check the cleanliness of the store or sales point where I purchase my food", 96.0% said "I check the confirmation seal of a veterinary body when buying meat", 93.4% said "No additives in foodstuffs, that is what matters", 92.5% said "I totally reject and return a product which I later discover to be defective", 92.0% said "I check if the product I bought has any adverse effects on human health", 88.8% said "I check if the product package is made of materials which would not harm or damage the contained food product", 85.9% said "I strictly follow the instructions printed on the label when storing or cooking the product", 84.8% said "I read the label information provided on packages before I buy foodstuffs", 83.9% said "I can comfortably consume any product regardless of where and how it was prepared and whether it is hygienic or not", 81.0% said "I am ready to pay more for food products grown without the use of agricultural growth hormones", 80.2% said "Food should have good nutritional qualities before good taste", and finally 76.1% said "I always take into account the nutritional value when I purchase food products". The corresponding rates for Kazakh consumers were 94.1, 88.8, 85.3, 83.8, 57.2, 78.4, 71.9, 75.6, 79.7, 73.1, 58.1, 50.0, 61.6 and 65.9%, respectively. There was a statistically significant difference between the two countries' rates of accuracy in correctly identifying the true answer to statements on purchasing behavior (p<0.01, p<0.05) (Table 2).

However, the true answering rates of consumers of both countries for certain statements relating to purchasing behavior were found to be considerably low. The Turkish consumers performed poorly and returned fewer correct answers to the statements "Food sold in hypermarkets and big shopping malls is of high quality" (43.4%), "Ads give all we need to know about the product" (40.8%), "Brands always contain high quality products" (36.2%), "Food with higher nutritional qualities is always more expensive" (31.0%), "The promotional items (gifts) given with foodstuffs influence my purchasing decisions" (30.2%) and "The price is what drives my decision on which foodstuff to purchase" (12.9%). The same situation also holds for Kazakh consumers. Their true answer rates for the above statements were found to be 35.0, 26.3, 31.6, 20.9, 28.1 and 18.8%, respectively (Table 2).

Based on the results in Table 3, only 4 out of a total of 30 statements concerned with food safety were found to show no statistically significant difference between the consumers of the two countries (p>0.05). A majority (95.4%) of Turkish respondents correctly affirmed the statement "Surfaces to be used for preparation of foodstuffs should be cleaned before operation", while only a few (30.7%) managed to give a true answer to the statement "Milk sold on streets may only be used after treatment with heat for half an hour". For the Kazakh side, a majority (78.4%) of the consumers correctly identified the statement "Peelable fruit and vegetables should be flushed with fresh running water", while only a few (12.8%) made the correct call on the statement "Leftovers should be put inside the fridge no later than two hours after consumption".

The true answering rates of consumers resident in Turkey to questions regarding food safety were found to be higher than those of Kazakh consumers. For instance, Turkish consumers correctly affirmed the statements "Surfaces to be used for preparation of foodstuffs should be cleaned before operation" (95.4%), "Peelable fruit and vegetables should be flushed with fresh running water" (93.7%), "Poultry like chicken, turkey, etc. should be washed before being cooked" (93.1%), "Hands are sources of contamination for food-borne diseases" (92.2%), "Hands contain the most intense populations of microorganisms in a body" (89.1%), "The bacteria passing to the food from the hands may create harmful toxins in the food" (86.5%), "Raw food and cooked food should be stored separately" (85.1%), "Thawed meat should not be frozen again" (83.6%), "Food cans with lumps and protrusions are unsuitable for use" (83.0%), and "Canned food may be stored on the shelves of their original warehouses" (79.6%). The true answering rates of Kazakh consumers for the above questions were 74.4, 78.4, 76.9, 77.8, 64.4, 57.5, 72.2, 43.1, 68.8 and 59.4%, respectively. There was a statistically significant difference between the two countries' rates of accuracy in correctly identifying the true answer to the above statements on food safety (p<0.01) (Table 3).
Some of the statements on food safety were correctly answered by fewer than 50% of the consumers from both countries. While Turkish consumers correctly assessed the statements "Wiping the used surfaces of a meat cutting board right after use with a piece of paper towel would prevent bacterial growth before the board can be used for cutting any other food product" (43.7%), "Food can be checked for taste to determine whether it is safe or not" (43.4%), "Frozen meat can be thawed on the counter or over central heating" (41.7%), "A wiping cloth can be used as a cleaning material when preparing meals" (35.3%) and "Milk sold on streets may only be used after treatment with heat for half an hour" (30.7%), the Kazakh side's rates of accuracy in providing the right answers were 23.4, 18.8, 25.9, 17.5 and 21.3%, respectively. There was a statistically significant difference between the two countries' rates of accuracy in correctly identifying the true answer to the above statements on food safety (p<0.01) (Table 3). While the Turkish consumers scored 14.43 ± 2.56 on the purchasing behavior test and 20.82 ± 4.20 on the food safety knowledge test, their Kazakh peers scored 11.84 ± 2.92 and 14.74 ± 3.86, respectively. The difference between the two study groups was found to be statistically significant (p<0.01).

Finally, the purchasing behavior scores of the consumers were analyzed against their food safety knowledge scores and their ages, and the resulting findings are compiled in Table 5. Positive and statistically significant correlations were found between the food safety knowledge and purchasing behavior scores (r=0.541, p<0.01), between age and purchasing behavior (r=0.325, p<0.01), and between food safety knowledge scores and age (r=0.148, p<0.01).

DISCUSSION

When consumers purchase foodstuffs, they guide the way in which the food safety system operates to the extent of the selectivity and rationalism reflected by their attitudes. In addition, they demand standards-compliant, reliable, healthy and inexpensive food items and thereby ensure that food production plants and outlets operate in compliance with applicable food laws and international norms and standards. Aware consumers also set the quality of food inspections and controls conducted by the government to protect them. Consumers, after becoming aware individuals, group together to form non-governmental organizations to enforce and ensure the effective operation of the food safety system, while pressing the government to enact laws for the protection of consumer rights (Dağ and Merdol Kutluay, 1999). Albayrak (2000) and Kucukkose (2002) found that consumers mostly check the product expiry dates, production dates and overall packaging of foodstuffs, whether the packages are recyclable or not, the type and quality of the material from which they are manufactured, their suitability for containing food and the state of soundness they present. Kolodinsky et al. (2008) observed that price is the topmost motivator of food purchasing behaviors and that the energy, nutritional elements and especially the amount of fat in the food as stated on the product label have more or less influence on the choices of consumers. Alpoguz et al.
(2009) found in a study performed on students that the students would not check whether the expiry dates had passed or packages had been opened when they bought foodstuffs; moreover, almost half of the youth never read the information provided on product labels when purchasing packed food. Another study conducted in Italy showed that the relatively expensive sale prices of vegetables and fruit grown through organic farming methods dampen the will of the consuming public to purchase such products, due to low income levels (Boccaletti and Nardella, 2000). Contemporary changes in the areas of education, communication and technology are also reflected in purchasing behaviors among the consumer public, changing their nutritional habits and cultures as a result of changes in the social culture caused by globalization (Öztop and Babaoğul, 2004). The dazzling urbanization rates, vast diversification of products, ads communicated through mass media, rise in the per capita average income and women's integration into business life affect the perspective and perception of food products in the consumer and therefore the purchasing behaviors. A consumer's check of the food product at the time of buying is essential for protecting the health of the consumer, while preventing him or her from being deceived economically. This study has revealed the need on the part of consumers living in both countries to have access to educational facilities to improve their inefficient purchasing behaviors in a more cautious manner, despite the fact that Turkish consumers appear to be more aware of food purchasing behaviors (Table 2).

Lack of food safety entails territorial and global problems. Food-borne diseases are frequently seen and reported in almost any country, whether advanced or underdeveloped, although they differ more or less from one country to another depending on social lifestyles and economic conditions (Unusan, 2007; Sanlier, 2009). It is crucial that conditions of hygiene are ensured in all processes from production to customer offering of foodstuffs, while keeping the consumer public well informed about the supply and use of safe food. Therefore, the accessibility of food should be handled as one common concern in all its integrity, and the entire process from production to marketing through the distribution network should be brought under permanent control (Anonymous, 2001). The urgent need for protecting and preserving the health of the consumer in terms of balanced and sufficient food consumption, which is a critical factor in people's gaining and retaining the ability to live, grow and age completely free of any immediate threats of disease by consuming reliable (healthy) and quality food products, and protection against all kinds of deceit when purchasing food, highlight the significance of the matter (Trepka et al., 2006). Roseman and Kurzynske, in a recent study (2006), found that age, sex, income and educational level all influence the food safety knowledge and behaviors of consumers.
Other studies show that women possess more information and higher perception than men (Bruhn and Schutz, 1999; Byrd-Bredbenner et al., 2008) and adults more than youth (Sanlier, 2009) in terms of food safety. Another study demonstrates that there is insufficient knowledge among the consumer public on food-borne diseases, hand-washing routines, purchasing food, separating raw and cooked food, thawing and cooling of frozen food and consumption of raw eggs, and therefore an obvious need for consumers to undertake education on food safety (Surujlal and Badrie, 2004). It has been reported, in a study conducted with the aim of determining the knowledge, attitudes and behaviors of Italian mothers on food-borne diseases and food processing practices, that 36.0% of the mothers studied knew or had heard about pathogenic microorganisms. It was also observed that level of education is an indicator of this knowledge, and the older and more educated women among the respondents showed a positive attitude and approach to food-borne diseases to a high degree (Angelillo et al., 2001). In another study examining the food safety knowledge and attitudes of consumers, it was clearly shown that a majority of consumers lacked any information about typhoid, gastro-intestinal inflammation and amebiasis, despite being knowledgeable about such food-borne diseases as cholera, food poisoning and jaundice (Sanlier et al., 2010). In a further study performed on US consumers, it was found that consumers were especially clueless about the microorganisms that cause food-borne diseases and the foodstuffs threatened by these microorganisms (Wilcock et al., 2004).

A recent study attempted to assess the level of knowledge of 904 consumers on food preparation and storage techniques both before and after a one-week-long education, using the survey method. The resulting findings revealed that the knowledge of consumers was incomplete and faulty for the most part, while the rate of wrong information dropped after the education. For example, while only 31.7% of the respondents knew that fridge temperature should be maintained in a range of 0 to 4°C, this rate grew to 78.4% after education. Besides, the numbers of people who had stored raw meat and cooked food in a wrong way in their refrigerators declined to 63 and 65, from baselines of 144 and 133, after the education (Ghebrehewet and Stevenson, 2003). As this study clearly suggests, education on food safety has a great influence on the consumer. Earlier studies also demonstrated the consumers' need for education on food safety (Bruhn and Schutz, 1999; Wilcock et al., 2004; Medeiros et al., 2004; Baş et al., 2006; Unusan, 2007). Most consumers in Italy recognize Staphylococcus aureus (92.9%) and Clostridium botulinum (87.5%) as food-borne pathogens. Some 53% of the consumers believe that instant food elevates the risk of food poisoning. The proportion of people knowing the requirement to separate raw food from cooked food is 84.6%. Some 90.4% of the consumers know that thawed food should never be frozen again (Angelillo et al., 2001). In another study, the knowledge of Turkish consumers about meat purchase, storage, preparation, cooking and serving in the domestic kitchen was investigated, and it was found that many individuals failed to store meat at the correct temperature or did not defrost meat correctly. It was also reported that food handling practices differed according to the socioeconomic group and the level of education of the consumers
(Karabudak, Bas and Kızıltan, 2008). In addition to the survey studies concerning food safety, there have also been some observation-based studies, in which people were found not to follow many food safety rules when preparing meals. Some 97% of the individuals volunteering for one study indicated that they would wash their hands with soap under running water before preparing food. Some 89% of the individuals stated that meat cutting boards should be washed with running water and soap, although only 60% put this into practice in everyday life (Bermudez-Millan et al., 2004). A study conducted in the US showed that although 86% of the consumer public are aware of the fact that hand-washing prevents food poisoning, only 66% actually washed their hands, and only after touching raw meat and poultry flesh (Wilcock et al., 2004).

At the end of this study, it was found that Turkish consumers had better levels of knowledge and information about food safety than their counterparts in Kazakhstan, but they were still below sufficient levels (Tables 3 to 5). Although food safety lies within the common authority and responsibility of the government, the food industry and the consumer, the greater burden falls upon the government as the ultimate body responsible for setting and enforcing legal arrangements covering the food sector (Soydal, 1999). Governments have to establish an environment that, in addition to ensuring social, political and economic stability and justice, would bring peace, and they must develop appropriate policies accordingly. From a global view, active cooperation seems a must between world governments, inter alia with UN institutions, financial institutions, intergovernmental organizations and non-governmental organizations, to ensure food safety for all (Özel, 2003). The first measure to take, and the initial step toward performing a risk analysis in the field of food safety, should be to educate the consuming public on food safety. Savvy consumers present a motivating power for producers and industrialists in producing safe foodstuffs and for the government in establishing wide and effective control over food. Not only food producers but also food industrialists should treat the offering of safe food to the consumer public as a social liability. Misinformation of the public on food safety should be prevented. Professionals, scientists and the media should assume responsibility for this matter. The results obtained from the present study highlight the importance of education once again, for which reason there is a felt need to educate the consuming public on food safety. The data gathered from this study have revealed that there is an urgent need for food safety education in this target group. An effective food safety education program should cover information concerning temperature control of food, proper food preparation practices, prevention of cross contamination, suitable clean-up procedures, causative food-borne illness agents, high risk groups, and other contributing factors to food-borne diseases and prevention strategies (Osaili et al., 2011). However, means should be provided to help
seed the messages that any food safety education program would deliver in the minds of the consumers. Following its completion, the education should be repeated at regular intervals to ensure that the knowledge learned throughout the classes translates into attitude and that attitude results in behavior, with the continuity of education assured through surveillance controls. It is the common belief and opinion of the authors of this study that joint research and studies performed through increased cooperation between Turkey and Kazakhstan, two countries with a common past and culture, would contribute much to raising public awareness. Meanwhile, proper attention should be given to including food safety in action plans, intersectoral cooperation should be developed between the industries of both countries, and efforts pursued in that context should gain effectiveness and speed in both states.

Table 2. The distribution of consumers' food purchasing behavior.
Table 3. Distribution of correct answers on the food safety knowledge questionnaire.
Table 4. Knowledge of food safety and purchasing behavior scores according to country.
Table 5. Correlation between age, participant scores of food safety knowledge and purchasing behavior (r).
Transport and chaos in lattice Sachdev-Ye-Kitaev models

We compute the transport and chaos properties of lattices of quantum Sachdev-Ye-Kitaev islands coupled by single fermion hopping, and with the islands coupled to a large number of local, low energy phonons. We find two distinct regimes of linear-in-temperature ($T$) resistivity, and describe the crossover between them. When the electron-phonon coupling is weak, we obtain the 'incoherent metal' regime, where there is near-maximal chaos with front propagation at a butterfly velocity $v_B$, and the associated diffusivity $D_{\rm chaos} = v_B^2/(2 \pi T)$ closely tracks the energy diffusivity. On the other hand, when the electron-phonon coupling is strong, and the linear resistivity is largely due to near-elastic scattering of electrons off nearly free phonons, we find that the chaos is far from maximal and spreads diffusively. We also describe the crossovers to low $T$ regimes where the electronic quasiparticles are well defined.

Most strongly correlated metals exhibit "strange" or "bad" metal behavior with a linear-in-temperature (T) resistivity, with values which can exceed the Mott-Ioffe-Regel limit [1]. Recent studies [2-9] (and some earlier related work [10]) have shown that such behavior appears naturally in lattice models of coupled 'islands', with each island described by an N-orbital Sachdev-Ye-Kitaev (SYK) model [11-13] of random all-to-all two-body (four-fermion) interactions. When the coupling between the islands is a two-body interaction [2,3], we obtain a non-Fermi liquid metal with a T-independent resistivity. However, with a one-body hopping between islands as in Fig. 1 (the hopping can be random or non-random) [4-7], we obtain a linear-in-T resistivity for E_c ≪ T ≪ U, where U is the root-mean-square interaction strength within an island, t_0 ≪ U is the root-mean-square one-body hopping, and E_c = t_0²/U. We note in passing that a pair of SYK islands of Majorana fermions with identical two-body interactions, coupled by one-body hopping, have been used to describe eternal traversable wormholes in a dual gravity theory [14-16]. We also note that we take the large N limit with t_0/U fixed, with couplings in the Hamiltonian scaled with N as in (3.1), and so implicitly assume, e.g., that t_0 ≫ U/N.
FIG. 1. Schematic of the lattice of SYK islands, each with N orbitals with two-body interaction U. The islands are coupled with one-body hopping t_0.

A remarkable consequence of the SYK description is that it opens up insightful connections between strange metal transport and many-body quantum chaos [12,17-20]. The chaos is characterized by a butterfly velocity, v_B, and a Lyapunov rate, λ_L, and it has been argued [18] that under certain conditions there is an upper bound on the Lyapunov rate, λ_L ≤ 2πT as T → 0. We can combine these chaos characteristics to obtain a 'chaos diffusion constant' D_chaos = v_B²/λ_L. Using insights from holographic models, Blake [21,22] argued that there was a close connection between D_chaos and the diffusivities of strange metal transport. Subsequent work noted that while additional parameters appeared in the value of the charge diffusivity [23], there was indeed a close connection [2,24,25] between the values of D_chaos and the energy diffusivity, D_E. The close connection between chaos and energy diffusion is also a central feature of recent quantum hydrodynamic descriptions [26-28] of strongly interacting fluids.

In the first part of the present paper, we study the coupled SYK models with one-body hopping introduced by Song et al. [4]. We will extend their transport results to computations of out-of-time-order correlators (OTOCs). In extracting the chaos parameters from the OTOCs, we will employ recent insights on the structure of OTOCs by Gu and Kitaev [29]. They argued that large N systems of the type we examine have OTOCs in frequency (ω) and momentum (q) space of the form

$$\mathrm{OTOC}(q, \omega) \sim \frac{1}{N \cos\left(\lambda_L(q)/(4T)\right)}, \tag{1.1}$$

where the Lyapunov rate λ_L(q) is now q-dependent. Differing ways of extracting the butterfly velocity, v_B, from the q dependence of λ_L(q) have been discussed in the literature. Gu and Kitaev argued that in a regime close to maximal chaos, the appropriate method relies on the pole of Eq. (1.1), which appears when the Lyapunov rate reaches the maximal value,

$$\lambda_L(q_1) = 2\pi T. \tag{1.2}$$

This happens (as we will show by explicit computation in our model) when the momentum is purely imaginary, q_1 = i|q_1|. From the value of q_1, we can now define a butterfly velocity and a chaos 'diffusion constant' by

$$v_B = \frac{2\pi T}{|q_1|}, \qquad D_{\rm chaos} = \frac{v_B^2}{2\pi T}. \tag{1.3}$$

We will compute the value of D_chaos for the model of Song et al. [4]; we find that it closely tracks the energy diffusivity, D_E, in the incoherent strange metal regime, as was noticed in earlier models [24,25].
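To spell out the step from the pole position to Eq. (1.3), here is a short derivation sketch, using only Eq. (1.2) and the Fourier transform back to real space. Near the pole, the growing part of the OTOC behaves as

$$\mathrm{OTOC}(x, t)\big|_{\rm pole} \sim e^{\,i q_1 x + \lambda_L(q_1)\, t} = \exp\left[2\pi T\, t - |q_1|\,|x|\right] = \exp\left[2\pi T\left(t - \frac{|x|}{v_B}\right)\right],$$

so the front propagates at v_B = 2πT/|q_1|, and the associated diffusivity is D_chaos = v_B²/(2πT) = 2πT/|q_1|².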
The second part of our paper studies the role of phonons in strange metal transport. Our motivation is drawn from recent observations of the thermal diffusivity of a strongly-coupled 'electron-phonon soup' in cuprate superconductors [30,31]. Here we employ the model of strong electron-phonon coupling introduced by Werman et al. [32,33], and combine it with the model of strong electron-electron interactions by Song et al. [4]. However, our approach does have some limitations, and so a direct contact with observations [30,31] is not possible at this stage. In our framework, the phonons largely act as a heat bath of free oscillators, which influences the electron dynamics. However, the feedback from the electrons to the phonon dynamics is small, and so it is not appropriate to consider the combined system as a single chaotic soup characterized by a single butterfly velocity.

We will show that, provided the electron-phonon coupling is not too strong, the phonons do not alter the basic characteristics of the strange metal theory without phonons discussed in Section III, and summarized in Fig. 2. The main influence of the phonons is in altering the slope of the linear-in-T resistivity, and various related numerical prefactors. These corrections are characterized by a single dimensionless parameter gt_0/U, where g measures the strength of the electron-phonon coupling. This additional parameter introduces a degree of non-universality in our results, which we expect will be overcome in a theory which uses a more self-consistent approach.

For large electron-phonon coupling gt_0/U ≫ 1, the linear-in-T resistivity persists, but the chaos properties are far from maximal and exhibit diffusive chaos propagation. A summary of the crossovers in the transport and chaos properties in a model with both electron-electron and electron-phonon interactions appears in Fig. 2.

The plan of our paper is the following. Section II recalls the description of the structure of scrambling from Ref. 29.
We will start Section III by reviewing previous results on the SYK-based strongly correlated metal (the t-U model). Then we present the calculation of the scrambling rate and butterfly velocity in this t-U model. We discuss a generalization of the t-U model to incorporate phonons in Sections IV-VII.
FIG. 2. Crossovers as a function of T for gt_0/U ≪ 1 and gt_0/U ≫ 1, where g is a dimensionless measure of the electron-phonon coupling. The labeled regimes are the heavy Fermi liquid, the incoherent metal, nearly-free electrons, and strong electron-phonon scattering. The two chaos velocities v_* and v_B are defined as in Ref. 29. The resistivity is ρ (in units of h/e²), the thermal conductivity is κ (in units of k_B²T/ℏ), and the thermal diffusivity is D_E. The chaos exponent λ_L, and the diffusivities D_chaos and D_*, are defined in Section II. There is near-maximal chaos and front propagation with velocity v_B only in the "incoherent metal" regime, which has v_* < v_B. The other regimes have v_* > v_B and diffusive propagation of far-from-maximal chaos. Here κ_phonon refers to the phonon drag correction, discussed in Section V E. The values above do not include the saturation effects discussed in Section V D.

II. DESCRIPTION OF SCRAMBLING

In this section, we review the description of scrambling in a many-body system, following Ref. 29. We are going to define three quantities which we will calculate in Section III for the t-U model. They are the scrambling rate λ_L, the short-distance scrambling diffusion coefficient D_*, and the long-distance scrambling diffusion coefficient D_chaos.

A. Electron Out of Time Order Correlator

We will use the following out-of-time-order correlator (OTOC) to characterize the scrambling:

$$\mathrm{OTOC}(x; t_1, t_2, t_3, t_4) = \frac{1}{N^2} \sum_{a,b} \mathrm{Tr}\left[\, y\, c_{ax}(t_1)\, y\, c_{b0}^\dagger(t_3)\, y\, c_{ax}^\dagger(t_2)\, y\, c_{b0}(t_4) \,\right], \quad \text{where } y^4 = \frac{\exp(-\beta H)}{Z}. \tag{2.1}$$

Here t_1 ≈ t_2 ≫ β, t_3 ≈ t_4 ≈ 0, and the operators are evenly spaced along the imaginary time circle for our convenience. In the time range β ≲ t ≲ λ_L⁻¹ ln N, the OTOC is expected to grow exponentially,

$$\mathrm{OTOC} \sim \frac{e^{\lambda_L t}}{N}, \tag{2.2}$$

where t is the center of mass time separation and λ_L is the Lyapunov exponent or scrambling rate. We comment on the regularization y in the above definition. For a thermalizing system, we expect that the details of the regularization will be washed out after a thermal scale 1/β, and therefore will not affect the Lyapunov exponent [18]. Technically speaking, the Bethe-Salpeter equation approach we use searches for unstable eigenmodes on the double Keldysh contour from a generic initial condition. The shift of regularization will affect the definition of the Wightman propagator G_W at short times (see Appendix D), which enters into the Bethe-Salpeter equation. Consequently, for the eigenmodes of the equation, only the short time behavior is affected, not the Lyapunov exponent that characterizes the growth of chaos in a large time window. For the SYK model, these expectations were verified in Ref. 34, and we expect similar results here.

In general, scrambling can propagate in space, and thus the OTOC depends on x. If we Fourier transform position x to momentum q, we will obtain a q-dependent scrambling rate λ_L(q). For now we are interested in the temporal growth of the OTOC, and consider the translationally invariant scrambling rate λ_L ≡ λ_L(q = 0). Due to the presence of SYK type interactions, we expect that at high temperatures T ≫ E_c, λ_L saturates the chaos bound, i.e. λ_L/T ≈ 2π, and at low temperatures T ≪ E_c, we expect that λ_L is given by the Fermi liquid inelastic scattering rate, λ_L/T ∼ T/E_c (see Fig. 2).
B. Spatial Propagation of Scrambling and Butterfly Velocity

We now shift the focus to the spatial propagation of scrambling. To discuss the propagation, it is convenient to consider the Fourier transform of the OTOC in both space and time,

$$\mathrm{OTOC}(q, \omega) = \int dx\, dt\, dt_{21}\, dt_{43}\; e^{-iqx + i\omega t}\, \mathrm{OTOC}(x; t_1, t_2, t_3, t_4), \tag{2.4}$$

where t = (t_1 + t_2)/2 − (t_3 + t_4)/2, t_21 = t_2 − t_1, t_43 = t_4 − t_3, and by time-translation symmetry we have only integrated over three time variables. As mentioned before, λ_L depends on momentum, and this encodes the information about scrambling propagation. The exponential growth in Eq. (2.2) is translated to a pole singularity OTOC(q, ω) ∼ c/(ω − iλ_L(q)).

FIG. 3. Space-time regions of the OTOC. In region A, the OTOC grows diffusively, see (2.12). In region B, the OTOC shows a wave-front propagation as well as maximal chaos, see (2.14). In region C, the OTOC does not grow. In region D, the OTOC has saturated.

As discussed in Section I, we will analyze the q-dependent OTOC using the ladder identity of Ref. 29: the prefactor of the OTOC contains a pole in q, as in Eq. (1.1). The pole sits on the imaginary q-axis at q_1 = i|q_1|, where λ_L(q_1) = 2πT, as noted in Eq. (1.2).

To obtain the propagation of scrambling, we Fourier transform the OTOC back to real space:

$$\mathrm{OTOC}(x, t) \sim \int \frac{dq}{2\pi}\, \frac{e^{iqx + \lambda_L(q)\, t}}{\cos\left(\lambda_L(q)/(4T)\right)}, \tag{2.6}$$

where we have evaluated the frequency integral by picking up the ω-pole and omitted regular factors which do not change the qualitative behavior of the OTOC. At large t and x, the above integral can be evaluated using the saddle point approximation. For convenience we set d = 1, but the following discussion easily generalizes to higher dimensions. Demanding that the exponent in (2.6) be stationary with respect to q, we obtain the saddle point q_* on the imaginary q-axis, defined by

$$\left.\frac{d\lambda_L(i|q|)}{d|q|}\right|_{|q| = |q_*|} = \frac{|x|}{t}, \tag{2.7}$$

and the saddle point yields

$$\mathrm{OTOC}(x, t) \sim \exp\left[\lambda_L(i|q_*|)\, t - |q_*|\, |x|\right]. \tag{2.8}$$

Recalling the definition of q_1 in Eq. (1.2), we note that if |q_1| < |q_*|, i.e. the pole sits between the saddle point and the real q-axis, we will hit the pole when we deform the integration contour, and we must include the pole contribution to the OTOC:

$$\mathrm{OTOC}_{\rm pole}(x, t) \sim \exp\left[2\pi T\left(t - \frac{|x|}{v_B}\right)\right], \tag{2.9}$$

where v_B was defined in Eq. (1.3).

FIG. 4. Schematic behavior of the Lyapunov exponent λ_L(q) on the imaginary q-axis at strong and weak interaction, respectively (from Ref. 29). The pole in the prefactor of the OTOC sits at q_1, where λ_L(q_1) = 2πT. The butterfly velocity v_B (1.3) is the slope of the blue lines. The threshold velocity v_* is the tangent slope (red lines) of λ_L(q) at q_1. At strong interaction (a) v_* < v_B, and at weak interaction (b) v_* > v_B.

As shown in Fig. 4, λ_L(i|q|) is a convex function of |q|, and therefore we can rewrite the condition |q_1| < |q_*| as

$$\frac{|x|}{t} > v_* \equiv i\lambda_L'(q_1). \tag{2.10}$$

We refer to v_* as the threshold velocity, with the following meaning: if |x/t| > v_*, q_1 will be hit during the deformation of the integration contour, so the pole will contribute to the OTOC, and vice versa.
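As a numerical illustration of this construction, the following minimal sketch assumes a toy convex profile λ_L(i|q|) of the quadratic form (2.13), locates the pole and the saddle, and tests whether the contour deformation crosses the pole. All parameter values are illustrative assumptions.

import numpy as np
from scipy.optimize import brentq

# Toy convex profile on the imaginary q axis, of the form (2.13); units with T = 1.
lam0, Dstar, T = 5.0, 0.8, 1.0
lam = lambda q: lam0 + Dstar * q**2   # lambda_L(i|q|); needs lam0 < 2*pi*T
dlam = lambda q: 2 * Dstar * q        # its slope (the tangent velocity)

# Pole position |q1| from lambda_L(i|q1|) = 2*pi*T, Eq. (1.2)
q1 = brentq(lambda q: lam(q) - 2 * np.pi * T, 0.0, 50.0)
v_B = 2 * np.pi * T / q1              # butterfly velocity, Eq. (1.3)
v_star = dlam(q1)                     # threshold velocity, Eq. (2.10)

def pole_contributes(x, t):
    # True if |q1| < |q*|, i.e. the contour deformation crosses the pole
    q_star = brentq(lambda q: dlam(q) - abs(x) / t, 0.0, 1e3)  # Eq. (2.7)
    return q1 < q_star

print(v_star < v_B)                   # near-maximal regime has v* < vB
print(pole_contributes(10.0, 1.0))    # far outside the front: pole contributes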
We now proceed to discuss behaviors of the OTOC in different regimes. Let t_scr denote the scrambling time, after which the OTOC saturates to O(1). Then the propagation of the OTOC has the following behavior (see Fig. 3):

1. In the incoherent metal regime T ≫ E_c, λ_L(0) is close to maximal and we have v_* < v_B (see (a) of Fig. 4). At short distances v_B(t − t_scr) < |x| < v_* t and |q_*| < |q_1|, so the OTOC receives a contribution only from the saddle point, and it has a diffusive behavior in region A of Fig. 3. We further assume that q_* is small, so that a Taylor expansion of λ_L(q_*) is valid, and we arrive at a diffusive OTOC,

$$\mathrm{OTOC}(x, t) \sim \frac{1}{N} \exp\left[\lambda_L(0)\, t - \frac{x^2}{4 D_* t}\right], \tag{2.12}$$

where the diffusion coefficient D_* is defined by

$$\lambda_L(q) \approx \lambda_L(0) - D_*\, q^2. \tag{2.13}$$

The form of λ_L(q) is similar to the one that appears in the linearized reaction-diffusion equation (also known as the Fisher-Kolmogorov-Petrovskii-Piskunov equation; for its relation to the OTOC see, e.g., Ref. 35). However, the comparison is merely formal: the diffusion coefficient D_* here does not necessarily correspond to physical transport; in particular, it does not agree with the energy diffusion, as will be shown in Fig. 7. In contrast, for weakly coupled theories [35-37] where a quasi-particle picture still applies, the coefficient D_* can be related to the diffusion pole of the Green's function. For the incoherent metal, we do not know the exact microscopic origin of D_*, since the system is strongly interacting. We may speculate that the appearance of D_*, as well as the slowing down of the butterfly effect (compared to the maximal chaos at long distance discussed below), are attributed to the "conformal matter" inherited from the SYK model.

At long distances max{v_* t, v_B(t − t_scr)} < |x| < v_B t and |q_*| > |q_1|, the OTOC contains contributions from both the saddle point and the pole. Since the pole contribution grows with time at the maximal chaos rate, it dominates the OTOC. The OTOC shows a wave-front propagation with maximal chaos in region B of Fig. 3:

$$\mathrm{OTOC}(x, t) \sim \frac{1}{N} \exp\left[2\pi T\left(t - \frac{|x|}{v_B}\right)\right]. \tag{2.14}$$

For later comparison to energy transport, it is convenient to introduce a 'diffusion coefficient' (as in Eq. (1.3)),

$$D_{\rm chaos} = \frac{v_B^2}{2\pi T}. \tag{2.15}$$

We refer to D_* as the short-distance diffusion coefficient and D_chaos as the long-distance diffusion coefficient.

2. In the Fermi-liquid regime, λ_L(0) is far from maximal and v_* > v_B (see Fig. 4(b)). As a consequence, even if the pole contributes to the OTOC, it is exponentially small relative to the non-growing part, so we always observe a diffusive OTOC as in Eq. (2.12). In terms of Fig. 3, region A now dominates the chaos, and region B completely disappears.

III. SCRAMBLING AND BUTTERFLY VELOCITY IN THE t-U MODEL

A. The t-U model

In this section, we review the basic properties of the t-U model [4]. In the t-U model, there is an SYK-type island on each site of the lattice. Each SYK island consists of N flavors of electrons, and the electrons interact with each other via a four-fermion random interaction U_{abcd,x}. Electrons can also hop to adjacent islands with a random amplitude t^{ab}_{xx'}. The Hamiltonian of the t-U model, illustrated in Fig. 1, is

$$H = -\sum_{\langle xx'\rangle} \sum_{ab} t^{ab}_{xx'}\, c^\dagger_{ax} c_{bx'} + \sum_x \sum_{abcd} U_{abcd,x}\, c^\dagger_{ax} c^\dagger_{bx} c_{cx} c_{dx} - \mu \sum_{ax} c^\dagger_{ax} c_{ax}. \tag{3.1}$$

Here c_{ax} is the electron annihilation operator, where x labels the lattice site and a labels the flavor. t^{ab}_{xx'} and U_{abcd,x} are Gaussian random couplings satisfying

$$\overline{|t^{ab}_{xx'}|^2} = \frac{t_0^2}{zN}, \qquad \overline{|U_{abcd,x}|^2} = \frac{2U^2}{N^3}, \tag{3.2}$$

where z is the coordination number of the lattice. We will work in the limit of U ≫ t_0 and large N. The analysis of Ref. 4 shows the theory has a coherence energy scale E_c = t_0²/U. Various properties of the system, such as the entropy S, the conductivity σ, and the thermal conductivity κ/T, are universal functions of T/E_c. When T < E_c, the system demonstrates heavy-Fermi-liquid-like behavior: the entropy is proportional to T, with a large slope compared to free fermions. The resistivity ρ = 1/σ and the inverse thermal conductivity T/κ grow quadratically in T, suggesting the existence of quasiparticle excitations. When T > E_c, the system behaves as an incoherent metal, where the entropy saturates to a constant value predicted by the SYK model, and both ρ and T/κ grow linearly with T.
B. Green's Function

In this section we review the Green's functions of the t-U model in both imaginary time and real time. The imaginary time Green's function is useful for thermodynamics, and the real time Green's function is useful for transport and scrambling.

Imaginary Time

We start with the imaginary time Green's function. We first perform disorder averaging over t and U; due to the self-averaging property of the SYK model, we can do this with a single replica. After that we introduce the Green's function bilinear G_x(τ_1, τ_2) = −(1/N) Σ_a ⟨T_τ c_{ax}(τ_1) c†_{ax}(τ_2)⟩, and the self-energy Σ(τ_1, τ_2) as a Lagrange multiplier to impose the definition, and we obtain the imaginary time action S_β[G, Σ], Eq. (3.4). In the large N limit, we obtain the equations of motion from the saddle point expansion:

$$G(i\omega_n) = \frac{1}{i\omega_n + \mu - \Sigma(i\omega_n)}, \qquad \Sigma(\tau) = t_0^2\, G(\tau) - U^2\, G(\tau)^2\, G(-\tau). \tag{3.5}$$

The above equations are solved numerically by combining iteration and fast Fourier transform (FFT). See Appendix A for more details.
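To make the iteration concrete, the following is a minimal sketch of such a solver for Eq. (3.5) at half filling (µ = 0). The parameter values, grid sizes and mixing factor are illustrative assumptions rather than the paper's settings, and a dense transform replaces the FFT of Appendix A for clarity.

import numpy as np

U, t0, beta = 1.0, 0.01, 100.0
M, L = 512, 1024                                  # Matsubara freqs / tau points
wn = (2 * np.arange(-M, M) + 1) * np.pi / beta    # fermionic frequencies
tau = (np.arange(L) + 0.5) * beta / L             # grid on (0, beta)

to_tau = np.exp(-1j * np.outer(tau, wn)) / beta       # G(i w_n) -> G(tau)
to_wn = np.exp(1j * np.outer(wn, tau)) * (beta / L)   # Sigma(tau) -> Sigma(i w_n)

G_w = 1.0 / (1j * wn)                             # free-fermion initial guess
for it in range(1000):
    # Subtract the 1/(i w_n) tail, whose transform is -1/2 on (0, beta)
    G_tau = (to_tau @ (G_w - 1.0 / (1j * wn))).real - 0.5
    G_mtau = -G_tau[::-1]                         # G(-tau) = -G(beta - tau)
    S_tau = t0**2 * G_tau - U**2 * G_tau**2 * G_mtau   # Eq. (3.5)
    S_w = to_wn @ S_tau
    G_new = 1.0 / (1j * wn - S_w)                 # Dyson equation at mu = 0
    if np.max(np.abs(G_new - G_w)) < 1e-8:
        break
    G_w = 0.5 * G_w + 0.5 * G_new                 # damped update for stability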
Real Time

Next, we turn to the computation of the real time Green's function. We use the Keldysh formalism (see Appendix B) to compute the retarded and the advanced Green's functions. In the Keldysh formalism, the time contour for the path integral is doubled into a forward branch s = + and a backward branch s = −. The Keldysh action is

$$S_K = S[\psi_+] - S[\psi_-],$$

where S is the original action and ψ_+, ψ_− are fields supported on the forward and the backward branch, respectively.

We perform the disorder average over t, U of the Keldysh action, and then introduce the Green's function bilinear iG^{ss'}_x(t, t') = (1/N) Σ_a ⟨c_{axs}(t) c†_{axs'}(t')⟩ and the Lagrange multiplier Σ^{ss'}_x(t, t') to impose the definition. As a result, we obtain the Keldysh action S_K of Eq. (3.6). Here s, s' = ±1 denote the two branches, and the Pauli matrix σ_z acts on the ss' indices. The Green's functions G^{ss'} can be combined into retarded, advanced and Keldysh Green's functions using the Keldysh rotation (3.7). The equations of motion can be obtained by varying the Keldysh action S_K. However, there are multiple solutions corresponding to different temperatures. We fix the temperature by supplementing the fluctuation-dissipation relation

$$G^K(\omega) = \tanh\left(\frac{\omega}{2T}\right)\left[G^R(\omega) - G^A(\omega)\right]. \tag{3.8}$$

The equations of motion now take the real-time form (3.9), the analog of Eq. (3.5), with the retarded propagator G^R(ω) = [ω + µ − Σ^R(ω)]⁻¹. The above equations can also be solved using iteration and FFT (see Appendix A).

C. Computation of the OTOC

We will use the kinetic equation method to numerically obtain the scrambling rate λ_L(q) as a function of momentum q, and then use the ladder identity to compute the diffusion coefficients D_*, D_chaos defined in Section II.

We first derive equations for the retarded OTOCs f_1 and f_2, defined in Eq. (3.10). We have introduced two types of OTOCs, f_1 and f_2, because in the complex SYK model there are two ways to arrange the fermionic arrows in the diagrammatic representation. The retarded OTOCs f_1 and f_2 have the same Lyapunov exponent as the OTOC defined in (2.1) but have simpler diagrammatic expansions.

The Fourier transform f_1(q, Ω, Ω', ω) is defined similarly to Eq. (2.4), and the Fourier transform of f_2 is defined likewise, except that the signs of t_43 and t_21 are opposite to those in Eq. (2.4).

To get the Lyapunov exponent, we search for poles of f_1 and f_2 at ω = iλ_L(q). The retarded OTOCs f_1, f_2 have the same Lyapunov exponent λ_L(q) as the non-retarded version (2.1), but they do not have the pole structure of Eq. (1.1), so they show only diffusive propagation.

As OTOCs, f_1 and f_2 can be conveniently computed using Keldysh perturbation theory, where the path integral is adapted to include multiple time folds, as shown in Fig. 5.

FIG. 5. The time contour C for calculating the OTOC. The contour is drawn such that real time increases to the left, which is convenient when acting with operators on the left.

The time contour C consists of two real folds and two imaginary segments, with t = 0 identified with t = −iβ. It is convenient to relabel the fields on each real fold using r-a variables,

$$\psi_{i,r} = \frac{\psi_{i,+} + \psi_{i,-}}{\sqrt{2}}, \qquad \psi_{i,a} = \frac{\psi_{i,+} - \psi_{i,-}}{\sqrt{2}},$$

where i = 1, 2 labels the two real folds as in Fig. 5, and +/− labels the future/past directed segment. More details of the Keldysh perturbation theory are included in Appendices B and C. The OTOCs (3.10) can now be written as a path integral over the contour C, as in Eq. (3.14), with a similar expression for f_2.

We then expand Eq. (3.14) in powers of interaction vertices, and sum over all the ladder diagrams to obtain the Bethe-Salpeter equation for f_1 and f_2 at leading 1/N order.

To begin with, it is convenient to sum over the hopping vertices of Eq. (3.2), because they contain all the momentum dependence. This step is justified by the fact that the hopping vertex is a momentum-dependent number t_0² µ(q) (see (3.16)) in the functional space, and therefore it commutes with the other operators. Diagrammatically, the sum L(q, Ω, ω) is a geometric series of hopping rungs (3.15); in terms of propagators and couplings, this is

$$L(q, \Omega, \omega) = \frac{G^R(\Omega + \omega/2)\, G^A(\Omega - \omega/2)}{1 - t_0^2\, \mu(q)\, G^R(\Omega + \omega/2)\, G^A(\Omega - \omega/2)}, \tag{3.16}$$

where µ(q) = (1/z) Σ_a e^{iq·a} is a sum over lattice neighbors. At long wavelength, µ(q) = 1 − αq² + ⋯, and in practice we take µ(q) = cos(q), i.e. taking the system to be a 1D chain with unit lattice spacing.

We move on to include the other diagrams. The Bethe-Salpeter equation can then be written down diagrammatically, where each right/left propagator is a retarded/advanced propagator, each vertical propagator is a Wightman propagator G_W (see Appendix D), and L̄ is obtained from L by reversing all arrows. The Bethe-Salpeter equations above can be simplified in the following manner:

1. The two legs on the right of f_1, f_2 are not relevant, so we can suppress the Ω' dependence.

2. At half filling, G^R(t) and G^A(t) are purely imaginary, and it follows that L̄ = L.

3. At half filling, the electron Wightman propagator G_W(t) is an even function, and it follows that all the electron rung diagrams agree up to symmetry factors.

With the above simplifications, the Bethe-Salpeter equation can be written as the coupled equations (3.18) for f_1 and f_2 (we have suppressed the Ω argument, and the numerical factors are explained in Appendix C). Note that the argument of f_2 carries a minus sign to match the convention of the Fourier transform, and the factor of 1/2 is due to combinatorics.

The Bethe-Salpeter equations (3.18) are coupled equations for f_1, f_2. To obtain the Lyapunov exponent, we add the two equations together and obtain an equation, (3.20), for the sum F = f_1 + f_2. As a sanity check, we can take the t_0 → 0 limit, and we recover the Bethe-Salpeter equation for the original SYK model [19]. If we repeat the exercise for the difference f_1 − f_2, we find that it has no exponential growth in the t_0 → 0 limit, and we conclude that the sum F is the correct place to look for the Lyapunov exponent.

Following the approach of Ref. 24, we numerically extract λ_L(q) from Eq. (3.20). The equation can be written as a matrix equation F = L + (LK)_{ω,q} F, and hence F = (1 − (LK)_{ω,q})⁻¹ L. As discussed earlier, the exponential growth in time translates into a pole at ω = iλ_L(q), and this implies that the matrix M_{ω,q} = 1 − (LK)_{ω,q} is singular at ω = iλ_L(q). Our algorithm sweeps ω along the imaginary axis and searches for the point where the smallest eigenvalue of M_{ω,q} vanishes. The details of the numerical implementation are in Appendix A.
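The sweep described above can be organized as a one-dimensional root search. The sketch below assumes a hypothetical routine build_LK(omega, q) that assembles the discretized ladder kernel (LK)_{ω,q} of Eq. (3.20) on a frequency grid; everything else follows the algorithm stated in the text.

import numpy as np
from scipy.optimize import brentq

def smallest_eig(build_LK, lam, q):
    # Smallest real part among the eigenvalues of M = 1 - (LK) at omega = i*lam;
    # for this sketch we track real parts, since the growing mode corresponds
    # to a real eigenvalue of the kernel crossing zero.
    LK = build_LK(omega=1j * lam, q=q)
    M = np.eye(LK.shape[0]) - LK
    return np.linalg.eigvals(M).real.min()

def lyapunov_rate(build_LK, q, lam_lo=1e-4, lam_hi=10.0):
    # lambda_L(q): the point on the imaginary-omega axis where the smallest
    # eigenvalue of M_{omega,q} vanishes
    return brentq(lambda lam: smallest_eig(build_LK, lam, q), lam_lo, lam_hi)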
Before moving to the results, we make a few comments on the relation between the retarded OTOCs f_{1,2} and the regular OTOC defined in (2.1).

• The retarded OTOCs contain linear combinations of regular OTOCs with the same growth exponent. Therefore, to obtain λ_L(q) for each momentum q, one can choose to work with any type of OTOC. As we have mentioned, we choose to work with the retarded OTOCs because they have simpler diagrams. More explicitly, there is no interaction vertex on the imaginary time circle for the retarded OTOCs, due to the cancellation among the terms contained in f_1 and f_2.

• We are also interested in the spatial propagation of the scrambling, which relies on the pole structure of the OTOC in momentum space. For this purpose, we need to determine the prefactors, which are beyond the kinetic equation method and depend on the type of OTOC we choose. We apply the ladder identity to obtain the prefactor of the regular OTOC. This decision is explained as follows. Physically, the regular OTOC (2.1) contains all the contributions to the scrambling. In contrast, the retarded OTOCs may exclude some degrees of freedom; e.g., in the single-site SYK model, the contributions from the reparametrization modes (i.e. Schwarzian modes) are excluded, and the retarded OTOC contains only the "stringy" modes, as discussed in Section 5 of Ref. 29.

• As a consequence of the ladder identity [29], the regular OTOC (2.1) has a pole [cos(λ_L(q)/(4T))]⁻¹ in the prefactor. This pole leads to the sharp butterfly wavefront discussed in Section II B; however, the retarded OTOCs do not have a similar pole, due to an additional factor cos(λ_L(q)β/4) in the numerator. This may be explained by the following simple observation. Let us expand, e.g., f_1 in (3.10) and collect the OTOCs in the expression: there are two terms whose regularizations differ from the definition in (2.1). More precisely, they differ by an imaginary time evolution ±β/4, and in total give a factor e^{iλ_L β/4} + e^{−iλ_L β/4} = 2 cos(λ_L β/4). This factor exactly cancels the same factor in the denominator coming from the regular OTOC.

Finally, let us also comment on the finite N effects in OTOCs. In this paper, we work only at leading order in 1/N, namely within the validity of OTOC ∼ e^{λ_L t}/N. Physically, this corresponds to the early time regime (i.e., long before the scrambling time/saturation, e^{λ_L t}/N ≪ 1), where the important physics is the initial growth of scrambling and the propagation of the chaos wavefront. In this regime, the scrambling is simple and can be determined by the linear equations we derived. On the contrary, the physics at late times generally requires knowledge of the non-linearity, namely effects of higher order in e^{λ_L t}/N. In other words, the scrambling time is the time scale at which we have to worry about finite N effects. In terms of diagrams, that means we need to include non-melonic diagrams and diagrams with multiple ladders. A detailed discussion of the nonlinear effects in SYK-like models is an interesting future direction.

D. Numerical Results for Scrambling, Comparison to Energy Transport

In this section, we present the numerical results for λ_L ≡ λ_L(q = 0), D_* and D_chaos, and compare D_* and D_chaos to energy transport. The scrambling rate is plotted in Fig. 6, and the scrambling diffusion coefficients are plotted in Fig. 7. In Fig. 8, we compare the two characteristic velocities v_* and v_B.
We first comment on the U/t0 dependence of our results. Although the qualitative features are the same, the overall magnitude of λ_L does depend on U/t0, especially at high temperatures; the reason may be the breaking of conformal symmetry by temperature, as in the original SYK model. We therefore focus on the results for the largest U value used in practice, U/t0 = 200.

The numerical results show the following features:

1. At low temperatures, the scrambling rate λ_L grows quadratically as T²/E_c, which matches expectations from Fermi liquid theory. It is reported in Ref. 8 that in a Majorana version of the t-U model λ_L vanishes identically below some critical temperature, but our results do not support this.

5. At low temperatures v* > v_B, and at high temperatures v* < v_B; the crossing occurs at T/E_c ∼ O(1). This agrees with the qualitative features discussed in Sec. II B.

Next, we compare the scrambling diffusion coefficients D*, D_chaos to the energy diffusion coefficient D_E ≡ κ/C, where κ is the thermal conductivity and C is the heat capacity; κ and C are plotted in Fig. 9. Both κ and C have been computed in Ref. 4, and will be reproduced later in this paper. We find that at low temperatures T ≪ E_c, D* ≈ D_chaos ≈ D_E. At elevated temperatures, however, the long-distance scrambling diffusion coefficient D_chaos closely follows the energy diffusion coefficient D_E, while the short-distance scrambling diffusion coefficient D* behaves entirely differently. This suggests that D_chaos may arise from the degrees of freedom that are also responsible for energy transport. Both D_E and D_chaos grow linearly at low temperatures and saturate at high temperatures. The linear parts have approximately the same slope, but D_chaos saturates at a lower temperature than D_E; the difference in saturation may be due to lattice details, because at high temperature q1 is comparable to the inverse lattice spacing. This finding agrees with the conjectured equivalence between energy transport and scrambling propagation [2,21,22,24,25,29].

We also comment on the relation to the charge diffusion coefficient D_C = σ/K, where σ is the conductivity and K is the charge compressibility. According to Ref. 4, σ is an order-one number at low temperatures and σ ∼ E_c/T at high temperatures, while K is of order 1/U at all temperatures. This implies that D_C ∼ U at low temperatures and D_C ∼ t0²/T at high temperatures. As a result, D_C behaves entirely differently from D_E, D_chaos and D*.

To summarize, we have confirmed the following features of the t-U model, as summarized in Fig. 2: first, both the scrambling rate λ_L and the chaos propagation show a crossover from Fermi-liquid behavior to SYK maximal chaos, consistent with the qualitative picture in Sec. II; second, the long-distance scrambling diffusion coefficient D_chaos approximately equals the energy diffusion coefficient D_E over a wide range of temperatures.

IV. INTRODUCING PHONONS

Phonons play an important role in understanding the properties of strange metals. In this section, we propose a modification of the t-U model to include the effects of phonons, following Werman et al. [32,33]. For simplicity, we use the Einstein model of phonons, i.e., dispersionless phonons. To explore physics above the Mott-Ioffe-Regel (MIR) limit, we send the Debye frequency ω0 to zero. Since the dispersionless phonons do not propagate, we can model them by harmonic oscillators residing on each site. To reflect the fact that cuprates host a large number of phonon bands, we add N(N + 1)/2 types of phonons and couple them to the electrons through the Yukawa coupling X_{abx} c†_{ax} c_{bx}. Note that our phonon field is complex, so there are N² real degrees of freedom.

The Hamiltonian consists of four terms: the random hopping term, the onsite SYK-interaction term, the phonon Hamiltonian, and the electron-phonon coupling term. Here X_{abx} is the phonon field, satisfying X_{abx} = X†_{bax}; the other quantities have the same meaning as in the t-U model.
In this model the dimensionless phonon coupling is defined as g = α²/(M ω0² t0). We are interested in the physics when the temperature is much higher than the Debye frequency, so we consider the limit ω0 → 0 with g held fixed. The system is tuned to half filling by setting the chemical potential µ to zero. Readers may be concerned that the electron-phonon interaction can shift µ; however, our model has the property that the electron-phonon interaction conserves the flavor indexed by a, so the tadpole diagrams in the self-energy are 1/N suppressed. Consequently, at leading order in 1/N the chemical-potential shift is zero.

If U = 0, the system is analytically soluble (see Appendix E) and reduces to the electron-phonon system described in Ref. 32: for T ≪ t0/g the electron-phonon scattering is weak and the electronic quasiparticles are well defined; for T ≫ t0/g, the phonons act like static impurities with density proportional to T (from the phonon Bose factor), and this leads to a linear-in-T resistivity. If g = 0, the system is described by a heavy-Fermi-liquid-to-SYK crossover [4], which occurs at a temperature T ∼ E_c = t0²/U. When both U ≠ 0 and g ≠ 0, as we will see below, the competition between the electron-phonon and electron-electron interactions is set by the ratio gt0/U, as illustrated in Fig. 2: if gt0/U ≫ 1 the system first enters the electron-phonon chaos regime as we raise the temperature, and vice versa.

A. Keldysh Action and Equations of Motion

In this section we discuss the Keldysh formalism for the above Hamiltonian and explain how to solve for the Green's function. It turns out that, despite the addition of a large number of phonons, the problem is still as tractable as the t-U model we started with.

We perform the disorder average over t, U in the Keldysh action and integrate out the quadratic phonon field X. Next we introduce the flavor-averaged onsite Green's function iG^{ss'}_x(t, t') = (1/N) Σ_a ⟨c_{axs}(t) c†_{axs'}(t')⟩ and the Lagrange multiplier Σ^{ss'}_x(t, t') to impose the constraint. We then obtain an action consisting of the free-phonon contribution, the Keldysh action (3.6) of the t-U model, S^{tU}_K, and the electron-phonon interaction, of the form S_e-ph ∝ Σ_{ss'} ∫ d²t G^{ss'}_x(t, t') G^{s's}_x(t', t) D^{s's}(t', t), where D^{s's} is the free-phonon propagator. These can also be written in terms of the R, A, K components using the Keldysh rotation. In thermal equilibrium, the phonon Green's functions obey the fluctuation-dissipation theorem, D^K(ω) = coth(ω/2T) [D^R(ω) − D^A(ω)].

In the limit T ≫ ω0, the D^K component is dominant (illustrated numerically below), and the action S_e-ph reduces to a form with the same structure as an on-site random hopping term.

Variation of the above action yields the saddle-point equations (4.13). The equations of motion (4.13) have the same structure as their t-U model counterparts, except that the electron hopping term t0² is enhanced by the phonons to t0² + g t0 T. Physically, the similarity reflects the observation that in the limit ω0 ≪ T, the electron-phonon term plays the same role as a random on-site t term in the t-U model.

The above equations are written in dimensionless form in Appendix F; the two important dimensionless parameters are gt0/U and T/E_c.
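As a quick numerical illustration of the claim that D^K dominates for T ≫ ω0 (a sketch based on the bosonic fluctuation-dissipation factor quoted above, not on the paper's code):

```python
# Bosonic FDT: D^K(w) = coth(w/2T) [D^R(w) - D^A(w)].  For w ~ omega_0 << T
# the prefactor coth(w/2T) ~ 2T/w >> 1, so the Keldysh component dominates.
import numpy as np

T, w0 = 1.0, 1e-3
w = np.array([0.5 * w0, w0, 2 * w0])
print(1.0 / np.tanh(w / (2 * T)))   # ~ 2T/w = [4000, 2000, 1000], all >> 1
```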
A. Deriving Transport Coefficients

To leading order, the transport coefficients can be derived by expanding the action (4.6) around the saddle-point solution to quadratic order in the U(1) phase fluctuations φ and the time-reparametrization fluctuations, and reading off the transport coefficients as the coefficients of the quadratic terms [4]. For the purpose of a diagrammatic expansion, however, we use Kubo formulas.

Although the Hamiltonian Eq. (3.1) breaks spatial translation symmetry, it respects time translation and U(1) rotation. Consequently we can still use the Noether procedure to extract the current operator from the hopping term Eq. (3.2), although the current operator now has explicit spatial dependence. If we perform an infinitesimal inhomogeneous symmetry transformation, for example a U(1) rotation c_x → c_x e^{iε_x}, the change of the action yields the current operator j_{xx'} on the link xx'. Following that procedure, we obtain the charge current j_C and the heat current j_E. Note that because the phonon modes are purely local, the phonon degrees of freedom make no direct contribution to the currents. The on-site current-current polarization function in imaginary time is defined as usual, with the average taken over states and disorder, and the conductivities per flavor are given by Kubo formulas. For reference, we also record the leading-order formulas for the DC electrical conductivity, the DC thermal conductivity, and the optical conductivity.

B. DC Resistivity

We calculate the DC resistivity using the saddle-point Green's function and Eq. (5.7). The resistivity results are shown in Figs. 10 and 11. The resistivity is a dimensionless quantity, so on dimensional grounds it should be a function of gt0/U and T/E_c only. As a sanity check, we calculated ρ at fixed gt0/U and T/E_c while varying U/t0, and found the results independent of U/t0 to good precision.

For g = 0, the resistivity increases quadratically at low temperature and becomes linear above the coherence scale E_c = t0²/U, reproducing the result of Ref. 4. In the regime T ≪ E_c, the resistivity curve can be approximated by the U = 0 result, and the approximation works better for larger g. Turning to the high-temperature regime T > E_c, we see from Fig. 10 that the resistivity is linear in T. We denote the high-temperature slope of ρ versus T/E_c by k_C and plot it in Fig. 11. When gt0/U ≫ 1, we recover the U = 0 value. When gt0/U = 0, we get k_C = 1.129, which agrees with the pure SYK result k_C = 2/√π = 1.128 of Ref. 4. We notice that as g increases, k_C approaches the U = 0 value: the effect of the SYK interaction is suppressed by the phonons. Interestingly, we find that k_C can be fitted rather well by a three-parameter function f(gt0/U; a, b, c), defined in Eq. (5.10), with a = 1.15, b = 0.526, c = 1.130.

C. Optical Conductivity

The real part of the optical conductivity is shown in Fig. 12. By dimensional analysis, the optical conductivity σ(ω) should be a function of ω/E_c, gt0/U and T/E_c, and our results confirm this. The optical conductivity has a peak at ω = 0, but instead of the Lorentzian decay characteristic of a Drude peak, it falls off as 1/ω at large ω, reflecting the 1/√(Uω) SYK spectral weight at high frequencies. Nonzero temperature T and phonon coupling g broaden and lower the peak, but the curve eventually follows the 1/ω behavior. We see that the low-frequency behavior of σ(ω) is dominated by the phonons, while the high-frequency behavior still follows from SYK.
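One simple way to check the reported 1/ω tail is to fit a power law to the large-frequency part of σ(ω) on a log-log scale; a minimal sketch (the arrays are synthetic placeholders, not the paper's data):

```python
# Fit the high-frequency power law of the optical conductivity; an exponent
# close to -1 confirms the 1/omega tail discussed above.
import numpy as np

def tail_exponent(omega, sigma, cutoff=10.0):
    mask = omega > cutoff                  # keep only the large-omega tail
    slope, _ = np.polyfit(np.log(omega[mask]), np.log(sigma[mask]), 1)
    return slope                           # expected to be approximately -1
```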
D. Resistivity Saturation

In Ref. 33, it was found that if phonons are placed on the bonds between lattice sites and coupled to the electrons at the two ends of each bond, there is a resistivity saturation effect. This can easily be seen in our model.

E. Thermal Conductivity and the Phonon Drag Effect

Naively, the thermal conductivity can be calculated using the leading-order result (5.8), which includes only the effects of electrons. The results are shown in Figs. 13 and 14 (here κ0 denotes what is called κ_electron in Fig. 2). The results for the electron thermal conductivity are similar to those for the DC resistivity. We can likewise define the high-temperature slope k_E of T/κ0 versus T/E_c. When U = 0, k_E = 3gt0/(2πU). When gt0/U = 0, we get k_E = 0.92, agreeing with k_E = 16/π^{5/2} = 0.915 of Ref. 4. We also find that k_E can be fitted by k_E = f(gt0/U; a, b, c) + 3gt0/(2πU), where f is defined by Eq. (5.10) and a = 0.99, b = 0.81, c = 0.932.

However, the thermal conductivity calculated so far does not include the contribution of the phonons. The electrons are strongly incoherent due to the electron-phonon and SYK interactions. However, because there are many more phonons, O(N²), than electrons, O(N), the interaction effects on the phonons are diluted, and the phonons remain well-defined quasiparticles with a long lifetime of order O(N). If we excite an electron in the system, it quickly decays and transfers its energy to the phonons. Because the phonons are long-lived quasiparticles, we expect them to contribute significantly to transport. This phenomenon is called "phonon drag" and has been studied in systems without SYK interactions [32]; the results there show that phonon drag is important for energy transport but not for charge transport. In the rest of this section, we work out the phonon-drag correction (denoted κ1 here, and κ_phonon in Fig. 2) to the DC thermal conductivity.

We recall that there is no intrinsic phonon conductivity of O(N²) in our model, because the phonons are purely local; the phonons therefore contribute only via the drag effect noted above.

Phonon Self-Energy

As a preparation, we need the phonon self-energy to obtain the phonon lifetime. Because there are more phonons than electrons, the phonon self-energy is of order O(1/N), and so is the phonon decay rate. The dressed phonon propagator follows from resumming the self-energy Σ_ph, whose leading term in 1/N is given in Fig. 15. Performing the Matsubara summation, analytically continuing to real frequency iω → ω + iδ, and taking the imaginary part, we obtain Im Σ_ph in terms of the phonon coupling g and the optical conductivity σ(ω); in the last line we define the phonon decay rate Γ. We see that Im Σ_ph is small, implying that the phonons are long-lived (lifetime ∼ O(N)) quasiparticles.

Correction to the Thermal Conductivity

In this section we calculate the phonon-drag contribution to the thermal conductivity, whose diagrams are given in Fig. 16. Naively, these diagrams are subleading in 1/N, but because the phonons are long-lived, with decay rate Γ ∼ O(1/N), the product of two phonon propagators receives an O(N) enhancement; taking this into account, the two diagrams in Fig. 16 are of the same order as the leading-order diagrams.
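Heuristically, the O(N) enhancement can be seen from the frequency integral of two phonon propagators sharing a small width Γ ∼ O(1/N); a schematic Lorentzian estimate (not the paper's exact expression):

```latex
\[
  \int d\omega \,\bigl|D(\omega)\bigr|^{2}
  \;\propto\; \int \frac{d\omega}{(\omega-\omega_{0})^{2}+\Gamma^{2}}
  \;=\; \frac{\pi}{\Gamma}
  \;\sim\; O(N) \qquad \text{for } \Gamma \sim O(1/N).
\]
```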
The current-current correlation function is given by Eq. (5.19), with the vertex function V defined in Eq. (5.20). In Eqs. (5.19) and (5.20), the prefactors are obtained as follows:

1. Each internal phonon vertex is associated with a factor (−α), and each internal hopping vertex with a factor (−t0).

2. The time derivative in the current operator Eq. (5.3) yields an extra i due to the Wick rotation t → −iτ, which leads to an overall minus sign in Eq. (5.19).

4. ⟨j_E j_E⟩ contains four terms. By simple inspection, each term in ⟨j_E j_E⟩ leads to two diagrams, so in total there should be eight diagrams, but the V V̄ product in Eq. (5.19) contains only four. The missing four terms are those obtained by exchanging ω1, ω2 in Eq. (5.20); the exchange does not affect the value of Eq. (5.20) in the DC limit, so we simply account for it by a factor of 2.

The two diagrams in Fig. 16 should differ by a minus sign: the first diagram comes from the contraction ∂_τ c† ∂_τ c + h.c., and the second from ∂_τ c ∂_τ c + h.c.; the minus sign is due to the opposite frequency sign conventions for c and c†.

We now evaluate Eq. (5.19) by performing the Matsubara summation over Ω. Next, we use Eq. (5.18), apply the Kubo formula, expand to lowest order in ω0 and rewrite various constants, obtaining an expression in terms of the DC conductivity σ. The final step is to evaluate the vertex function |V(ω0 + iδ, ω0 − iδ)| to linear order in ω0.

When performing the Matsubara summation in V, the resulting integrand has three branch cuts in the complex µ-plane. The contribution we seek comes from integrating along the upper half of the highest cut and the lower half of the lowest cut, which amounts to making all propagators retarded or advanced; we then expand in ω0, integrate by parts and symmetrize the integral, because Im[G_R⁴] is odd. In the resulting integral, the tanh(βε/2) term dominates over the β n_F(ε) n_F(−ε) ε term, because the former has support over the whole spectrum [−Λ, Λ], with Λ ∼ min(U, √(t0² + gTt0)), while the latter has support only over [−T, T] and is therefore suppressed by higher powers of T/t0. The contributions to V from the other contours in the complex µ-plane can all be arranged to carry a factor n_F(ε + ω0) − n_F(ω0), and are suppressed for the same reason.

In summary, the phonon-drag contribution to the thermal conductivity is given by Eq. (5.24). As a sanity check, we inspect the U = 0, T ≫ t0 limit, Eq. (5.27); further assuming gT ≫ t0, we get κ1 ∼ (t0/g)(t0/T)², which agrees with Ref. 32.

Results

We now present the numerical results for our model. In Fig. 17, we compare the phonon-drag thermal conductivity κ1 to the leading-order value κ0 at several values of g. We see that κ1 ≫ κ0 at low temperatures, while κ0 ≫ κ1 at higher temperatures.

FIG. 17. A comparison between the electron thermal conductivity κ0 and the phonon-drag thermal conductivity κ1.

In Fig. 18, we plot the total thermal conductivity κ = κ0 + κ1. At low temperature there is a hierarchy: κ is positively correlated with g, and further inspection shows that κ is roughly linear in g for large gt0/U. At very low temperatures, (gt0/U)(T/E_c) ≪ 1, this linear-in-g behavior can be partially understood from Eq. (5.27), which gives κ1 ∼ gt0. It is peculiar, however, that the hierarchy persists to higher temperatures where (gt0/U)(T/E_c) > 1. As the temperature rises, the phonon-drag contribution dies off, and the hierarchy inverts at around T/E_c ∼ 1; the high-temperature behavior is mostly dominated by the electron contribution.

In Fig. 19, we inspect the violation of the Wiedemann-Franz law by plotting the Lorenz ratio L = κρ/T. For g = 0, we find a crossover from L = π²/3 at low temperatures to L = π²/8 at high temperatures, in agreement with Ref. 4. For nonzero g, we see a huge enhancement of the Lorenz ratio, and the Wiedemann-Franz law is strongly violated.
VI. THERMODYNAMICS OF THE PHONON MODEL

In this section, we compute the entropy and the heat capacity using the imaginary-time formalism. At zeroth order, the entropy is dominated by the O(N²) species of nearly free phonons, but on top of this there is an O(N) piece contributed by the electrons; it is this electron contribution that we calculate. The grand potential (action) of the system consists of two parts: the potential of the N(N + 1)/2 species of free phonons, and the potential of the electrons, given by Eq. (6.1), where N_site is the number of lattice sites. Variation of Eq. (6.1) yields the saddle-point equation of motion, which can be solved numerically by iteration with fast Fourier transforms (see Appendix A). In principle, we can then substitute the saddle-point solution back into Eq. (6.1) and compute the grand potential. Since we work at zero chemical potential, the grand potential coincides with the free energy F, and we may compute the entropy as S = −∂F/∂T = −∂G/∂T. However, computing the heat capacity C = T ∂S/∂T would then require a second numerical differentiation, which carries a large error. The solution is to derive an analytic expression for the entropy, so that we can evaluate the entropy directly and differentiate only once to obtain the heat capacity.

The numerical results for the entropy and heat capacity are shown in Fig. 20. We find that the entropy follows a universal function S = S̄((T/E_c)/(1 + (gt0/U)(T/E_c))). The universal function S̄ is the same as the one in Ref. 4: it grows linearly at low temperatures and saturates at high temperatures. With the phonons introduced, the entropy is reduced.

A possible reason for the entropy reduction is the following. In the original SYK model, the nonzero entropy comes from the exponentially many low-lying electron states. In our model, however, the phonons are well-defined quasiparticles, so the low-energy sector contains only polynomially many phonon states. Generically, coupling between the electrons and the phonons makes the low-lying states sparser and thus reduces the entropy.

VII. SCRAMBLING IN THE PHONON MODEL

In this section, we discuss the scrambling properties of the phonon model.

A. Electron Scrambling

First, we consider the scrambling of the electrons, which can be computed from the electron retarded OTOC (3.10). Much of the discussion of the t-U model carries over to the phonon model; the only addition is a diagram with a vertical phonon line, as in Fig. 21. In the zero-Debye-frequency limit, this new diagram simply multiplies by a constant g t0 T. As a result, the fast-growing part of the OTOC satisfies an integral equation of the form F(q, Ω, ω) = L(q, Ω, ω) [1 + g t0 T F(q, Ω, ω)].

Following the previous procedures, we compute the scrambling rate λ_L, the short-distance diffusion coefficient D*, and the long-distance diffusion coefficient D_chaos. The results are shown in Figs. 22-25. For the scrambling rate λ_L (see Fig. 22), we find that λ_L/T is a universal function of the combination (T/E_c)/(1 + (T/E_c)(gt0/U)), so the value of λ_L can be deduced from Fig. 6. This result can be understood from two observations: first, in the equations of motion (4.13), g appears only in the combination t0² + g t0 T; second, λ_L is obtained by solving the Bethe-Salpeter equation at zero momentum, where it can be shown that g appears as t0² + g t0 T by re-expanding L(q, Ω). λ_L decreases with increasing g because the phonons are still well-defined quasiparticles, and coupling to them slows down scrambling.
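A simple numerical test of the quoted universality is to rescale the temperature axis and overlay the λ_L/T curves for different g; a minimal sketch (array names are placeholders):

```python
# Data-collapse check: plot lambda_L/T against
# x = (T/Ec) / (1 + (g*t0/U)*(T/Ec)); curves for different g should then
# collapse onto a single universal curve (cf. Fig. 22).
def rescaled_temperature(T, t0, U, g):
    Ec = t0**2 / U
    x = T / Ec
    return x / (1.0 + (g * t0 / U) * x)

# for each g:  plt.plot(rescaled_temperature(T, t0, U, g), lamL / T)
```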
As for the scrambling diffusion coefficients (see Fig. 23 for D* and Fig. 24 for D_chaos), we did not find a universal function as in the case of λ_L. This is because the phonon coupling term differs from the electron hopping term at nonzero momenta. It is interesting that the two diffusion coefficients respond to the phonons in opposite ways: phonons assist the short-distance diffusion of scrambling but suppress scrambling propagation at long distances.

Because coupling to the phonons reduces λ_L, we expect v* to be larger than in the t-U model, so that wavefront OTOC propagation (region B in Fig. 3) is diminished at small g and completely suppressed at large g. In Fig. 25, we show a comparison between v* and v_B at gt0/U = 1 as an example: at this value of g, v* > v_B at all temperatures, and the OTOC therefore always propagates diffusively.

A meaningful comparison of the chaos diffusion constants to the energy diffusion constant is not available here, because the quasi-free phonons dominate the heat capacity and D_E ∼ O(1/N).

In an electron-phonon system with only the electron-phonon interaction, it was found that both the electrons and the phonons have a scrambling rate of about the phonon decay rate, (1/N)(ω0²/T) [32]. It is possible that a similar effect in our model causes F to develop a pole at a small imaginary frequency ω = iλ′, λ′ ∼ O(1/N). However, because t* ∝ ln(N) and λ′ t* ∝ ln(N)/N, this extra pole has no impact on the behavior of h(t) in the large-N limit.

VIII. CONCLUSIONS

This work has described the transport and chaos properties of a model electronic system with strong electron-electron and electron-phonon interactions. We did not include phonon-phonon interactions, and the feedback of the electrons on the phonon dynamics is weak, so the phonons act essentially as a heat bath of oscillators at a typical frequency ω0. All our results here are for T ≫ ω0, as the phonons have little influence at lower T.

The electron-electron interactions were described by SYK islands with interactions of strength U, and hopping between islands of strength t0 (see Fig. 1). The electron-phonon interactions were characterized by a dimensionless coupling g. The properties of the different regimes of chaos and transport are summarized in Fig. 2 and are controlled by the dimensionless ratio gt0/U. For gt0 ≪ U, the phonons have a minor influence, and the transport, dominated by the electron-electron interaction, is similar to that described by Song et al. [4]. We computed here the chaos properties across the crossover from the heavy Fermi liquid to the incoherent metal at T ∼ t0²/U. The chaos propagation is controlled by two velocities, v* and v_B, as shown in Fig. 3.

• In the heavy-Fermi-liquid regime, v* > v_B, and the chaos propagation is diffusive, as in Eq. (2.12). The Lyapunov rate λ_L ∼ T²/E_c is much smaller than the chaos bound of 2πT. The chaos diffusion is characterized by D*, and we found D* ≈ D_E, the energy diffusion constant.

• In the incoherent-metal regime, v* < v_B, and there is now a wavefront of chaos propagation, as in Eq. (2.14). The Lyapunov rate λ_L is close to the chaos bound of 2πT. The chaos diffusion is characterized by D_chaos, and we again found D_chaos ≈ D_E, the energy diffusion constant.

With increasing electron-phonon coupling g, the slope of the linear-in-T resistivity in the incoherent-metal regime changes; this change is described by the scaling plot in Fig. 11. The corresponding plot for the electron thermal conductivity is in Fig. 14.
For the Lyapunov rate λ_L(q = 0), the entire effect of the electron-phonon coupling is to replace T/E_c by (T/E_c)/(1 + (T/E_c)(gt0/U)), where E_c = t0²/U; the resulting rate can then be read off from Fig. 6.

For gt0 ≫ U, the scattering of the electrons off a heat bath of phonons dominates the transport. The density of thermally excited phonons is proportional to T, and this leads to a linear-in-T resistivity. Nevertheless, the properties of this regime are quite different from those of the incoherent metal described above for small gt0/U. The electron-phonon scattering is essentially elastic, and so does not contribute significantly to chaos. Consequently, we find that the Lyapunov exponent is much smaller than the chaos bound and is controlled by the weaker electron-electron interactions: λ_L ∼ (U/(gt0)) T in the high-T regime. We also have v* > v_B, so the chaos propagation is diffusive, as in Eq. (2.12). The nearly elastic electron-phonon scattering also implies that while the DC conductivity is dominated by electron-phonon scattering, the optical conductivity is not: as shown in Fig. 12, the optical conductivity exhibits a 1/ω behavior, which is characteristic of the local incoherent dynamics of the pure SYK model.

The limiting results above for small and large gt0/U illustrate one of our main points: there is a fundamental distinction between the linear-in-T resistivity of the electron-electron-dominated and electron-phonon-dominated regimes. Experimentally, this distinction can be detected by comparing the DC and optical conductivities. Strong electron-phonon scattering increases the DC resistivity but has little effect on the 1/ω optical conductivity; in contrast, the critical electron-electron interactions described by SYK physics connect the DC and optical responses via ω/T scaling. If we increase the electron-phonon coupling g, the electron scattering rate increases without apparent bound, as does the DC resistivity (modulo the saturation effects discussed in Section V D). But this increased electron scattering rate does not show up in the OTOC: the Lyapunov rate is far from maximal at large g, with λ_L ∼ T (U/(gt0)) actually decreasing with increasing g. In SYK physics, by contrast, the same rate ∼ T shows up in both the resistivity and the OTOC. Alternatively, the distinction can be diagnosed via the resistivity-saturation effect: if the linear-in-T resistivity is due to phonons, it will saturate at the MIR limit [32,33], since in a generic model there are phonons sitting on both sites and bonds, whereas linear-in-T resistivity originating from the SYK interaction can easily surpass the MIR limit. In future work, it would be interesting to treat the electron-phonon and phonon-phonon interactions in a more self-consistent manner; one can then expect an "electron-phonon soup" [31] in which the strong dependence on the electron-phonon coupling disappears, and both transport and chaos are determined by a common rate ∼ T.

Finally, it is useful to compare our results with recent studies of operator spreading using random unitary circuits [38-42], which report a broadening of the chaos wavefront. In our model, the phonons remain essentially free oscillators, and the N² oscillators on each island can have consequences similar to a random unitary perturbation. Notably, we do find a diffusive broadening of the chaos wavefront, with v* > v_B, as the electron-phonon coupling is increased. Moreover, with strong electron-phonon coupling, the Lyapunov rate is
much smaller than the maximal rate, as summarized in Fig. 2. However, when the chaos is near maximal, at weak electron-phonon coupling, the sharp chaos wavefront is preserved.

Appendix A: Numerical Implementation

Evaluating G_A(z) directly from the spectral function is accurate but slow. Fortunately, we found that the error between the two methods for G_A(z) depends only weakly on Im z, and it can be accurately interpolated as a function of Re z and Im z. Hence, we can use the spectral-function method to calibrate the Fourier-transform method, and then use the Fourier-transform method for the calculation.

Once the matrix elements are available, we discretize M_{ω,q} in frequency space. The frequency unit Ω0 is the same as in the previous section, and the highest frequency is LΩ0; we found L = 3000 sufficient in practice.

The following algorithm computes the scrambling rate λ_L and the short-distance scrambling diffusion coefficient D*. For each λ = −iω and q, we compute H(λ, q), the absolute value of the smallest eigenvalue of M_{iλ,q}. We then numerically minimize H(λ, q) over λ to obtain λ_L. To extract the chaos diffusion coefficient, we calculate λ_L(q) for several small values of q and fit for D* using Eq. (2.13).

To compute the long-distance scrambling diffusion coefficient D_chaos, we instead fix λ_L = 2πT and minimize H(2πT, i|q1|) over |q1| to obtain |q1| and D_chaos. As for v*, we compute it by varying λ_L slightly and using v* = Δλ_L(q1)/Δq1, where Δλ_L = 0.005 × 2πT.

Computing the Entropy and Heat Capacity

The entropy S is computed using Eq. (6.5) and the imaginary-time Green's function. We expect non-universal corrections of order t0/U and T/U, so it is preferable to take U as large as possible. However, we found that Eq. (6.5) converges poorly at small T/U. To solve this problem, we observed that both the poor convergence and the non-universal corrections overestimate S; so, at fixed T/E_c, we computed S for various U and took the minimum, as shown in Fig. 20. To compute the heat capacity C, we interpolated S(T/E_c) using a third-order spline and then performed a single numerical differentiation (see the sketch below).

Appendix B: Keldysh Formalism

In the Keldysh formalism, the time contour of the path integral is doubled to include both a forward branch and a backward branch (see Fig. 26). The action is thus S_K = ∫_{C_β} dt d^d x L(φ, ∂φ), where the field φ has support on the whole contour C_β. Since we are interested only in equilibrium physics, we can send the initial time t0 → −∞ and decouple the imaginary branch. The action can then be written in terms of fields on the ± branches as S_K = S_0[φ_+] − S_0[φ_−], where S_0 is the original action and the minus sign is due to the opposite orientations of the time integration.

The ± representation contains a redundancy, Eq. (B2); it is therefore more convenient to organize the fields in the "classical-quantum" or r-a notation. We write correlators as G^{α1⋯αn} = ⟨φ^{α1} ⋯ φ^{αn}⟩, where αi = r, a. There are three independent two-point functions, given in Eq. (B4), where the subscripts on the right-hand side refer to bosons/fermions respectively: G_R (G_A) is the usual retarded (advanced) Green's function, and G_K is called the Keldysh Green's function. Here we have written out the path ordering explicitly for emphasis; in the second case, the path ordering exchanges the two operators and generates a minus sign.
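Returning to the entropy-to-heat-capacity step above, a minimal sketch of the spline interpolation and single differentiation (not the paper's code):

```python
# C = T dS/dT from tabulated entropy: interpolate S(T) with a cubic
# (third-order) spline, then differentiate the spline once.
import numpy as np
from scipy.interpolate import CubicSpline

def heat_capacity(T, S):
    spline = CubicSpline(T, S)    # cubic spline through the S(T) data points
    return T * spline(T, 1)       # evaluate the first derivative: C = T dS/dT
```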
FIG. 6. The zero-momentum scrambling rate λ_L(q = 0) plotted versus temperature T/E_c (U = 200 t0). The inset zooms into the low-temperature region. The scrambling rate λ_L grows as T²/E_c at low temperatures and saturates to a T-linear curve at high temperatures.

FIG. 7. The scrambling diffusion coefficients D*, D_chaos and the energy diffusion coefficient D_E plotted against temperature T/E_c (U = 200 t0). The inset shows the low-temperature region. D_E and D_chaos are roughly equal, while D* differs significantly.

FIG. 8. The two characteristic velocities v* and v_B plotted against temperature, in units of a E_c, where a = 1 is the lattice spacing. U/t0 = 200. At low temperatures (see inset) v* > v_B, and at high temperatures v* < v_B.

FIG. 9. The thermal conductivity κ (left) and the heat capacity C (right) plotted against temperature T/E_c. The insets show the low-temperature region. The thermal conductivity is computed at U = 200 t0; the heat capacity is computed by combining data for various U/t0.

FIG. 10. The resistivity ρ = 1/σ_DC plotted as a function of the dimensionless temperature T/E_c, for different values of g at U/t0 = 200. The solid lines are guides to the eye; the dashed lines show the U = 0 values for comparison. The inset zooms into the low-temperature region.

FIG. 11. The slope k_C plotted against gt0/U for different values of U. The curves for different U collapse onto a single curve, confirming the scaling property. The dashed line is the fit discussed in the main text.

FIG. 12. The real part of the optical conductivity at U = 200 t0. The large-frequency part approximately follows a 1/ω trend.

FIG. 13. The inverse thermal conductivity (electron part) T/κ0 plotted as a function of the dimensionless temperature T/E_c, for different values of g at U/t0 = 200. The solid lines are guides to the eye; the dashed lines show the U = 0 values for comparison. The inset zooms into the low-temperature region.

FIG. 16. The diagrams for phonon drag. Solid arrowed lines denote fermions, wavy lines denote phonons, and dashed lines denote contractions of t^{ab}_{xx'}.

FIG. 19. The Lorenz ratio L = κρ/T plotted versus temperature for different g at U = 200 t0, on a log-log scale. As a reference, the g = 0 curve has L = π²/3 at low temperatures and L = π²/8 at high temperatures.
FIG. 20. The entropy S plotted against temperature T for various g. The inset shows the heat capacity at g = 0. The behavior of the entropy follows a universal function S̄((T/E_c)/(1 + (gt0/U)(T/E_c))), represented by the curves.

FIG. 22. The scrambling rate λ_L plotted versus the rescaled temperature for different g at U = 200 t0. The inset zooms into the low-temperature sector. After rescaling, the data points for different g collapse onto a universal curve.

FIG. 23. The short-distance scrambling diffusion coefficient D* plotted versus temperature for different g at U = 200 t0. The inset zooms into the low-temperature sector.

FIG. 24. The long-distance scrambling diffusion coefficient D_chaos plotted versus temperature for different g at U = 200 t0. The inset zooms into the low-temperature sector.

FIG. 25. The two characteristic velocities v* and v_B plotted against temperature, in units of a E_c, where a = 1 is the lattice spacing. Here gt0/U = 1 and U/t0 = 200. At all temperatures v* > v_B.

FIG. 14. The slope k_E plotted against gt0/U for different values of U. The curves for different U collapse onto a single curve, confirming the scaling property. The dashed line is the fit discussed in the main text.
Examination of fully automated mammographic density measures using LIBRA and breast cancer risk in a cohort of 21,000 non-Hispanic white women Background Breast density is strongly associated with breast cancer risk. Fully automated quantitative density assessment methods have recently been developed that could facilitate large-scale studies, although data on associations with long-term breast cancer risk are limited. We examined LIBRA assessments and breast cancer risk and compared results to prior assessments using Cumulus, an established computer-assisted method requiring manual thresholding. Methods We conducted a cohort study among 21,150 non-Hispanic white female participants of the Research Program in Genes, Environment and Health of Kaiser Permanente Northern California who were 40–74 years at enrollment, followed for up to 10 years, and had archived processed screening mammograms acquired on Hologic or General Electric full-field digital mammography (FFDM) machines and prior Cumulus density assessments available for analysis. Dense area (DA), non-dense area (NDA), and percent density (PD) were assessed using LIBRA software. Cox regression was used to estimate hazard ratios (HRs) for breast cancer associated with DA, NDA and PD modeled continuously in standard deviation (SD) increments, adjusting for age, mammogram year, body mass index, parity, first-degree family history of breast cancer, and menopausal hormone use. We also examined differences by machine type and breast view. Results The adjusted HRs for breast cancer associated with each SD increment of DA, NDA and PD were 1.36 (95% confidence interval, 1.18–1.57), 0.85 (0.77–0.93) and 1.44 (1.26–1.66) for LIBRA and 1.44 (1.33–1.55), 0.81 (0.74–0.89) and 1.54 (1.34–1.77) for Cumulus, respectively. LIBRA results were generally similar by machine type and breast view, although associations were strongest for Hologic machines and mediolateral oblique views. Results were also similar during the first 2 years, 2–5 years and 5–10 years after the baseline mammogram. Conclusion Associations with breast cancer risk were generally similar for LIBRA and Cumulus density measures and were sustained for up to 10 years. These findings support the suitability of fully automated LIBRA assessments on processed FFDM images for large-scale research on breast density and cancer risk. Supplementary Information The online version contains supplementary material available at 10.1186/s13058-023-01685-6. 
Introduction

Mammographic density, or the extent of the breast that appears radiopaque on a mammogram, is an established breast cancer risk factor [1, 2]. Clinically, breast tissue composition is assessed visually by radiologists and categorized according to the American College of Radiology BI-RADS atlas as: (a) entirely fatty; (b) scattered areas of fibroglandular density; (c) heterogeneously dense, which may obscure small masses; and (d) extremely dense, which lowers the sensitivity of mammography [3]. While BI-RADS density categories are routinely recorded on screening mammography reports because of the potential for dense tissue to mask the presence of breast cancer, quantitative measures are preferred for research studies because they provide more information to improve statistical power, robustness and reproducibility in risk prediction. Early research studies manually assessed breast density from film-screen mammograms [1, 4-6]. Over the last two decades, conventional film mammography has been replaced with full-field digital mammography (FFDM), obviating the need to digitize film mammograms prior to the application of computer-assisted methods. Several studies have found that quantitative density assessments from FFDM images also are strongly associated with breast cancer risk [7-11].

There are multiple methods for quantitating breast density from FFDMs [12]. One of the most common and established methods used in research is Cumulus, a semi-automated tool that facilitates visual thresholding by a trained reader to segment the dense and non-dense areas of the breast [13]. The requirement for thresholding by a trained reader can be an impediment to conducting large-scale studies of thousands of women. More recently, several fully automated methods have been developed. Two commercial automated and validated tools, Volpara [14] and Quantra [15], estimate volumetric density but require the raw 'for processing' FFDM images, which are not routinely archived for clinical care. Several commercial tools also are available for automatically quantitating area-based density on processed 'for-presentation' FFDM images, including Densitas and DenSeeMammo [16, 17]. This study focused on the Laboratory for Breast Radiodensity Assessment (LIBRA) area-based density assessment tool because it is fully automated and publicly available, and can be used on both raw and processed FFDM images [18]. Briefly, LIBRA delineates the breast region by using edge-detection algorithms and applies fuzzy c-means clustering to partition the breast region into gray-level intensity clusters, which are then aggregated into the final dense tissue segmentation. LIBRA density measures have been reported to be associated with breast cancer risk in small case-control studies [7, 9, 11, 18]. However, additional studies in large cohorts are needed to evaluate associations with long-term breast cancer risk (e.g., 5- or 10-year risk) to further validate this automated tool for use in large-scale breast density studies.

The aim of this study was to examine the association between density measures by LIBRA and long-term breast cancer risk in a large cohort of over 21,000 women undergoing screening mammography by FFDM and followed for up to 10 years. We also compare associations with breast cancer risk obtained using LIBRA density measures with those using Cumulus in the same breast cancer screening cohort.
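To illustrate the fuzzy c-means clustering step described above, below is a toy sketch on one-dimensional pixel intensities. This is a schematic implementation of the generic algorithm only, not LIBRA's actual code, which additionally performs breast segmentation and aggregates the intensity clusters into the final dense-tissue map:

```python
# Toy fuzzy c-means on mammogram pixel intensities (illustrative only).
import numpy as np

def fuzzy_cmeans(x, k=4, m=2.0, n_iter=100, seed=0):
    """Cluster 1-D intensities x into k fuzzy clusters with fuzzifier m."""
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(np.ones(k), size=x.size)    # memberships; rows sum to 1
    for _ in range(n_iter):
        w = u ** m
        centers = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)  # weighted means
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12       # pixel-center distances
        u = d ** (-2.0 / (m - 1.0))               # standard FCM membership update
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

# Hypothetical usage: count pixels whose dominant cluster is among the
# brightest as "dense"; percent density = dense pixels / breast pixels.
# centers, u = fuzzy_cmeans(breast_pixels.ravel().astype(float))
```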
Setting

This study is ancillary to a genome-wide association study of mammographic density [19]. The parent study included non-Hispanic white female participants of the Research Program in Genes, Environment and Health (RPGEH) who completed a health survey, provided a saliva sample for genotyping, and had at least one archived screening FFDM between 2003 and 2013. The RPGEH was established by the Division of Research, Kaiser Permanente Northern California (KPNC). Briefly, the RPGEH resource enables research on the genetic and environmental determinants of common, age-related complex health conditions. The resource links together surveys, biospecimens and derived data, with longitudinal data from electronic health records (EHRs), on a cohort of approximately 400,000 consenting adult KPNC members. Genome-wide genotyping was performed on DNA extracted from saliva samples of more than 100,000 RPGEH participants enrolled before 2010 [20].

Mammograms

The EHR was used to identify potentially eligible screening FFDMs in the cohort. Processed FFDMs in the KPNC imaging archive came from 37 different KPNC mammography facilities, with 1-5 machines per facility. Over 90% of FFDM machines were manufactured by Hologic or General Electric (GE).

The study was restricted to the 24,800 non-Hispanic white women with Cumulus density measures who met the eligibility criteria of a prior study [16]. Briefly, for those Cumulus analyses, we identified Hologic or GE mammograms closest to and on or after the RPGEH survey date. Assessments included dense area (DA) in cm², non-dense area (NDA) in cm², and percent density (PD), defined as the dense area divided by the total breast area and expressed as a percentage. Cumulus assessments were done in batches by a single trained reader using the left craniocaudal (CC) view for ~90% of women; the right CC view was randomly selected for ~10% of women to blind the reader to breast cancer history, because the prior study included the right CC view for women with prior breast cancer in the left breast. We excluded women who had bilateral breast cancer, bilateral breast implants, breasts too large to be completely imaged in a single exposure, unreadable images or unavailable images [16].

LIBRA density measures were obtained from the same FFDM exams included in the prior study of Cumulus; however, because the assessments are fully automated, we analyzed up to four breast views instead of a single right or left CC view. For this current study, we further excluded 110 women for whom a LIBRA measure could not be obtained because of missing data in required DICOM fields, 2225 women with a history of unilateral breast cancer, and 216 women with unilateral implants (bilateral implants and cancer were previously excluded [16]). We also excluded women for the following reasons: their LIBRA values did not pass quality-control filters (n = 1110) (see Additional file 1: LIBRA quality control steps), they did not have KPNC membership data during the follow-up period (n = 16), or they did not have at least one mediolateral oblique (MLO) and one CC view (n = 13) (Fig. 1).

Statistical methods

We generated scatter plots and Pearson correlation coefficients to compare density measures from LIBRA vs.
Cumulus. Cox regression was used to estimate hazard ratios (HRs) for breast cancer associated with DA, NDA and PD, with time since baseline mammogram as the time scale. Women entered the cohort at the time of their baseline mammogram and were followed until diagnosis of breast cancer (event), or censored at death, end of KPNC membership, or end of the study period (12/31/2021), whichever came first. We modeled DA, NDA and PD as continuous variables in units of the standard deviation (SD) in the full cohort. Prior studies have applied different transformations to density measures, including no transformation, log and square-root transformation [9, 11], so we assessed each of these to increase comparability across studies. Cox regression does not require continuous covariates to be normally distributed, so we did not consider normality when assessing transformations. Since reporting HRs in SD units assumes that risk increases linearly, we assessed the linearity of the associations for each transformed density measure (Additional file 2: Figure S1). In our final analyses, we used the log transformation for LIBRA and Cumulus measures of DA and PD, and untransformed LIBRA and Cumulus measures of NDA. To maximize adjustment for age and BMI (kg/m²), these variables were modeled using splines. Parity was categorized as nulliparous, parous, or missing. History of breast cancer in a first-degree family member was categorized as yes or no. The use of menopausal hormones was categorized as none, estrogen alone, or estrogen plus progestin within the 5 years prior to the index mammogram. Analyses of Cumulus density measurements were also adjusted for image batch [19]. Separate multivariable Cox regression models were fit for Hologic and GE mammograms, and the estimates were also combined using random-effects meta-analysis. For LIBRA density measurements, separate models were fit for CC and for MLO views, using the average of the right and left views when both were available; we used the average to reduce noise and improve the robustness of our estimates.

Results

The eligible study population included a total of 21,150 women and 988 incident breast cancer diagnoses within 10 years of follow-up (Table 1). There were 17,970 women with a baseline mammogram on a Hologic machine and 3180 women with a baseline mammogram on a GE machine. The distributions of baseline characteristics differed somewhat between the two groups, but in both most women were between 50 and 70 years of age at the baseline mammogram, had no family history of cancer, were parous, and had not used postmenopausal hormones in the prior 5 years. Scatter plots of measures from LIBRA (average of left and right views) versus Cumulus (one view, 90% left) on CC views acquired on Hologic FFDM machines show very high correlation for NDA (r = 0.97), moderate correlation for DA (r = 0.69), and moderate-to-good correlation for PD (r = 0.80) (see Fig. 2). Results were very similar for measures on processed GE FFDM images. While our main results used the average of the left and right views for LIBRA to reduce noise, we also examined correlations of LIBRA and Cumulus measures on the left CC view and found that they were very similar to those using the average of the two CC views for LIBRA. For example, the correlation of LIBRA and Cumulus measures of PD on Hologic FFDM machines was r = 0.79 for the left CC view only, and r = 0.80 for the average of left and right CC on LIBRA versus left CC on Cumulus.
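To make the modeling specification concrete, here is an illustrative sketch of the Cox model described in the statistical methods, using the Python lifelines package. The column names are hypothetical, covariates are assumed to be numerically coded, and age and BMI enter linearly here for brevity, whereas the analysis above modeled them with splines:

```python
# Illustrative Cox model: HR per SD of log-transformed percent density,
# with time since baseline mammogram as the time scale.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def fit_density_model(df: pd.DataFrame) -> CoxPHFitter:
    df = df.copy()
    # Log-transform PD (as done for DA and PD above), then scale so the
    # coefficient is per SD of the full-cohort distribution.
    df["log_pd_sd"] = np.log(df["percent_density"])
    df["log_pd_sd"] /= df["log_pd_sd"].std()
    cols = ["log_pd_sd", "age", "bmi", "parous", "family_history", "hrt_use"]
    cph = CoxPHFitter()
    cph.fit(df[cols + ["time", "event"]], duration_col="time", event_col="event")
    return cph

# HR per SD with 95% CI:
# model = fit_density_model(df)
# print(np.exp(model.params_["log_pd_sd"]))
# print(np.exp(model.confidence_intervals_.loc["log_pd_sd"]))
```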
The fully adjusted HRs for the associations between each of the density measures and breast cancer risk are presented in Table 2. Results for LIBRA differed slightly by view, with slightly stronger associations for MLO than for CC views. Results for both LIBRA and Cumulus also differed by machine type, with slightly stronger associations for measures obtained from Hologic than from GE FFDM images. LIBRA results for the MLO view were very similar to Cumulus results, whereas LIBRA results for the CC view were generally weaker than for Cumulus. Results adjusted only for age and BMI (and batch for Cumulus) (Additional file 3: Table S1) were quite similar to those in Table 2 from the fully adjusted models.

HRs for the associations of both LIBRA and Cumulus measures with breast cancer risk were generally similar for the periods ≤ 2 years, 2-5 years and 5-10 years after the baseline mammogram, although the inverse association of NDA with risk appeared to be stronger with longer follow-up (Table 3). The HRs also did not appear to differ markedly when restricting follow-up to ≤ 2 years, ≤ 5 years or ≤ 10 years (Additional file 4: Table S2), similar to the results for non-overlapping time intervals (Table 3). For example, the HRs per SD for LIBRA PD (MLO view) were 1.37 (95% CI, 1.08-1.75), 1.43 (1.27-1.62) and 1.45 (1.28-1.65) for ≤ 2 years, ≤ 5 years and ≤ 10 years, respectively.

Discussion

In our study of over 21,000 non-Hispanic white women and nearly 1000 breast cancer cases, we found that LIBRA measures of dense area, non-dense area and percent density on processed images acquired from both Hologic and GE FFDM machines were associated with breast cancer risk. Moreover, the magnitudes of the associations using the fully automated LIBRA measures were generally quite similar to those of the operator-dependent Cumulus measures. Results for LIBRA measures on the MLO view were the most similar to the Cumulus results. We also found that both LIBRA and Cumulus density measures were significantly associated with increased breast cancer risk over a 10-year follow-up period, with similar magnitudes in both the near term (≤ 2 years) and the long term (5-10 years). These findings provide further evidence that associations of mammographic density with near-term breast cancer risk are not largely explained by masking, and instead reflect breast tissue characteristics that predispose to future malignant transformation.

Our findings are largely consistent with a few smaller case-control studies that have also reported results on the association between LIBRA density assessments and breast cancer risk, with comparison to Cumulus measurements [7, 9, 11]. The first, by Busana et al. [11], was a UK study of 414 breast cancer cases and 684 controls, with all FFDMs acquired on GE machines. They used the CC view only and compared results using processed and raw images. The study found slightly stronger associations for Cumulus than for LIBRA measures, and associations were also slightly stronger for both measures on processed vs. raw images. The adjusted (age, body mass index, menopausal status, parity, age at menarche, ever-use of oral contraceptives and hormonal therapy) OR per SD of DA on processed GE images was 1.39 (95% CI, 1.17-1.64) for LIBRA and 1.53 (95% CI, 1.30-1.79) for Cumulus, fairly similar to our findings for both measures on processed GE images. The second, by Nguyen et al.
[7], was a Korean study with 398 breast cancer cases and 737 controls, with FFDMs acquired on either Hologic or GE machines. They used the CC view of processed images only and compared results for Hologic and GE machines. In contrast to Busana et al. [11] and to our findings, they found that associations were slightly stronger for LIBRA than for Cumulus. For GE, the adjusted (age, BMI, menopausal status) ORs per SD of DA were 1.50 (1.28-1.76) for LIBRA and 1.36 (1.16-1.59) for Cumulus. For Hologic, the adjusted ORs were 1.72 (1.38-2.15) for LIBRA and 1.58 (1.27-1.97) for Cumulus. The third, by Gastounioti et al. [9], was a US study of 437 breast cancer cases and 1225 controls, with all FFDMs acquired on Hologic machines. Density measures for each woman were an average of all four breast views. Like Busana et al. [11], they found that associations were slightly stronger for processed than for raw images, and, as in Busana et al. [11] and our study, results were slightly stronger for Cumulus than for LIBRA measures. On processed Hologic images, the adjusted (age, BMI) ORs per SD of DA were 1.2 (95% CI, 1.1-1.4) for LIBRA and 1.3 (95% CI, 1.2-1.5) for Cumulus. The present study is the first to provide results for the CC versus MLO views.

Fig. 2 Correlation of dense area, non-dense area, and percent density measurements using LIBRA versus Cumulus on full-field digital mammography images acquired from Hologic (A) or GE (B) machines. LIBRA values were averaged for the right and left craniocaudal views.

Table 2 Hazard ratios per SD of breast density assessments and breast cancer risk, by view and machine type

Our finding that LIBRA measures on the MLO view provide slightly stronger associations with breast cancer risk than the CC view suggests that the MLO views may be preferred, especially if resources limit the number of views available for study. The stronger associations for the MLO view may be related to the initial training of LIBRA, which was done only on MLO views [21]. In addition, our study and the study by Nguyen et al. [7] suggest that associations with LIBRA measures may be slightly stronger on processed images from Hologic than from GE FFDM machines, although the numbers of GE images in the two studies were relatively small and these findings need to be confirmed by others.
Our study has several strengths and limitations. It is the first large cohort study of automated area-based measures of mammographic density and breast cancer risk, and information was available on important risk factors. Given the cohort design and duration of follow-up, we were able to examine associations between density and breast cancer risk in both the near-term (< 2 years) and long-term (5-10 years) periods after the baseline screening mammogram. The associations of LIBRA measures with breast cancer risk were examined by breast view (MLO and CC) and by machine type (Hologic and GE). Cumulus measures were performed by a single radiological technologist and were shown to be strongly associated with breast cancer risk [8]. However, we only had Cumulus measures on the CC view for comparison, although the CC view is the most commonly selected view for Cumulus studies. Given resource constraints, it was infeasible to visually review all images for this study. Instead, we applied a set of quality-control criteria that could be implemented automatically to filter out images likely to be incorrectly segmented. Manual review of some of these flagged images indicated that a small number of correctly segmented images were likely excluded, while some incorrectly segmented images were missed. This quality-control process resulted in a slightly higher exclusion percentage (5%) than in our Cumulus study (3%), for which all images were visually assessed. Another limitation is that the cohort includes only non-Hispanic white women, because it is ancillary to a genome-wide association study [19]. In addition, we did not have raw FFDM images, which are not routinely archived, and could not compare results for raw vs. processed images. However, processed FFDM images are more widely available in the clinical setting for use in large-scale studies, and results from the studies by Busana et al. [11] and Gastounioti et al. [9] suggest that associations of LIBRA measures with breast cancer risk are stronger on processed than on raw images. LIBRA measures on standard two-dimensional FFDM images have been found to be highly correlated with LIBRA measures on synthetic mammograms from digital breast tomosynthesis (DBT) [22], which is increasingly used in breast cancer screening. However, DBT images were not available for the present study, and future studies will be needed to determine whether density measures on synthetic mammograms are associated with breast cancer risk.

Conclusions

Our findings, together with the results of other studies, provide substantial support for the use of the fully automated and publicly available density measurement tool, LIBRA, for large-scale studies of mammographic density using processed FFDM images from either Hologic or GE machines. Moreover, LIBRA and Cumulus density measurements were significantly associated with both near- and long-term breast cancer risk, and the magnitude of the associations did not attenuate over the 10-year follow-up period. LIBRA allowed us to generate density measures in a fraction of the time and cost needed for a trained reader to measure density using Cumulus. By providing reliable and robust density measures, LIBRA will enable future large longitudinal studies to address multiple questions related to the determinants of mammographic density and its relationship to breast cancer risk, and how these relationships may vary over time.
Data sources for cancer diagnoses and covariates

Breast cancer diagnoses were identified from the KPNC cancer registry, which reports to the California Cancer Registry and to the National Cancer Institute's Surveillance, Epidemiology and End Results (SEER) program of cancer registries. The KPNC registry records information on all new primary cancers (except non-melanoma skin cancer) diagnosed among KPNC members. Data elements and quality assurance measures are similar to SEER. Age at mammogram was determined based on date of birth and date of the mammogram, both from the EHR. We used the body mass index (BMI) from the EHR measured at the patient visit closest to mammogram date. The RPGEH survey provided self-reported information on parity and family history of breast cancer. The KPNC pharmacy database, which records all dispensed outpatient and inpatient prescriptions, was used to determine use of menopausal hormones within the 5 years prior to FFDM.

Fig. 1 Study cohort eligibility

Table 1 Baseline characteristics of incident breast cancer cases and the full cohort, by machine type

Table 2 footnote: LIBRA and Cumulus DA and PD were log-transformed, and LIBRA and Cumulus NDA were untransformed. (a) Hazard ratios adjusted for age at FFDM (spline), mammogram year (categorical), BMI (spline), parity, first-degree family history, and HRT use within 5 years prior to mammogram date; Cumulus analyses were also adjusted for image batch; HRs are per standard deviation of density based on the distribution in the full cohort. (b) Average of measures on right and left breasts. (c) Meta-analysis was used to combine Hologic and GE results.

Table 3 Hazard ratios per SD of breast density assessments and breast cancer risk, by time since mammogram. Numbers of breast cancers for ≤ 2 years, > 2 to 5 years and > 5 to 10 years were 322, 290 and 315, respectively. LIBRA and Cumulus DA and PD were log-transformed, and LIBRA and Cumulus NDA were untransformed. (a) Hazard ratios adjusted as in Table 2; meta-analysis was used to combine Hologic and GE results. (b) Average of measures on right and left breasts.
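The footnotes above note that machine-specific results were combined by meta-analysis. One common approach, shown here purely as an illustration (the numbers below are made up, not the study's), is fixed-effect inverse-variance pooling of the log hazard ratios, recovering standard errors from the 95% confidence intervals.

```python
import numpy as np

def pool_fixed_effect(hrs, ci_lows, ci_highs):
    """Inverse-variance fixed-effect pooling of hazard ratios given 95% CIs."""
    log_hr = np.log(hrs)
    # Recover standard errors from the CI width on the log scale.
    se = (np.log(ci_highs) - np.log(ci_lows)) / (2 * 1.96)
    w = 1.0 / se**2
    pooled = np.sum(w * log_hr) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    return (np.exp(pooled),
            np.exp(pooled - 1.96 * pooled_se),
            np.exp(pooled + 1.96 * pooled_se))

# Illustrative machine-specific HRs per SD of dense area (hypothetical values).
hr, lo, hi = pool_fixed_effect(np.array([1.5, 1.3]),
                               np.array([1.2, 1.1]),
                               np.array([1.9, 1.6]))
print(f"pooled HR = {hr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```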
The European Academy of Neurology COVID‐19 registry (ENERGY): an international instrument for surveillance of neurological complications in patients with COVID‐19

Abstract The COVID‐19 pandemic is a global public health issue. Neurological complications have been reported in up to one‐third of affected cases, but their distribution varies significantly in terms of prevalence, incidence and phenotypical characteristics. Variability can be mostly explained by the differing sources of cases (hospital vs. community‐based), the accuracy of the diagnostic approach and the interpretation of the patients' complaints. Moreover, after recovering, patients can still experience neurological symptoms. To obtain a more precise picture of the neurological manifestations and outcome of the COVID‐19 infection, an international registry (ENERGY) has been created by the European Academy of Neurology in collaboration with European national neurological societies and the Neurocritical Care Society and Research Network. ENERGY can be implemented as a stand‐alone instrument for patients with suspected or confirmed COVID‐19 and neurological findings or as an addendum to an existing registry not targeting neurological symptoms. Data are also collected to study the impact of neurological symptoms and neurological complications on outcomes. The variables included in the registry have been selected in the interests of most countries, to favour pooling with data from other sources and to facilitate data collection even in resource‐poor countries. Included are adults with suspected or confirmed COVID‐19 infection, ascertained through neurological consultation, and providing informed consent. Key demographic and clinical findings are collected at registration. Patients are followed up to 12 months in search of incident neurological manifestations. As of 19 August, 254 centres from 69 countries and four continents have made requests to join the study.

Neurological symptoms can occur as a complication secondary to systemic illnesses or from the exacerbation of pre-existing neurological conditions. Neurological manifestations include symptoms reflecting the overall severity of the disease (as documented by encephalitis and encephalopathy) or more specific syndromic entities (e.g., stroke or Guillain-Barré syndrome) that might be the result of the peculiar mechanisms of action of the virus [6]. Neurological manifestations have been reported in about one-third of adult and elderly cases, as indicated in the first clinical series from Wuhan, China [7]. Children have also been shown to be affected, although to a lesser extent [8]. Data are mostly driven by case reports and small series that are illustrated in a comprehensive review [9]. Headache, myalgia, anosmia and fatigue/sleepiness are the most frequently reported symptoms [10]. Amongst severe manifestations, altered mental status [11], stroke [12,13] and peripheral nerve involvement [14] may be the most frequent. Mild complaints and subclinical findings are not uncommon and might indicate that neurological findings are more common than expected and are underdiagnosed unless accurate screening methods are adopted; an active search might therefore yield even higher numbers. Moreover, it is becoming evident that several symptoms persist after recovering from COVID-19 infection [15]. This new evidence calls for further and careful surveillance and monitoring.
In published reports, the distribution of neurological symptoms, signs and diseases varied significantly in terms of prevalence, incidence and phenotypical characteristics. This variability can be mostly explained by the differing sources of cases (hospital vs. community-based), the accuracy of the diagnostic approach, and the subjective interpretation of the patients' complaints by the attending physicians. Thus, a standardized approach is needed to provide a clearer outline of the spectrum of neurological disorders, comparing the main clinical aspects of COVID-19 disease in different countries and verifying whether differences, if any, can be attributed to differences in environmental and genetic factors. The approach also allows for the evaluation of the severity of illness across resource settings to examine the role of critical illness or prolonged hospitalization on symptoms and draw conclusions regarding causality between viral infection and neurological manifestations. A registry represents the ideal instrument for this purpose.

A REGISTRY AS AN INSTRUMENT FOR STANDARDIZED DATA COLLECTION

Registries are the instruments used to detect and define the spectrum of a given disease in population-based samples or in specific settings. The demographic and clinical characteristics of the individuals to be included in a registry are pre-defined. The source(s) of cases is (are) identified. Each patient is assigned a unique identifier. The diagnosis, and any other factor deemed important for the description of a registered case, is defined using commonly accepted and unanimously applied criteria. The data are collected in compliance with these pre-assigned criteria. To preserve the representativeness of the sample, all patients eligible for inclusion and releasing an informed consent are registered. If a follow-up is required, patients are assessed at specific time points for the identification of any incident complication. Attrition can be minimized through an active and accurate search of the individuals qualifying for inclusion and, where a follow-up is needed, to be invited at follow-up visits.

PLANNING AND DEVELOPMENT OF A EUROPEAN REGISTRY

The European Academy of Neurology (EAN) has been active since the start of the COVID-19 outbreak with a number of activities to promote knowledge, research and international collaborations [10]. Starting in April 2020, a Task Force was assembled, including clinicians and epidemiologists from various countries (the authors of the present report). One of the projects of this Task Force was to develop a European registry.

Objectives of a European COVID-19 registry

The overall aim of a European COVID-19 registry is to provide epidemiological data on the spectrum of neurological symptoms and signs in patients with COVID-19 infection reported by neurologists or other key referents in outpatient services, emergency rooms or hospital departments. The registry can be implemented as a stand-alone instrument for patients with suspected or confirmed COVID-19 and neurological findings or as an addendum to an existing registry not targeting neurological signs and symptoms. More specific primary objectives are (1) to evaluate the prevalence of neurological manifestations in patients with suspected or confirmed COVID-19 disease; and (2) to assess the general characteristics of these neurological manifestations.
Secondary objectives are (1) to gain epidemiological data on neurological manifestations of the COVID-19 infection in different countries in Europe and, where available, in non-European countries; and (2) to study the impact of neurological symptoms and neurological complications on outcomes. An accurate search was made of existing national registries and databases to identify the variables most commonly collected, with a threefold purpose: (1) to select the data considered of primary interest by most countries; (2) to favour data pooling and meta-analyses on common variables from differing sources; and (3) to focus on variables that could be easily collected even in resource-poor countries. The variables identified were critically appraised and only those on which there was full agreement were retained. For each variable, a definition was provided resulting from widely accepted criteria or, where not available, fully agreed by the group. The collection of the data was kept to a minimum to prevent attrition and loss of data due to the constraints posed by the outbreak.

Patients to be registered

A patient was eligible for inclusion provided that all the following criteria were satisfied: (1) age 18 years or older; (2) symptoms suggesting confirmed COVID-19 infection; (3) case ascertainment through neurological consultation; and (4) patient's informed consent (according to the requirements of local regulatory agencies).

Study conduct

All neurologist members of the EAN or its affiliated national societies are invited to register eligible patients and record key demographic and clinical findings at registration (Table 1). All registered patients with neurological symptoms are asked to be followed for 12 months, with telephone calls at 6 and 12 months to verify the vital status and functional abilities and to identify neurological symptoms, signs or diagnoses that might have occurred after the acute phase of the disease. The neurologist (or a designated partner of the local study team) is required to be in charge of the follow-up.

Statistical analysis plan

Statistical analyses will be performed in conformity with two separate plans. The first plan refers to countries adhering to the EAN registry. The plan includes descriptive statistics to be performed on all variables collected in the registry. Inferential statistics are also included using conventional univariate and multivariate methods. Cross-tabulations were pre-planned to correlate each symptom, sign and neurological diagnosis to demographics and the other clinical variables, including comorbidities and the main complications of infection. These data will be presented in the entire sample and for each country separately. The neurological diagnoses made at the time of the infection will be contrasted to the status at last observation (recovered, alive with functional impairment, dead). Multivariate analyses will also be performed using logistic regression models with status at last observation (alive/dead) as the dependent variable and neurological diagnoses as the independent variables, adjusting for demographics, comorbidities, setting and country. Follow-up data will be analysed in survivors with Kaplan-Meier curves using the occurrence of a neurological diagnosis as the outcome variable and demographics and comorbidities as prognostic predictors. Comparisons will be tested with the log-rank test and independent prognostic predictors will be assessed using Cox hazard models, adjusting for setting and country. Retrospective and prospective data will be analysed separately and compared.
The modality for data collection (retrospective vs. prospective) will also be included in multivariable analysis models. The significance will be set at the 5% level (p = 0.05). A separate plan (still to be discussed with partners in charge of independent data collections and willing to share their data) will include (1) analyses of shared individual patient data and (2) a meta-analysis of aggregated data.

Sample size calculation

The primary end-points of the EAN registry are to determine the prevalence of neurological manifestations in COVID-19 patients. The hypotheses tested by this registry are exploratory; hence a sample size calculation was not performed.

Implementation of the registry

When the protocol (Annex 1) and the case report form (Annex 2) were in final form, an extensive correspondence was started with the national societies affiliated to the EAN and with individual members. The goal was to advertise the registry and encourage countries and individuals to use this instrument. As of 17 August 2020, a total of 254 centres from four continents declared their willingness to participate. A heatmap of the participating sites is illustrated in Figure 1. The profile of each centre will be provided through the completion of an ad hoc form (Annex 3) that will also include the setting where the patient was registered. Whilst the EAN registry was distributed to the participating sites, a discussion was started with countries using their own registries in Europe with the intent to organize data sharing (individual patient data) or pooling (meta-analysis). In parallel, an intensive collaboration was also started with the Neurocritical Care Society and Research Network.

Expected short-term and long-term findings

The activation of the EAN registry and its interaction with several other surveillance systems in and outside Europe will provide a number of short-term and long-term findings. In the short term, a more complete picture will be offered of the spectrum of the disease and its neurological complications, comparing the various settings where the patients were registered. The demographic and clinical profile of registered patients will be compared across European countries to detect similarities and differences. Demographic and clinical characteristics of patients from countries with differing proportions of affected individuals and COVID-related deaths will also be compared. Using the settings for denominators, prevalence and incidence of neurological signs and diseases will also be calculated, separating the neurological manifestations of the infection from well-defined syndromic entities. Registered data will also be used to plan focused studies, and the established registry can serve as a critical infrastructure to facilitate global research in future unanticipated events.

CONFLICTS OF INTEREST

None declared.

DATA AVAILABILITY STATEMENT

Data sharing is not applicable to this article as no new data were created or analysed in this study.

PROMOTER

The Registry is promoted and endorsed by the European Academy of Neurology (EAN).

PARTICIPANTS TO THE REGISTRY

National Neurological Societies or divisions of Neurology from individual academic centres can apply to participate in the ENERGY Consortium.

Methodology

Neurologists are asked to implement this study protocol in their institution/clinic and to assess and record demographic and other data.

Exclusion criteria

• Symptoms suggesting other (pulmonary/systemic) infection than COVID-19 AND other confirmed infection.
Procedure

Patients' inclusion can be performed prospectively, at the time of the visit or at the patient's discharge, whichever is most convenient, or retrospectively, provided that all inclusion criteria are satisfied. Visits can be performed anywhere in the context of health care facilities (outpatient services, emergency rooms, hospital departments). If, at the time of the visit, the clinical picture of the patient is incomplete, the neurologist is invited to contact the caring physician upon discharge to complete the e-CRF. The collection of the data will be kept to a minimum to prevent attrition and loss of data due to the constraints posed by the outbreak. No additional investigations are needed besides a detailed neurological examination and the common variables recorded in this pandemic. The registration of the patients will continue until the end of the outbreak. All registered patients with neurological symptoms will be followed up to 12 months, with telephone calls at 6 and 12 months, to verify clinical conditions and functional abilities and to identify neurological manifestations that might have occurred after the acute phase of the disease. The neurologist (or a designated partner of the local study team) will oversee the follow-up. A guide is annexed to this protocol to define each variable and facilitate data collection in the e-CRF.

Statistical analysis plan

Descriptive statistics will be performed on all variables collected in the registry. Inferential statistics will include univariate and multivariate analyses. Cross-tabulations will be performed for each symptom, sign and neurological diagnosis against demographics and the other clinical variables, including comorbidities and the main complications of infection. These data will be presented in the entire sample and for each country separately. The neurological diagnoses made at the time of the infection will be contrasted to the status at last observation (recovered, alive with functional impairment, dead). The prevalence of neurological symptoms, signs and diagnoses will be calculated using the number of neurological consultations as the denominator and symptoms/signs and, separately, neurological diagnoses as a group. Multivariate analyses will also be performed using logistic regression models with status at last observation (alive with or without functional impairment/dead) as the dependent variable and neurological diagnoses as the independent variables, adjusting for demographics, comorbidities, centre and country. Follow-up data will be analysed in survivors with Kaplan-Meier curves using the occurrence of a neurological diagnosis as the outcome variable and demographics and comorbidities as prognostic predictors. Comparisons will be tested with the log-rank test and independent prognostic predictors will be assessed using Cox hazard models, adjusting for centre and country. The significance will be set at the 5% level.

ETHICAL STANDARDS

The Principal Investigators (PIs) will ensure that the study is conducted in full conformity with the Declaration of Helsinki and Good Clinical Practices.

ETHICS COMMITTEE

The protocol will be submitted by the PI to the local ethics committees (ECs). Any amendment to the protocol will require review and approval by the EC before the changes are implemented to the survey. Only individual data collected after the patient's informed consent will be used. Every eligible patient will be assigned an anonymized code.
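As a rough illustration of the follow-up analysis described above (Kaplan-Meier curves for incident neurological diagnoses, compared with the log-rank test), the sketch below uses the lifelines package; the data file, column names and grouping variable are hypothetical, and the fully adjusted Cox step is omitted for brevity.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up data for survivors: months to incident neurological
# diagnosis (censored at 12 months) and a binary comorbidity indicator.
df = pd.read_csv("followup.csv")  # columns: months, neuro_dx, comorbidity

kmf = KaplanMeierFitter()
for label, grp in df.groupby("comorbidity"):
    kmf.fit(grp["months"], event_observed=grp["neuro_dx"], label=f"comorbidity={label}")
    print(label, kmf.median_survival_time_)

# Log-rank comparison of the two comorbidity strata.
a = df[df["comorbidity"] == 1]
b = df[df["comorbidity"] == 0]
res = logrank_test(a["months"], b["months"],
                   event_observed_A=a["neuro_dx"], event_observed_B=b["neuro_dx"])
print(f"log-rank p = {res.p_value:.3f}")
```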
DATA CONFIDENTIALITY

Participants' and centres' confidentiality is strictly held in trust by the participating investigators. All medical or administrative staff with access to the data are subject to a duty of confidentiality and data protection. The study protocol, documentation, data and all other information generated will therefore be held in strict confidence. The study sponsor (European Academy of Neurology) and representatives of local authorities may inspect all documents and records required to be maintained by the local investigator for the participants in this registry. Research data of the registry, which are for purposes of statistical analysis and scientific reporting, will be transmitted to the Data Managers and the Statisticians of the registry. For this purpose, data will be de-identified and anonymized at input into the eCRF by the local centres/PIs. Individual participants and their research data will be identified by a unique identification number. The eCRF system used by clinical sites and by research staff will be secured and password protected. In the situation when a centre is temporarily unable to access or complete the eCRF, a paper-based CRF will be available on demand. To keep administration simple and data quality high, this option should be used only rarely. These records will be entered in the eCRF at the EAN central office in collaboration with the research staff of the Medical University of Innsbruck and the Mario Negri Institute of Milan.

DATA SHARING & OWNERSHIP

Where ENERGY is an addendum to other registries or databases, formal collaborations can be activated with European and international organisations to share common variables, with the intent to provide a broad European and even worldwide picture and favour comparisons. For countries with independent registries/databases that wish to share their data but are unwilling to use this registry, data will be compared in aggregate using pre-specified statistical plans. The data collected by individual centres will be accessible to these centres without restriction. All participants should be registered as active members of the EAN Neuro-COVID Registry Consortium. The data collected can also be used to test scientific hypotheses forwarded by any active member. However, these hypotheses should be illustrated in ad hoc protocols to be submitted for approval to the Registry Core Scientific Committee. The scientific reports should be published on behalf of the EAN and the affiliated neurological societies. Participating sites will be informed of any data sharing agreement with organisations in countries not associated with the European Union.

PUBLICATION AND AUTHORSHIP

Data will be made available to the scientific community by means of abstracts or scientific papers submitted to peer-reviewed journals. Authorship of the main manuscript will follow the ICMJE recommendations, which base authorship on the following four criteria:

• Substantial contributions to the conception or design of the work or the acquisition, analysis, or interpretation of data for the work, AND
• Drafting the work or revising it critically for important intellectual content, AND
• Final approval of the version to be published, AND
• Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
A writing committee composed of the Core Scientific Committee will draft the work and will be authors of the manuscript. All publications will be made in the name of the ENERGY Consortium. All those who satisfy the criteria for authorship will be listed as authors. Each centre will be represented by at least one named author and listed "on behalf of the ENERGY consortium" in the main publications in PubMed. Additional authors will be listed based on the contribution of each site to the registry. Each author's contribution within the Consortium will be specified.
Effect of Home-Based Training with a Daily Calendar on Preventing Frailty in Community-Dwelling Older People during the COVID-19 Pandemic

It has been reported that marked decreases in physical activity, including social activities, deterioration in eating habits and mental health, and an increase in frailty have occurred during the COVID-19 pandemic. This study aimed to devise a method to prevent the onset and progression of frailty during the COVID-19 pandemic and to verify its effect. The subjects were 111 community-dwelling older people who answered questionnaires before and after the intervention. A calendar incorporating 31 different tasks, one for each day, was created as an intervention tool with the aim of improving motor, oral, and cognitive functions. The intervention group (n = 49) participants performed these tasks every day for 3 months. The primary outcome was the Kihon checklist (KCL) score. When the amount of change in the KCL score before and after 3 months was compared between the two groups, no difference in the total score was observed between the two groups; however, the intervention group showed significantly improved cognitive function in the KCL sub-domain. In the intervention group, the number of pre-frailty and frailty patients decreased significantly after the intervention compared to before the intervention. These results suggest that the use of the calendar created in this study during the COVID-19 pandemic could prevent decreased cognitive function in the KCL sub-domain and could help prevent the onset and progression of pre-frailty and frailty.

Introduction

COVID-19 has spread worldwide. Even today, Japan faces daily fluctuations in the number of people who are infected. In July 2022, the World Health Organization (WHO) reported that Japan had the highest number of COVID-19 infections per week in the world, at approximately 970,000 [1]. Frailty is defined as a state of enhanced vulnerability to external stress due to various organ dysfunctions associated with aging. Frailty includes not only physical problems such as muscle weakness and malnutrition in old age but also mental and psychological problems such as cognitive impairment and depression, as well as social problems such as living alone, being confined to one's home, and economic hardship. It is a pathological condition that leads to various poor outcomes, such as future falls, impairment of daily living functions, hospitalization, and decreased life expectancy [2]. It has also been reported that, with appropriate interventions, frail persons can be returned to a robust state [3][4][5]. Beginning with the restrictions on going out under the declaration of a state of emergency issued in April 2020, self-restraint of social activities has continued. In such circumstances, many reports of mental and physical changes in community-dwelling older people have been published. Prominent decreases in physical activity and increased frailty have been reported [6][7][8][9]. Yamada et al. reported that socially inactive older adults living alone are more likely to experience incident frailty/disability due to reduced physical activity during the COVID-19 pandemic [8]. In addition, cognitive function in a Korean elderly cohort declined much more during the pandemic than before the pandemic, particularly in terms of memory and recall function [10].
At present, since the end of the COVID-19 pandemic is unpredictable, it is extremely important for all people to maintain their health while balancing infection prevention and activities to ensure the future health of society. For that purpose, various optimal health promotion methods need to be developed and promoted. Therefore, this study aimed to devise a method to prevent the onset and progression of frailty, which can be induced by the special social environment of the COVID-19 pandemic, and to verify its efficacy in community-dwelling older people in Japan.

Participants

The subjects were local older people who participate in physical function measurement meetings in Kaizuka City every year [11]. Participants answered the questionnaires in July 2021 and November 2021. The preliminary questionnaires were sent to 250 people in July, and the 126 people who returned the questionnaires were divided into a control group (n = 64) and an intervention group (n = 62). The intervention group completed a calendar task for 3 months. In November, the post-intervention questionnaire was sent out. The analysis included 62 people in the control group and 49 people in the intervention group who responded to the survey. The data of these 111 people were used in the final analysis (Figure 1). Referring to previous studies [12], the sample size was calculated post hoc using G*Power version 3.1.9.7 software with a two-tailed significance level of 0.05 and a power of 0.8; the estimated number of people was 90, and the 111 people included in this study were thus considered appropriate. Participants were allocated using a stratified randomization method, with stratification by sex, age and frailty level. This study was approved by the Ethics Committee of Osaka Kawasaki Rehabilitation University (Approval No. OKRU-RB0002) and was conducted in accordance with the Declaration of Helsinki. This study was also registered with the UMIN Clinical Trial Registry System for intervention research (ID: UMIN000045102).
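The post hoc sample-size calculation above (two-tailed α = 0.05, power = 0.8, performed in G*Power) can be approximated with statsmodels. The effect size below is an assumption chosen for illustration, since the paper does not report the value used; a Cohen's d of about 0.6 happens to reproduce the stated total of roughly 90 participants.

```python
from statsmodels.stats.power import TTestIndPower

# Assumed effect size (Cohen's d = 0.6): not reported in the paper, chosen
# here because it yields approximately the stated estimate of 90 in total.
n_per_group = TTestIndPower().solve_power(
    effect_size=0.6, alpha=0.05, power=0.8, ratio=1.0, alternative="two-sided"
)
print(f"required n per group ≈ {n_per_group:.1f} (total ≈ {2 * n_per_group:.0f})")
```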
Intervention

A specialized daily calendar was sent to the intervention group, and they performed the tasks in it for 3 months (Figure 2). The calendar incorporated 31 different tasks for improving motor, oral, and cognitive functions. When developing this calendar, the aims were (1) to incorporate content that promotes motor functions (squats, towel exercises, etc.), oral functions (cheek exercises, tongue exercises, etc.), and cognitive functions (word rearranging, finding word mistakes, etc.); and (2) to keep the content simple so that many older people could do it alone at home. Finally, one month consisted of 21 days of motor function, 6 days of oral function, and 4 days of cognitive function tasks (Table 1). Proverbs and their explanations were also included in the daily calendar. The intervention group used this one-month calendar three times in the 3-month period, whereas the control group continued their normal life without using the calendar for 3 months.

Table 1 Example task (physical): step backwards ([5 s × 5 times/one leg] × 2)

Outcome

Outcomes were scored on the Kihon checklist (KCL) [13]. The KCL is a self-administered questionnaire in which participants answer "yes" or "no" to 25 questions about living conditions and physical and mental functions. It consists of a set of questions in 7 domains: a 5-item assessment of activities related to daily life ("life function"), a 5-item assessment of locomotor function ("physical function"), a 2-item assessment of malnutrition ("nutrition"), a 3-item assessment of oral function ("oral function"), a 2-item assessment of outdoor activities ("outdoor activities"), a 3-item assessment of cognitive function ("cognitive function"), and a 5-item assessment of depressive mood ("depression"). For each question, 1 point is added when there is considered to be a problem with that item, so the higher the score, the more likely there is a problem with life function. Recently, the KCL has been used internationally as a frailty assessment; when the total KCL score is 0-3 points, the patient is classified as robust, 4-7 points as pre-frailty, and 8-25 as frailty. Receiver operating characteristic curve analyses showed that the areas under the curves for the evaluation of frailty status were 0.81 (sensitivity, 70.3%; specificity, 78.3%) for pre-frailty and 0.92 (sensitivity, 89.5%; specificity, 80.7%) for frailty at total KCL scores of 3/4 and 7/8, respectively [14].

Statistical Analysis

Statistical analysis was performed using Student's t-test and Pearson's chi-squared test for differences between the control and intervention groups. Two-way repeated-measures analysis of variance (ANOVA) was carried out to compare changes in KCL total score and sub-domain score values to determine the effect of the intervention. Effect sizes are indicated by partial eta-squared (η²). In addition, the McNemar-Bowker test was performed to assess the change in the numbers of robust and pre-frail-frail participants over the 3 months. SPSS Statistics software (version 26; IBM Corp., Armonk, NY, USA) was used for the statistical analysis.

Characteristics of the Study Participants

The characteristics of the participants in the control group and the intervention group are shown in Table 2. A total KCL score of three points or less was considered robust, and a total score of four points or more was considered pre-frailty and frailty (pre-frail-frail). There were no significant differences in the sex ratio, age, solitary living rate, or number of pre-frail-frail patients between the two groups (Table 2). In the intervention group, 23 people (47.0%) used the calendar every day, 9 people (18.4%) used it 5-6 days a week, 8 people (16.3%) used it 3-4 days a week, and 9 people (18.4%) used it 1-2 days a week.
All participants used the calendar during the 3 months, with daily use having the highest rate. Table 3 shows the comparison of KCL total score and sub-domain score changes in the two groups. There was no difference in the amount of change in the total KCL score before and after 3 months between the two groups, but "cognitive function" showed a significant improvement in the intervention group (F = 4.347, p = 0.039, partial η² = 0.038) (Table 3).

Comparison of the Number of Participants Who Were Robust and Pre-Frail-Frail Pre-Intervention and Post-Intervention

The numbers of robust and pre-frail-frail participants before and after 3 months in each group were compared. There was a significant difference in their ratio in the intervention group (p = 0.035) (Table 4). In other words, the number of pre-frail and frail participants was lower after the intervention than before the intervention.

Discussion

In the present study, the intervention group showed significantly improved cognitive function in the KCL sub-domain. Furthermore, the number of pre-frailty and frailty patients decreased significantly after the intervention compared to before the intervention. The cognitive function domain consists of question items related to "memory", "executive function", and "date orientation", and it is reportedly useful as a mild cognitive impairment screening tool [15]. In addition, examining the relationship between this domain and the onset of dementia, it has been reported that the higher the score in this domain, the higher the risk of developing dementia [16]. The present study showed that cognitive function deteriorated in the control group and improved in the intervention group. This result is considered to be due to reading and thinking about the proverbs written in the calendar every day, in addition to the effect of training to improve cognitive function during the tasks. There have been several reports of interventions for frailty prevention during the COVID-19 pandemic. A health class run by residents who exercise while watching a video recording of simple gymnastics served as a place for interaction with people for a year during the COVID-19 pandemic, and improvements in oral function, outdoor activities, cognitive function, and depressed mood in KCL domains have been reported [17]. Reports related to physical frailty showed that home exercise programs improved motor function and lower extremity muscle strength [18,19], and TV-based assistive integration technology improved physical and mental well-being [20]. Peretz et al. reported that maintaining social networks and reading contributes to maintaining physical activity [21]. In addition, the National Center for Geriatrics and Gerontology in Japan recommends appropriate activity plans to prevent physical and mental decline at home for older persons who cannot go out or have limited social activities during the COVID-19 pandemic [22]. In the present study, the number of pre-frail-frail participants after the intervention decreased significantly compared to before the intervention in the intervention group, which strongly supports the results of the before and after changes in the KCL score described above. Based on the results of the KCL domains mentioned above, this intervention's effect may have been due to psychological factors, including cognitive function, rather than physical factors.
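The pre/post comparison of robust versus pre-frail-frail status above is a paired classification problem, for which McNemar-type tests are appropriate. A minimal sketch using statsmodels follows; the counts in the 2 × 2 table are illustrative, not the study's Table 4.

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

def kcl_status(total_score: int) -> str:
    """Classify a total KCL score: robust 0-3, pre-frail 4-7, frail 8-25."""
    if total_score <= 3:
        return "robust"
    elif total_score <= 7:
        return "pre-frail"
    return "frail"

# Illustrative paired table (rows: pre-intervention, cols: post-intervention),
# collapsing pre-frail and frail into one category as done in the paper.
#                 post robust  post pre-frail-frail
table = np.array([[12,  2],
                  [10, 25]])
result = mcnemar(table, exact=True)  # exact binomial test on discordant pairs
print(f"McNemar p = {result.pvalue:.3f}")
```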
In community-dwelling older adults, regardless of age, sex, polypharmacy, undernutrition risk, and frailty status, information and communication technology (ICT) users were more proactive in maintaining their health during the COVID-19 pandemic [23]. However, in our previous survey targeting older people in the study region, it was found that the ownership rate of ICT devices such as tablets (7.6%) and personal computers (20.9%) was very low [24]; therefore, a daily calendar was created. The key point in creating the daily calendar was to include tasks that promote motor function [25], oral function [26], and cognitive function [27], which are closely related to frailty. The use of the calendar was effective in improving cognitive function, but it was not effective in improving oral function or motor function. In the future, it will be necessary to reconsider the content of the calendar assignments and the period of use. The response rate for the post-intervention questionnaire was 79.0% in the intervention group and 96.9% in the control group. The lower response rate in the intervention group may be due to non-response by those who did not use the calendar during the period. The percentage of pre-frail-frail participants in the present study was 69.4% at baseline. This is slightly higher than the 60.3% reported for women in a previous study [18], which may be partly because the baseline period of the present study was during the ongoing COVID-19 pandemic. It has been proposed that it is important to address frailty prevention from the three aspects of nutrition, exercise, and social activity during the COVID-19 pandemic [28]. Although the task calendar created in the present study has the advantage that it can be used alone at home, it does not promote social involvement. It would be most effective to combine the various intervention methods described above, including the present method, according to the risk of infection associated with social activities and the degree of movement restriction during the COVID-19 pandemic. This study has a major limitation. Because the KCL was self-administered, there were some missing values in the returned responses, so the participants were asked about the missing values by telephone. Therefore, some responses were not necessarily made in the same environment, which may decrease the reliability of the data. However, this is an unavoidable problem in this type of research. There is also a lack of data on pre-intervention and post-intervention nutritional status, the socio-family situation of individuals, and the degree of dependency. In addition, as mentioned above, it is possible that the analysis results reflect only those who were highly receptive to this intervention method. Therefore, in the future it will be necessary to conduct an evaluation that also considers receptivity. According to the trend in the number of people infected with coronavirus, a peak in the number of infected people, called the 5th wave, was seen from early August to mid-September 2021, and a state of emergency was declared from 2 August to the end of September. After that, there were no particularly strict restrictions on going out until the request for measures to prevent the spread of the virus in late February 2022 [29]. In the future, it will be necessary to consider intervention methods based on such differences in social situations.
Conclusions

During this research, social conditions strongly influenced people's lifestyles and behaviors, so it would be unwise to draw a sweeping conclusion. However, performing the daily tasks created here for frailty prevention during the COVID-19 pandemic is expected to prevent deterioration in cognitive function in the KCL sub-domain and to help prevent the onset and progression of pre-frailty and frailty.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Data Availability Statement: The database used and analyzed during the present study will be available from the corresponding author upon reasonable request.
Characterization of Heterogeneity and Spatial Distribution of Phases in Complex Solid Dispersions by Thermal Analysis by Structural Characterization and X-ray Micro Computed Tomography

This study investigated the effect of drug-excipient miscibility on the heterogeneity and spatial distribution of phase separation in pharmaceutical solid dispersions at a micron scale using two novel and complementary characterization techniques, thermal analysis by structural characterization (TASC) and X-ray micro-computed tomography (XμCT), in conjunction with conventional characterization methods. Complex dispersions containing felodipine, TPGS, PEG and PEO were prepared using hot melt extrusion-injection moulding. The phase separation behavior of the samples was characterized using TASC and XμCT in conjunction with conventional thermal, microscopic and spectroscopic techniques. An in vitro drug release study was performed to demonstrate the impact of phase separation on dissolution of the dispersions. The conventional characterization results indicated the phase separating nature of the carrier materials in the patches and the presence of crystalline drug in the patches with the highest drug loading (30% w/w). TASC and XμCT were used to provide insight into the spatial configuration of the separate phases. TASC enabled assessment of the increased heterogeneity of the dispersions with increasing drug loading. XμCT allowed the visualization of the accumulation of phase separated (crystalline) drug clusters at the interface of air pockets in the patches with the highest drug loading, which led to poor dissolution performance. Semi-quantitative assessment of the phase separated drug clusters in the patches was attempted using XμCT.

Conclusion

TASC and XμCT can provide unique information regarding the phase separation behavior of solid dispersions, which can be closely associated with important product quality indicators such as heterogeneity and microstructure.

INTRODUCTION

Solid dispersions have been used to improve the dissolution properties of poorly water-soluble drugs in an attempt to achieve better oral bioavailability and overall therapeutic outcomes (1)(2)(3). These dispersions have often been loosely classified into single-phase molecular dispersions and phase separated systems with varying degrees of structural complexity (4)(5)(6). Phase separation and the formation of microstructures in solid dispersions are the result of the diversity in the physicochemical properties of the drugs and excipients used in the formulations, which affects their miscibility.
Traditionally, phase separation has often been considered a sign of instability or incompatibility between the drug and excipients and has therefore been avoided in industrial formulation development (7,8). This is largely a result of a lack of understanding regarding the mechanisms of formation and the ability to control the progression of phase separation. More recently, however, intentionally forming phase separated solid dispersions to improve stability or modulate the drug release profile has been proposed (9)(10)(11). From the literature, the most commonly observed phase separation behavior in solid dispersions is the separation of the incorporated drug from the carrier polymer and excipient materials (if more than one carrier material was used) (9,10) as either amorphous or crystalline domains (12)(13)(14). Although conventional characterization techniques, such as differential scanning calorimetry (DSC and MTDSC), powder X-ray diffraction (PXRD) and spectroscopic methods including IR, Raman and terahertz spectroscopy, often allow confirmation of the presence of phase separation, understanding the phase separation behavior in solid dispersions can still be challenging. Overlapping diffraction patterns or spectra from different phases, or the thermal dissolution of one phase into another during heating in the DSC, often lead to difficulty in accurate data interpretation (15)(16)(17). Many excipients and active ingredients are organic materials, which makes scanning electron microscopy (SEM) measurements in combination with energy dispersive spectroscopy (EDS) of limited use for identifying detailed phase separation due to the lack of elemental variability between the components. In addition, the conventional characterization methods mentioned above have not been able to effectively provide information on two important aspects of phase separation in formulations: heterogeneity and the 3D spatial distribution of different phases. Addressing these two aspects of phase separated solid dispersions will advance our understanding of how to control the formation and kinetics of phase separation behavior in complex solid formulations and in turn enable the rapid development of phase-separated dispersions which may be used for the delivery of multiple active pharmaceutical ingredients in one formulation. The motivation behind this study is to investigate these two less understood features of phase separation in solid dispersions by applying two novel characterization methods, thermal analysis by structural characterization (TASC) and X-ray micro computed tomography (XμCT), alongside conventional analytical tools; we have evaluated the potential of these two techniques, which are not commonly used for studying pharmaceutical solid dispersions, for characterizing the heterogeneity and spatial distribution of phase separation. TASC is a thermal microscopic analysis method recently developed by Reading et al. with a particular focus on studying the glass transition kinetics and thermal dissolution behavior of materials (18). TASC is an optical analogue of micro/nano thermal analysis, which has been reported in the literature for studying the phase separation behavior of solid dispersions (19).
Micro/nano thermal analysis can pinpoint the different phases present in a dispersion by identifying the differences in their thermal transition temperatures using heated AFM tips. The recent development of local nano-thermal analysis into an imaging method, transition temperature microscopy (TTM), has demonstrated the capacity to map phase separation in some dispersion formulations (9,19). However, the disadvantage of micro/nano thermal analysis and TTM is that the measurements are often time consuming. Instead of using AFM as the measurement platform as in micro/nano TA, TASC uses conventional, user-friendly hot stage microscopy with a novel algorithm for quantifying changes in successive micrographs of the samples during heating or cooling. The detailed working principle of TASC has been explained previously (18,20). The subtle changes in the sample's appearance in the course of heating or cooling detected by TASC can then be converted into thermal transition graphs. Alhijjaj and co-workers reported the first use of TASC for pharmaceutical applications and identified the advantages of TASC, including rapid measurement and high sensitivity for detection of subtle thermal transitions and heterogeneity of the samples (20). XμCT is a 3D X-ray imaging technique that has been widely used in a diverse range of disciplines to study the microstructure of objects without causing damage to the original sample. In contrast to X-ray diffraction methods, where X-rays are not absorbed but are diffracted by an ordered array of matter, in an XμCT experiment it is the absorption of X-rays that produces the image, in a manner analogous to transmission microscopy. The differentiation of different phases by XμCT relies on the electron density differences that are characteristic of different elements. In the pharmaceutical industry, XμCT is used routinely to identify physical imperfections in solid dosage forms showing a high density contrast, such as voids and cracks in tablets and coatings (21,22). Therefore, the ability of the technique to distinguish materials with similar attenuation coefficients, such as amorphous and crystalline forms of the same drug, can be extremely limited (23) unless synchrotron radiation is used to improve the phase contrast (24,25). However, for the conventional XμCT used in this study, in principle, if sufficient electron density differences are present between different phases contained within a sample, XμCT should be effective for resolving the distribution of these phases in 3D (a minimal segmentation sketch is shown below). The distribution of solid excipients in compressed tablets has been studied using XμCT based on this principle (26). However, it has not been widely used to investigate phase separation in solid dispersions (27). In this study a series of complex solid dispersions were prepared containing a poorly soluble model drug, felodipine, two semi-crystalline polymers, polyethylene glycol (PEG) 4000 and polyethylene oxide (PEO) 900,000, and semi-crystalline D-α-tocopheryl polyethylene glycol 1000 succinate (vitamin E TPGS). The dispersions were prepared by hot melt extrusion-injection moulding (HME-IM) to provide buccal patches containing felodipine, which would avoid its extensive first-pass hepatic metabolism when administered orally, improving its bioavailability and allowing a reduced dose to be given via the buccal route (28).
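Since XμCT phase differentiation rests on grayscale attenuation (electron density) contrast, density-based segmentation of a reconstructed volume can be sketched as below. This is only an illustrative approach assuming numpy/scikit-image and a hypothetical input file, not the CTan workflow used in the study, and it presumes the phases have sufficiently distinct attenuation values.

```python
import numpy as np
from skimage.filters import threshold_multiotsu

# Hypothetical reconstructed XuCT volume: grayscale voxel intensities in which
# brighter voxels correspond to higher electron density (e.g., drug-rich domains).
volume = np.load("reconstructed_volume.npy")  # shape (z, y, x)

# Multi-Otsu picks intensity cut-offs separating air, polymer matrix and
# drug-rich phases, assuming their attenuation values are distinct enough.
thresholds = threshold_multiotsu(volume, classes=3)
labels = np.digitize(volume, bins=thresholds)  # 0 = air, 1 = matrix, 2 = drug-rich

# Volume fraction of the drug-rich phase: a crude semi-quantitative estimate.
fraction = (labels == 2).mean()
print(f"drug-rich voxel fraction: {fraction:.3%}")
```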
The rationale for the selection of excipients is that PEG allows the patches to be formed easily by HME, PEO provides mucoadhesive properties and TPGS acts as a drug permeation enhancer and solubilising agent (29)(30)(31)(32)(33). As a result of the limited miscibility between the excipients and felodipine, as well as the semi-crystalline nature of the polymers used, the HME-IM patches showed phase separation. The presence of chlorine (Cl) in felodipine molecules provides an electron density difference between pure drug clusters and the rest of the excipients. When a significant amount of felodipine is dissolved in the excipients, the contribution of the higher electron density of felodipine molecules allows identification of the drug-rich domains in the dispersions using XμCT. The technique cannot be used to distinguish between crystallised and amorphous drug, only regions of high drug concentration; however, with complementary information provided by techniques such as PXRD and ATR-FTIR, the crystalline/amorphous nature of phase separated drug domains can be confirmed. In addition, we report a preliminary attempt to use XμCT as a quantitative method to estimate the amount of drug phase separation in processed patches.

Hot Melt Extrusion and Injection Moulding (HME-IM)

The extruder used in the fabrication of felodipine patches was a twin-screw bench-top hot melt extruder with a set of co-rotating conical screws (HAAKE MiniLab II Micro Compounder, Thermo Electron, Karlsruhe, Germany). The extruder was connected to an injection moulding apparatus (HAAKE MiniJet System, Thermo Electron Corporation, Karlsruhe, Germany). Before processing, physical mixtures of the formulations were prepared at different drug loadings (see Table I). The physical mixtures were prepared by initially mixing crystalline felodipine into molten TPGS (65°C), followed by the addition of the other excipients. The semi-solid mixtures were further blended thoroughly using a mortar and pestle for at least 2 min at room temperature. This mixture was then fed into the extruder, the barrel temperature of which was pre-set to 65°C, operating at 100 rpm with 5 min of residence time. After extrusion, the extrudate was loaded into the pre-heated cylinder of the injection moulding apparatus (65°C) and injected into a patch-shaped mould (25 mm × 25 mm × 0.5 mm), warmed to the same temperature as the cylinder, using an injection pressure of 300 bar for 20 s. The patches were allowed to cool inside the mould for 1 h prior to collection.

Scanning Electron Microscopy (SEM) and Energy-Dispersive X-ray Spectroscopy (EDS)

Surfaces and cross-sections of the freshly prepared patches were scanned using a JSM 5900LV Field Emission Scanning Electron Microscope (Jeol Ltd, Japan) equipped with a tungsten hairpin electron gun and operating at an acceleration voltage of 5-20 kV. As the samples were relatively soft, they were dipped into liquid nitrogen and the frozen samples were crushed in order to obtain the natural morphology of the cross-sections of formulations with various drug loadings. Both kinds of sample were fixed on sample stubs using double adhesive tape. A Polaron SC7640 sputter gold coater (Quorum Technologies, Newhaven, UK) was used to coat the surfaces and cross-sections prior to imaging. EDS (INCA Energy, Oxford Instruments) connected to the SEM was used to map the distribution of drug clusters using the Cl in felodipine as the marker.
Samples were tested using both SEM and mapping-mode EDS (data can be found in Supplementary Information).

Attenuated Total Reflectance Fourier Transform Infrared (ATR-FTIR) Spectroscopy

An IFS 66/S FTIR spectrometer (Bruker Optics Ltd, Coventry, UK) fitted with a Golden Gate® ATR accessory with a temperature-controllable top plate (Specac, Orpington, UK) and a diamond internal reflection element was used to identify the physical form of felodipine and possible interactions between the drug and the other excipients included in the patches. All samples were scanned using the following parameters: 2 cm−1 resolution, 32 scans for sample and background, and a 4000-550 cm−1 spectral range in absorption mode. The spectra of 3 replicates per sample for all drug loadings were analyzed using OPUS software.

Powder X-Ray Diffraction (PXRD)

PXRD was used in this study to identify the polymorphic form of felodipine in the different formulations and the possible transformation from one physical form to another as the drug loading increased from 10 to 30% w/w. In addition, the analysis was used to investigate the effect of the drug on the crystallinity of PEG-PEO. All measurements were performed using a Thermo ARL Xtra X-ray diffractometer (Thermo Scientific, Switzerland) equipped with a copper X-ray tube (λ = 1.540562 Å). All PXRD patterns were obtained using an X-ray beam generated with an acceleration voltage of 45 kV and a current of 40 mA. The angular scan range was 5° < 2θ < 60° with a step width of 0.01° and a scan speed of 1 s/step.

Differential Scanning Calorimetry (DSC)

Thermal analysis of the felodipine-loaded patches, their physical mixes and the raw materials was performed using a Q-2000 MTDSC (TA Instruments, New Castle, USA) equipped with an RC90 cooling unit. Full calibration was performed prior to the sample measurements. For samples scanned using standard DSC, a heating rate of 10°C/min and a heating range of −80 to 180°C were used. Before scanning, 2-3 mg of sample was weighed accurately and crimped in standard DSC pans (TA Instruments, New Castle, USA). The obtained thermograms were analyzed using the Universal Analysis software. All measurements were performed in triplicate.

Thermal Analysis by Structural Characterisation (TASC)

The TASC system was composed of a temperature-controlled heating/cooling Linkam MDSG600 automated stage fixed to a Linkam imaging station that was attached to a microscope working in reflective mode (LED light source and ×10 magnification lens) and was equipped with a digital camera to capture images that correspond to thermal events as a function of temperature. For cooling ramps, the temperature of the stage is controlled using a cooling unit that operates by purging liquid nitrogen into the stage. For all samples analyzed, thin slices of the prepared patches (0.6-1.2 mm × 0.6 mm × 0.2 mm) were cut using a sharp blade and placed in standard DSC pans (TA Instruments, New Castle, USA). A pre-designed temperature program (10°C/min) for heating, cooling and reheating cycles, with an isothermal period of 1 min separating the ramps, was applied to the prepared samples. Before starting the experiments, the image-capturing mode was activated at an image acquisition rate of 1 frame/°C. The captured images were then collected and analyzed using the TASC software provided by Cyversa (Norwich, UK). The results obtained were statistically analyzed using one-way analysis of variance (ANOVA). Statistical significance was accepted at the p ≤ 0.05 level.
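The TASC algorithm itself is proprietary, but the underlying idea of converting subtle frame-to-frame changes in the micrographs into a transition signal can be illustrated with a minimal sketch. The code below is an illustrative analogue, not the published Cyversa algorithm: it assumes a stack of grayscale micrographs captured at 1 frame/°C and scores each frame by its pixel-wise deviation from the first (reference) frame within a chosen ROI.

```python
import numpy as np

def tasc_like_signal(frames, roi, t_start, t_step=1.0):
    """Illustrative TASC-style transition signal (not the Cyversa algorithm).

    frames : (n, H, W) array of grayscale micrographs, one frame per t_step degC
    roi    : (row_slice, col_slice) region of interest within each frame
    Returns the frame temperatures and a 0-1 normalised structural-change signal.
    """
    ref = frames[0][roi].astype(float)
    # Sum of squared pixel differences of each frame against the reference frame
    diffs = np.array([np.sum((f[roi].astype(float) - ref) ** 2) for f in frames])
    signal = (diffs - diffs.min()) / (diffs.max() - diffs.min() + 1e-12)
    temps = t_start + t_step * np.arange(len(frames))
    return temps, signal

# A melting event appears as a step in the signal; a flat plateau at the
# maximum indicates no further structural change (e.g., complete melting),
# which is how the plateau regions discussed below are interpreted.
```

In this picture, the choice of ROI size directly controls how local the measured transition is, which becomes relevant later when TASC is used to probe sample heterogeneity.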
X-Ray Micro-Computed Tomography (XμCT)

A SkyScan 1172 high-resolution X-ray micro computed tomography (XμCT) scanner (Bruker-microCT, Kontich, Antwerp, Belgium) was used to analyse the felodipine solid dispersions with different drug loadings (0-30% w/w). The samples were imaged using an aluminium filter (to harden the beam by removing low-energy X-rays) at an isotropic voxel resolution of 3 μm over a total acquisition time of 20 min; subsequent image reconstruction, using the NRecon program (version 1.6.8.0, Bruker-microCT), took approximately 20 min per sample. The reconstructed images were analysed using CTAn and CTVol software, in which the images of a small section (designated as a region of interest, ROI) of each sample were converted to binary images by thresholding each component according to differences in density and then represented as 3D models. Powder compacts made of physical mixtures of crystalline felodipine and the other excipients, with compositions matching those used in the HME-IM formulations, were prepared for the quantitative studies. The compacts (13 mm in diameter) were prepared by compressing 500 mg of the premixed physical blends into flat-faced disks using an IR press (Specac, Kent, UK) at 10 kN pressure held for 5 min.

In Vitro Drug Release Studies

Unidirectional dissolution studies to simulate the release profile for systemic buccal administration were conducted using the paddle-over-disc method (similar to USP apparatus 5) with a dissolution apparatus (Caleva 8ST, Germany). Under non-sink conditions, patch samples containing the equivalent of 10 mg of felodipine (maximum daily dose), attached to a glass disc using double-sided adhesive tape, were immersed in 900 ml of phosphate buffered saline, pH 6.8 (simulated salivary fluid), at 37 ± 0.5°C with 100 rpm paddle rotation. At predetermined time intervals, 5 ml samples were withdrawn and filtered using a 0.45 μm filter unit (Minisart NML single-use syringe, Sartorius, UK). The filtered samples were then diluted with an equal volume of absolute ethanol and analysed using a UV-VIS spectrophotometer (Perkin-Elmer Lambda 35, USA) at 363 nm. Withdrawn samples were replaced with fresh dissolution medium at the same temperature after each sampling. The details of the dissolution methodology development and validation are described in Supplementary Information. All drug release studies were conducted in triplicate.

RESULTS AND DISCUSSION

Conventional Microscopic, Spectroscopic and Thermal Characterization of Phase Separation in HME-IM Patches

Images captured using SEM (Fig. 1) revealed that the surfaces of the solid dispersion patches, except for those with 20% w/w drug loading, show small cracks and air voids, with roughness increasing with drug loading. The cross-sectional images of the patches show increased roughness in the interior in comparison to the surfaces and a clearly porous character for all samples. Large air pockets, between 100 and 300 μm in diameter, and particles (often with defined edges) with diameters of 10-20 μm can be observed only in the patches with 30% drug loading. EDS analysis using chlorine (Cl) as the marker for felodipine (Supplementary Information Figure S1) confirmed that these particles contain a higher concentration of felodipine than other areas.
With the presence of crystalline felodipine confirmed by PXRD (see Supplementary Information Figure S2), these high felodipine concentration areas are likely to be crystalline felodipine particles. The PXRD results of the 10 and 20% loaded patches show no clear evidence of the presence of crystalline drug. The -NH stretching region of the ATR-FTIR spectra of the patches also indicates the presence of crystalline felodipine (signature -NH peak at 3367 cm−1) in the patches with 30% drug loading and the amorphous nature (signature -NH peak at 3333 cm−1) of the drug in the 10 and 20% drug loaded patches, as shown in Fig. 2.

In order to gain some degree of prediction of phase separation in the dispersions, the miscibilities between the excipients, and of the drug with the excipients, were studied using DSC. As shown in Fig. 3a, PEG, PEO and TPGS have melting points at 59.0 ± 0.2, 70.1 ± 0.2, and 37.4 ± 0.4°C, respectively. However, after injection moulding, the PEG and PEO melting peaks merged into a single peak (Fig. 3b), indicating good miscibility of PEG-PEO and confirming the formation of a single continuous PEG-PEO phase. The DSC thermograms of the physical mixes of the three excipients (with the same ratio as used in the placebo patches) retained all of the original melting events of each excipient. This indicates that either the heating rate used is faster than the kinetic process of melting-induced mixing between the molten excipients, or that TPGS has limited miscibility with PEG and PEO. The DSC results of the placebo patches show two melting transitions, at 38 and 65°C, corresponding to the melting of TPGS and of the PEG-PEO blend, respectively (Fig. 3b). The separate melting of TPGS suggests limited miscibility between TPGS and PEG-PEO. The phase separation of TPGS and PEG-PEO is likely to be the result of the presence of the hydrophobic alpha tocopherol moiety in the TPGS structure.

As a result of the complexity of the composition of TPGS, it is difficult to use theoretical approaches, such as calculating Flory-Huggins interaction parameters with the group contribution method, to predict the miscibility of TPGS with the drug and the other excipients used in the patches. An attempt to use the melting point depression method to estimate the miscibility of TPGS and PEG-PEO with the drug also proved unreliable, as the dissolution of the drug in the molten excipients meant that no melting transition for the drug was observed. Nevertheless, the thermal behavior of physical mixtures of crystalline felodipine with each individual excipient could provide some insight into the miscibility between the drug and the carrier materials. At low drug:polymer ratios, no melting of crystalline felodipine was detected using DSC because of the melt-dissolution of the drug in the molten carrier material. As seen in Fig. 3c, increasing the drug:polymer ratio to 90:10 allows the detection of felodipine melting with reduced melting onset and peak temperatures, a broader melting peak and reduced melting enthalpy in comparison to pure felodipine (pure crystalline felodipine ΔHf = 76.32 ± 1.44 J/g; crystalline felodipine:(PEG/PEO 4:3) 9:1 ΔHf = 61.46 ± 2.46 J/g; crystalline felodipine:TPGS 9:1 ΔHf = 57.46 ± 0.56 J/g), which indicates a certain degree of miscibility between the drug and each carrier material.
The reductions in the onset temperature and enthalpy are more significant for the felodipine:TPGS mixture than for the felodipine:PEG-PEO mixture, which implies a higher miscibility of the drug with TPGS than with PEG-PEO. This leads to the hypothesis that more drug may be solubilized in the TPGS phase than in the PEG-PEO phase in the processed dispersions.

After felodipine was incorporated in the HME-IM patches, no crystalline felodipine melting was detected by DSC in any of the patches (data not shown). As the PXRD and ATR-FTIR results described earlier indicated that crystalline felodipine was present at least in the 30% drug loaded patches, this result suggests that thermal dissolution of crystalline felodipine in the molten excipients occurred during the DSC runs. The melting transitions of TPGS and PEG-PEO in the drug loaded patches shifted to lower temperatures than those observed for the placebo patches (Fig. 3b). These melting point depressions of the excipients are likely caused by felodipine dissolved in the TPGS and PEG-PEO phases during the HME-IM process, which may lead to a higher level of crystal defects compared to the placebo formulation (34). The melting transition temperatures show drug-loading dependence, as seen in Fig. 3b. It was noted that the lowest melting points of TPGS and PEG-PEO were obtained in the patches with 20% drug loading. This may indicate that the 20% patches contain the most dissolved/solubilized drug in the matrices, approaching saturation or even potentially supersaturation of the drug in the polymer matrices. Further increasing the drug loading to 30% leads to the presence of undissolved/recrystallised crystalline drug, accompanied by a shift in the melting peaks of TPGS and PEG-PEO to higher temperatures than were observed in the 10 and 20% loaded patches. However, the melting temperatures of TPGS and PEG-PEO are still lower than those of the placebo, suggesting the presence of solubilized drug in the matrices.

Based on the conventional characterization results described above, one can conclude that: 1) phase separation of TPGS and PEG-PEO is present in all patches; 2) drug loading can affect the phase separation behavior; and 3) at a drug loading of 30%, the patches contain phase separated crystalline drug. However, as all systems exhibit phase separation, it is important to gain more information on the uniformity and distribution of these separate phases. Here TASC and XμCT are proposed as methods complementary to the more established techniques for studying the microstructure of the samples.

TASC Investigation of the Structural Heterogeneity of the Patches

Sequences of images were collected during the heating or cooling of the patches using TASC. Initially a region of interest (ROI) was selected (Fig. 4). TASC follows the subtle changes in structure of the selected ROI and converts this information into phase transition signals plotted against temperature. The detailed algorithm of TASC is described elsewhere (20). Figure 5 shows the heating and cooling cycles of the patches measured using TASC. Both the melting of the TPGS and the PEG-PEO phases can be clearly distinguished in the TASC thermogram of the placebo patches. The transition temperatures are in good agreement with the DSC data. At 10% w/w drug loading, the melting of TPGS is less obvious and there is a slight lowering of the melting peak of the PEG-PEO phase compared to the placebo sample.
For both the placebo and 10% loaded patches, a clear and sharp transition to the plateau of the maximum normalized TASC signal was observed after the melting transition of PEG-PEO. The plateau region is a clear indication of no further changes in the TASC signal of the samples, which can be translated into complete melting in this case.

Fig. 2 Partial ATR-FTIR spectra of the felodipine NH stretching region of the HME-IM patches with different drug loadings in comparison to crystalline and amorphous felodipine.

Further increasing the drug loading to 20% w/w caused the TPGS melting to almost disappear in the TASC curve and was associated with a further reduction in the melting temperature of the PEG-PEO phase. It is noted that after the PEG-PEO melting, the signal approached the plateau region much more gradually in comparison to the TASC results of the placebo and 10% loaded patches.

The TASC results of the patches with 30% w/w felodipine content show a complex triple transition. The melting peak of TPGS can be clearly seen at approximately 33°C, which is in agreement with the DSC data. Two further melting transitions were detected at 60 and 76°C, followed by the absence of the plateau region seen in the placebo and 10% loaded samples. DSC data of the 30% loaded patches only showed the melting of the PEG-PEO phase at 60°C (Fig. 3c). However, it is known from the other characterization methods that crystalline drug particles were present in the 30% patches. Therefore the 76°C transition detected by TASC is likely to be associated with the thermal dissolution of the remaining crystalline drug into the molten matrix. The absence of a plateau region indicates that the continuous changes captured by TASC were not completed at 90°C. Poorer reproducibility of the data in the high temperature region was also noted in comparison to the results of the samples with lower drug loadings.

The low reproducibility and failure to reach a plateau with the individual replicates of the 30% w/w loaded formulation were further investigated by altering the size of the ROI and increasing the terminal temperature of the analysis to above the melting point of crystalline felodipine. As seen in Fig. 6, the reproducibility of the data collected by analyzing small areas (ROIs of approximately 2.5 × 10⁻³ to 10 × 10⁻³ mm²) is lower than that obtained from larger areas (40 × 10⁻³ to 90 × 10⁻³ mm²). Results obtained using larger tested areas often overlook the differences present locally on the micro scale (heterogeneity). This is demonstrated by the highly reproducible DSC data, in which the samples were tested as bulk material with no localized information being obtainable (Fig. 6c). The poor reproducibility of the TASC results obtained from small ROIs indicates high variability in the thermal transitions detected locally. The size of the drug crystals detected by SEM is approximately 10-20 μm in diameter, which is smaller than the smallest ROI used in these analyses. The thermal behavior detected for each ROI is the average of all materials within the area, which should therefore be a mixture of drug crystals, excipients and amorphous dispersions of drug dissolved in the excipients. The variation in the thermal properties is likely to represent differing amounts of drug crystals, excipients and amorphous drug dispersions being present in each ROI.
This was not observed in the placebo and 10% drug loaded samples (Supplementary Information Figure S3). This is a clear indication of the high heterogeneity of the distribution of the separate phases in the patches with 30% drug loading at the micron scale. An attempt to validate this finding by XμCT is described in the next section.

To confirm that the slower approach of the TASC signal to the plateau for the 30% loaded patches is related to the dissolution of phase separated crystalline drug, the heating was extended to above the melting point of crystalline felodipine. As seen in Fig. 6, a plateau was gradually approached between 100 and 140°C, suggesting slow and temperature dependent thermal dissolution of crystalline felodipine into the molten matrices of TPGS and PEG-PEO.

To further investigate the temperature and time dependency of the phase separation behavior of these patches, cooling and reheating cycles of the heated samples were analyzed. As plotted in Fig. 7, double transitions were detected in the cooling cycles of the 0-20% loaded patches, and the transition temperatures decreased with increasing drug loading. This agrees well with the corresponding DSC data of the cooling cycle from 160 to 0°C (Fig. 8a), indicating the crystallization of the PEG-PEO and TPGS phases. The reduction of the crystallization temperature can be explained by the incorporation of drug in both phases (although not necessarily in equal proportions), which disrupted the crystallization of the excipients. However, only a single transition can be seen in the TASC and DSC results of the cooling cycle of the patches with 30% drug loading. This may indicate that TPGS did not crystallise due to the presence of the dissolved drug in these patches.

The TASC results of the reheating cycles show improved reproducibility and new thermal features compared to the heating cycle. As seen in Fig. 7, a new transition at 56-59°C was detected in the patches with 0-20% drug loading. This thermal transition was not detected by DSC at 10°C/min in this study (Fig. 8b). It has been reported in the literature that using a slower scanning rate allows the observation of the melting of the folded form of PEG 4000 in the presence of drug molecules (35). The fact that this transition was absent in the measurements of the heating cycle suggests that it is highly time dependent. Although the heating cycle was performed on samples freshly prepared by HME-IM, the samples were cooled for at least 1 h prior to the measurements being taken. Within this period, the unfolding of the PEG chains was already complete and therefore this transition was not detected in the heating cycle. It was also noted that for the samples with 0-20% drug loading, a clear plateau was reached after melting transitions of the PEG-PEO phases that were sharper than those observed in the heating cycle (Fig. 5). This may be attributed to the high homogeneity and complete lack of crystalline drug phase separation in these reheated samples. In contrast to the clear identification of TPGS and PEG-PEO melting in the heating cycle, the TASC reheating cycle for the 30% drug loaded samples showed a gradual transition at 50°C and then a sharp transition of the signal towards the plateau region.
This may indicate that the phase separation of TPGS and PEG-PEO is not completed within the timeframe of the cooling-reheating cycle, suggesting that for the 30% drug loaded patches the kinetics of the phase separation process are slower than for the patches with other drug loadings. All transition temperatures observed in the TASC results are in good agreement with the transitions detected by DSC.

XμCT Analysis of the Internal Microstructure and Spatial Distribution of Crystalline Drug

XμCT analysis of the placebo patches revealed that they are pore-free with little interior microstructure at the resolution used in XμCT (Supplementary Information Figure S4). At 10 and 20% drug loading, some internal air pockets are evident, as seen in Fig. 9. These occasional air pockets have no defined structure. With increasing drug loading, the volume fraction of the patches occupied by air voids also increased. The few high-density particles shown as bright spots in the matrix were identified as silicon dioxide (SiO2) (with a density of 2.65 g/cm3), an inorganic material present in the PEO powder at a concentration of 0.8-3% w/w as a powder flowability enhancer (36). No other phase separation can be observed in the patches with 10 and 20% drug loading. Although DSC and TASC confirmed the presence of separate TPGS and PEG-PEO phases, both are organic materials with similar elemental compositions, which provide no electron density contrast that can be used by XμCT to resolve the different phases.

Fig. 6 Comparison of the TASC results of the heating cycle of 30% w/w felodipine patches using (a) small sampling spots (ROIs of approximately 2.5 × 10⁻³ to 10 × 10⁻³ mm²); (b) larger sampling spots (ROIs of approximately 40 × 10⁻³ to 90 × 10⁻³ mm²); (c) standard DSC averaged thermograms (n = 3) for the 30% w/w drug loaded samples.

Felodipine has chlorine atoms in its structure, which have a higher electron density compared to the elements in the excipients. When felodipine is dissolved in the excipients as a molecular dispersion, the overall electron density of the local area is elevated by the presence of felodipine. The fact that no isolated drug clusters can be identified using XμCT for these two patches indicates that felodipine is relatively evenly distributed across the patches. It should also be mentioned that the spatial resolution of the XμCT used in this study is within the micrometre range; therefore, any drug clusters smaller than a few microns would not be detectable by XμCT.

As seen in Fig. 10, the XμCT images of the patches with 30% drug loading show the presence of clear drug clusters and air voids with well-defined spherical shapes. As the PXRD and ATR-FTIR spectroscopy results indicated the presence of crystalline drug, it can be stated with some confidence that these drug clusters represent crystalline drug particles, and they are described as such in the following discussion. The crystalline drug particles are 10-20 μm in diameter, which is similar to the crystals observed using SEM. As seen in Fig. 10a, the crystals (light spots) are more frequently distributed at the interfaces between the air voids and the matrix. This is an interesting feature that was not detected by any other characterisation method used in this study.
The DSC results indicate that felodipine has a higher miscibility with TPGS than with PEG-PEO, and hence drug crystallisation after reaching supersaturation is more likely to occur in PEG-PEO-rich domains than in TPGS-rich domains. Therefore it is reasonable to speculate that these crystalline felodipine-rich areas around the air pockets are also PEG-PEO rich regions.

XμCT Analysis as a Potential Semi-Quantitative Method to Study Crystalline Drug Content and Heterogeneity

In order to further explore the possibility of using XμCT as a quantitative method for characterising phase separation in solid dispersions, compressed compacts of physical mixes of crystalline felodipine with known drug content (the same drug contents as used in the patches) were prepared and analysed. It should be highlighted that although crystalline drug was used to prepare the physical mixtures, only differences in the chemical makeup of the drug and excipients can be observed by XμCT, not their physical form. As seen in Fig. 11, crystalline felodipine particles are evenly distributed across the matrices. The volume fraction of the space occupied by the crystalline drug particles can be measured, and the values for the compacts with 10-60% crystalline drug loading were plotted against the known drug content (Fig. 11d). It was noted that the linearity of the correlation was not ideal (regression R² of 0.92); therefore these results should be regarded as semi-quantitative. It was also noted that the compacts were much softer after compression than normal solid tablets and that the surfaces of the compacts were slightly tacky. This softening indicates a lowered melting point of the mixture, which could be caused by solubilisation of crystalline drug in the low-melting excipients, such as TPGS, during the high-pressure compression process. This may explain why the 60% drug loaded physical mixture shows more deviation from the linear correlation than the results obtained from the 10-40% drug loaded physical mixtures. Using systems in which the drug does not dissolve or change physical form during compaction with the excipients may improve the accuracy and the linear correlation between drug loading and XμCT-measured volume.

Nevertheless, an attempt was made to use the correlation as a calibration curve to estimate the amount of crystalline drug in the HME-IM patches with 30% drug loading. The volume fraction of the crystalline drug particles observed in Fig. 10 is 0.078.

Fig. 11 Representative 3D XμCT images of the distribution of crystalline felodipine in the compacts made of the physical mixes of crystalline felodipine-TPGS-PEG-PEO with (a) 10%; (b) 30%; and (c) 60% crystalline felodipine loadings. (d) The correlation between crystalline drug content in these compacts and the measured volume fraction of felodipine in their 3D XμCT images, which was used as a calibration curve for the quantitative estimation of crystalline felodipine in the HME-IM patches with 30% drug loading.

Using the linear correlation shown in Fig. 11d, the weight fraction of crystalline drug can be calculated as 10.3% (w/w). This indicates that 19.7% of the felodipine was molecularly dispersed in the matrices of the HME-IM patches with 30% drug loading. As no crystalline drug was detected in the HME-IM patches with 20% drug loading, 20% appears to be close to the saturation solubility of felodipine in the matrices. Therefore, for the patches with 30% loading, approximately 10% drug should be phase separated as crystalline drug.
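To make the thresholding and calibration workflow explicit, the sketch below shows how a crystalline-drug volume fraction could be extracted from a reconstructed grayscale volume and converted into a weight fraction. It is a minimal illustration of the principle, not the CTAn/CTVol procedure; the grey-value threshold and the calibration points are hypothetical placeholders, not the study's measured values.

```python
import numpy as np

def drug_volume_fraction(volume, threshold):
    """Fraction of voxels whose grey value exceeds a density threshold
    in a reconstructed XuCT volume (3D ndarray of grey values)."""
    mask = volume > threshold            # Cl-rich (drug-rich) voxels
    return mask.sum() / mask.size

# Calibration: known crystalline drug weight fractions in the physical-mix
# compacts versus their XuCT-measured volume fractions (values illustrative).
known_wt = np.array([0.10, 0.20, 0.30, 0.40, 0.60])
measured_vf = np.array([0.07, 0.15, 0.22, 0.30, 0.41])
slope, intercept = np.polyfit(measured_vf, known_wt, 1)  # linear calibration

# Applying such a calibration to the measured volume fraction of the 30%
# patches (0.078) returns a crystalline content of roughly 10% w/w,
# consistent with the estimate reported above.
estimated_wt = slope * 0.078 + intercept
print(f"crystalline drug content: {estimated_wt:.3f} w/w")
```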
The XμCT quantitative estimation agrees well with this.

The heterogeneity of the patches with 30% drug loading was studied using XμCT in order to allow comparison with the heterogeneity measurements made by TASC. The same methodology as used with TASC for measuring heterogeneity was adopted, and regions of interest (ROIs) of various sizes were taken from 2D XμCT images. Using the quantitative calibration described above, the amount of crystalline felodipine in each ROI was calculated. As shown in Fig. 12a, a single XμCT slice (grey scale image) was used, and 6 small ROIs (100 × 100 μm, equivalent to 10 × 10⁻³ mm²) and 6 large ROIs (300 × 300 μm, equivalent to 90 × 10⁻³ mm²) were randomly selected and analyzed. These areas are similar in size to the ones used in the TASC measurements. The same thresholding procedure was adopted for the estimation of the volume fraction of phase separated crystalline felodipine in all of these ROIs. It can be seen in Fig. 12b that the amounts of crystalline felodipine measured in the larger ROIs have a lower standard deviation than those measured in the smaller ROIs, indicating high heterogeneity at the scale of 100 × 100 μm. This finding agrees well with the results obtained by TASC and confirms that integrating over large areas reduces the sensitivity to heterogeneity, which explains why heterogeneity is not detected by DSC analysis.

In Vitro Drug Release from the HME-IM Patches

In vitro unidirectional drug release data of the patches with different drug loadings tested under non-sink conditions are shown in Fig. 13. For the 10 and 20% w/w patches, 10-15 fold increases in maximum drug release were achieved within 2-2.25 h in comparison to the crystalline drug alone. This may be attributed to the fact that the majority of the drug in these two formulations is in the amorphous state, which led to faster dissolution. However, on increasing the drug loading to 30%, the increase in drug release was reduced to only 2-fold in comparison to the crystalline drug. The presence of phase separated crystalline drug located at the interfaces of the air pockets (likely to be the PEG-PEO domains) in the patches is likely to be responsible for this result. The dissolution results indicate that phase-separated carrier systems that contain no crystalline drug can significantly improve drug release. Even with only one side of the intact patches in contact with the dissolution media, this dissolution enhancement is comparable with other binary solid dispersion systems reported in the literature, where milled extrusion powders with much higher total surface area for dissolution were used (9).

CONCLUSION

This study introduces the use of two novel characterisation methods for studying phase separation behaviour in pharmaceutical solid dispersions, TASC and XμCT. The characterisation techniques were challenged by a set of complex multicomponent solid dispersions containing TPGS, PEG, PEO and the model drug felodipine. The results confirmed that both techniques not only provide complementary information to conventional characterisation tools such as DSC, PXRD, ATR-FTIR and SEM-EDS, revealing the correlation between drug-excipient miscibility and phase separation, but are also able to provide a new and important understanding of the heterogeneity and distribution of separate phases in the systems.
TASC allowed rapid identification of heterogeneity in the dispersions containing phase separation, but it does not have the capability of analysing the spatial distribution of the phases. As a non-destructive technique, XμCT analysis provided the 3D microstructure of the interior of the patches and the spatial distribution of the separated phases. The drug release results reflected the negative impact that phase separation of drug clusters had on the dissolution of the poorly soluble model drug. This detailed understanding of the dispersions will provide confidence in the product quality of dispersion formulations. However, it should be highlighted that XμCT cannot be used on its own as an identification method for distinguishing crystalline and amorphous drug domains. The first attempt at using XμCT as a quantitative method to estimate phase separated drug clusters (identified as crystalline drug with confirmation by PXRD and ATR-FTIR) in processed formulations indicated its potential for such purposes. However, the results reported here can only be regarded as semi-quantitative, and further studies are needed to validate XμCT as a quantitative method.
Progressive Ventricles Enlargement and Cerebrospinal Fluid Volume Increases as a Marker of Neurodegeneration in Patients with Spinal Cord Injury: A Longitudinal Magnetic Resonance Imaging Study

Abstract

Next to gray and white matter atrophy, cerebrospinal fluid (CSF) volume and ventricular dilation may be surrogate biomarkers for brain atrophy in spinal cord injury (SCI). We therefore aimed to track brain atrophy by means of CSF volume changes and ventricular enlargements over two years after SCI. Fifteen patients with SCI and 18 healthy controls underwent a series of T1-weighted scans at five time points over two years. Changes of the CSF/intracranial volume (CSF/ICV) ratio, CSF volume, and the rate of ventricular enlargement over time were determined. Sample sizes with 80% power and 5% significance were calculated to detect a range of treatment effects for a two-armed trial. There was a significantly increased cross-sectional CSF/ICV ratio in patients compared with controls at each time point (p < 0.02). The rate of CSF/ICV change, however, was not significantly different between groups over time. CSF volume increased linearly over the bilateral sensorimotor cortices (left: p = 0.002, right: p = 0.042) and in the supracerebellar space (p < 0.001) within two years. An acceleration of the enlargement of the third (p = 0.017) and the fourth (p = 0.006) ventricles was observed in patients over time. Sample size estimation for six-month trials with CSF volume requires 25 patients per treatment arm to detect a hypothetical treatment effect of a 30% slowing of the atrophy rate. This study shows that SCI-induced changes in the CSF/ICV ratio and ventricular expansion rate provide additional information on the neurodegenerative processes after injury. The sensitivity for scoring treatment effects speaks to their potential to serve as sensitive biomarkers in addition to local atrophy measures.

Introduction

Traumatic spinal cord injury (SCI) leads in most patients to profound neurological dysfunction and paralysis below the level of the lesion, 1 as information flow between supraspinal and spinal neuronal networks is impaired. 2 Functional recovery after human SCI is restricted but can be fostered by intensive neurorehabilitation. The neuronal mechanisms underlying neurological and functional recovery, however, are still not well understood because of the complex relationship between neurodegeneration and plasticity. The ability to track trauma-induced structural changes across the neuraxis provides the opportunity to quantify neurodegeneration in vivo 3 and recovery-related plasticity, 4 which may identify new treatment targets. After SCI, serial magnetic resonance imaging (MRI) studies revealed progressive neurodegeneration, both in the gray matter (GM) and white matter (WM), along the entire trajectory of the motor 5 and sensory systems 6 above the level of injury. Recently, a follow-up study in the same patient cohort revealed that these changes continued for at least two years post-SCI. 7 Crucially, the magnitude of neurodegeneration at the level of the spinal cord, brainstem, and cortex over the first six months predicted clinical outcome at two years, independent of early clinical changes. 7 Nevertheless, potential neuroimaging biomarkers for global brain atrophy, such as intracranial volume (ICV), cerebrospinal fluid (CSF) volume changes, and ventricular enlargement, were not investigated.
ICV, CSF volume, and ventricular enlargement measurements are reliable morphometric features to determine atrophy patterns in patients with mild cognitive impairment, Alzheimer disease, 8 Parkinson disease, 9 Huntington disease, 10 and traumatic brain injury (TBI). 11 For instance, early ventricular dilatation was observed in the course of significant cognitive decline in patients with Parkinson disease. 9 In addition, it has been shown that the CSF/ICV ratio, besides its potential to quantitate general brain atrophy, 12 does not depend on sex and therefore may be used in mixed-sex studies as well. 13 The rationale for the use of these markers of global brain atrophy in SCI arises because the trauma to the spinal cord triggers a cascade of inflammatory processes that spreads across the central nervous system (CNS). 14-17 Chronic inflammation is associated with neurodegeneration and hence could lead to volumetric fluctuations in brain tissue. 18,19 Interestingly, measures of global brain atrophy as well as longitudinal MRI findings are now used as surrogate end-points in clinical trials, 9,10,19 next to measures of focal GM and WM neurodegeneration, to complement clinical assessments in disease-modifying trials. It remains to be established, however, whether MRI-derived measurements of global brain atrophy and ventricle expansion (both reflected by increases in CSF volume) are sensitive and accurate in identifying disease-related changes in patients with SCI.

The aim of this study was to investigate the trajectory of progressive global brain atrophy (i.e., CSF/ICV ratios, volumetric CSF changes) and local brain atrophy (i.e., ventricular enlargement) over two years in the same SCI patient cohort that previously showed enduring neurodegenerative changes in the cortical GM and WM after SCI. 7 To track brain atrophy, we applied longitudinal Voxel Based Morphometry (VBM) 20,21 to serially acquired high-resolution T1-weighted MR images. We estimated the sample sizes for a six-month trial using CSF volume, which might inform the design of future clinical trials to detect a range of treatment effects with 80% statistical power.

Subjects

The longitudinal study was approved by the local ethics committee of Zurich (EK-2010-0271), and written informed consent was obtained from each subject before the examination. Fifteen patients with SCI (nine tetraplegic and six paraplegic patients; mean age 48 ± 19 years, age range 19-75 years, Table 1) and 18 healthy controls (mean age 35 ± 10 years, age range 23-65 years) underwent a series of T1-weighted three-dimensional Magnetization Prepared Rapid Acquisition Gradient Echo (3D-MPRAGE) scans at five time points over two years. The inclusion criteria were traumatic subacute SCI, no head or brain lesions, and no mental or medical disorders affecting functional outcomes. Patients underwent a comprehensive clinical assessment, including the International Standards for the Neurological Classification of Spinal Cord Injury protocol, 22 at baseline and at two months, six months, 12 months, and 24 months follow-up.

MRI measurements

The 3D-MPRAGE sequence comprised the following parameters: field of view = 224 × 256 mm², matrix size = 224 × 256, repetition time = 2420 msec, echo time = 4.18 msec, readout bandwidth = 150 Hz per pixel, 1 mm³ resolution, flip angle α = 9 degrees, inversion time = 960 msec, and total acquisition time of 9 min.
The first scan (baseline) was acquired at 49.67 (± 22) days post-injury, the second scan at two, the third scan at six, the fourth scan at 12, and the fifth scan at 24 months after injury. The images at the first four time points were acquired using a 3T Magnetom Verio (Siemens Healthcare, Erlangen, Germany); for the measurements at the fifth time point, the scanner was upgraded to a 3T Magnetom Skyra fit. We used a 16-channel radiofrequency receive head and neck coil in combination with a spine matrix coil. Trained radiographers positioned all participants in the same supine position for each scan. Image acquisition over five time points was completed successfully in 14 patients and in 18 healthy controls. One patient died of causes unrelated to SCI after the second time point. A total of 156 MRI datasets were analyzed from 33 participants. All T1-weighted 3D-MPRAGE images acquired from subjects were included in the VBM analysis. 20,21

Brain volume

Global tissue volumes for GM, WM, and CSF at each time point were calculated from the segmented T1-weighted images using unified segmentation, 23 and the total ICV was expressed as the sum of the volumes of all tissue classes. A previously established global measure of CSF volume is the CSF volume-to-ICV (CSF/ICV) ratio, calculated as the CSF volume divided by total brain volume (sum of GM, WM, and CSF) to adjust for intersubject differences in brain size. 24 The CSF/ICV ratio is used as a global atrophy marker associated with CSF volume. To assess local change of CSF volume over time, longitudinal VBM was applied within SPM12 (Wellcome Trust Centre for Neuroimaging). Diffeomorphic registration was applied to the longitudinal MRI data, and the resulting midpoint images were segmented. 20 Non-linear template generation and image normalization were performed using a geodesic shooting procedure. 25 The template was affine registered to the standard brain template from the Montreal Neurological Institute for all subsequent modeling steps. Consecutively, normalized CSF tissue segments from all subjects and time points were modulated by the Jacobian determinants encoding individual volume changes over time. Morphometric images were smoothed using Gaussian kernels of 6 mm full width at half maximum. Subsequent modeling and analysis were performed on the smoothed, normalized CSF segments within specific brain areas.

Statistical analysis

To statistically assess cross-sectional and longitudinal changes of the CSF/ICV ratio, we used pairwise comparisons for each time point and linear mixed effects models with a group indicator. To assess group differences in trajectories of local CSF volume and ventricular enlargements, we followed a conservative two-stage summary statistics approach commonly used in fMRI and longitudinal image analysis. In a first stage, we estimated individual quadratic trajectory models y(t) = b0 + b1·t + b2·t² and obtained the intercept (b0), rate of change (b1), and quadratic effect (b2), with t the time since injury, for all subjects in the sample independently. In a second stage, we used two-sample parametric t tests (for all voxels within each region of interest [ROI]), comparing the parameters across clinical groups while adjusting for age and sex as covariates of no interest. Group differences in linear (e.g., b1 < 0 indicating decline) and quadratic (e.g., b2 > 0 indicating deceleration) effects were assessed using random field theory for correction of multiple comparisons within each considered ROI.
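A minimal sketch of this two-stage procedure is given below for a single region of interest, assuming per-subject arrays of scan times (days since injury) and regional CSF volumes are available; the voxel-wise application, the age/sex covariate adjustment and the random-field correction described in the text are omitted for brevity.

```python
import numpy as np
from scipy import stats

def stage1_quadratic(times, volumes):
    """Stage 1: fit y(t) = b0 + b1*t + b2*t**2 for one subject.

    times   : array of scan times (days since injury)
    volumes : regional CSF volume at each scan
    Returns (b0, b1, b2): intercept, linear rate of change, quadratic effect.
    """
    b2, b1, b0 = np.polyfit(times, volumes, deg=2)  # highest order first
    return b0, b1, b2

def stage2_group_test(param_patients, param_controls):
    """Stage 2: two-sample t-test on one per-subject trajectory parameter.

    For ventricular volume, b2 > 0 in patients but not in controls would
    indicate an accelerating enlargement over time.
    """
    return stats.ttest_ind(param_patients, param_controls)

# Example: collect the quadratic term b2 from every subject, then compare.
# b2_patients = [stage1_quadratic(t, v)[2] for t, v in patient_data]
# b2_controls = [stage1_quadratic(t, v)[2] for t, v in control_data]
# t_stat, p_value = stage2_group_test(b2_patients, b2_controls)
```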
The associated p values were corrected for multiple comparisons using family-wise error correction, and cluster significance was tested (after applying a cluster-forming threshold of 0.001) using Gaussian random field theory. Regression models were applied to determine associations between CSF volume and ventricular expansion and clinical outcomes over the two years of follow-up. The mean age difference between patients and controls was not statistically significant (p = 0.071, Mann-Whitney U test); however, age was included as a covariate of no interest in all statistical tests. For CSF volume, we applied the six-month effect size to calculate estimates of the sample sizes necessary to detect a 100% treatment effect with 80% statistical power at the 5% significance level for differences between healthy controls and the patient group, by use of the standard formula for two-group trials assuming a baseline-adjusted comparison of means (analysis of covariance). 26 The required Pearson correlation coefficient between baseline and six-month CSF volume was estimated using the available data. The associations between MRI readouts (i.e., CSF/ICV ratio and ventricle enlargements) and clinical outcomes were investigated using regression models in SPM 12 and Stata 13.

Results

The local WM and GM changes over two years were estimated previously in these 33 participants, and the results have been presented before. 7 The mean scan intervals between time of injury and imaging at baseline, two, six, 12, and 24 months were 49.67 (standard error of the mean [SE] 5.91), 103.5 (SE 12.40), 220.36 (SE 18.69), 389.93 (SE 29.60), and 881.14 (SE 43.07) days, respectively.

Global CSF volume

At baseline, the CSF/ICV ratio was significantly increased in patients compared with controls (patients = 0.25 ± 0.05; controls = 0.21 ± 0.03, p < 0.01). The ratio remained increased cross-sectionally at each time point, but the linear slope of the rate of change was not significantly different between patients and controls over time (Fig. 1).

Local CSF volume and ventricular enlargement

At baseline, the third ventricle was enlarged (z score = 3.81, p = 0.011) in patients relative to controls while accounting for age and sex. Over 24 months, local CSF volume increased linearly (i.e., degeneration) over the bilateral sensorimotor cortices (left: z score = 4.14, p = 0.002; right: z score = 4.21, p = 0.042) and within the left supracerebellar space (z score = 4.97, p < 0.001; Fig. 2A,B,C and Table 2). Testing for potential effects of recovery in terms of deceleration and acceleration of the disease process, we found that the enlargements of the third (z score = 4.60, p = 0.017) and fourth (z score = 3.82, p = 0.006) ventricles accelerated (positive quadratic effect) in patients compared with controls over time (Fig. 2A,D,E).

Clinical outcomes correlation

Regression analyses showed no significant correlations between the MRI readouts (i.e., CSF/ICV ratio and ventricle enlargements) and clinical outcome measures. Figure 3 demonstrates the sample size requirements for six-month clinical trials that have 80% statistical power at 5% significance to detect a 30% treatment effect. For instance, in a six-month trial with 80% statistical power (at a two-sided 5% significance level) to detect a 30% change in CSF volume using the VBM model, 25 patients are required per group.
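The standard two-group, baseline-adjusted (ANCOVA) sample size formula referred to above can be sketched as follows. The inputs (treatment effect, outcome SD, baseline-to-six-month correlation) are placeholders to be replaced by the cohort's own estimates, so the example call does not reproduce the reported figure of 25 patients per arm.

```python
import math
from scipy.stats import norm

def n_per_arm(delta, sd, r, alpha=0.05, power=0.80):
    """Sample size per arm for a two-group trial with a baseline-adjusted
    (ANCOVA) comparison of six-month means.

    delta : treatment effect to detect (e.g., 30% of the mean CSF volume change)
    sd    : between-subject SD of the six-month outcome
    r     : Pearson correlation between baseline and six-month outcome
    """
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance
    z_beta = norm.ppf(power)
    n = 2 * sd**2 * (1 - r**2) * (z_alpha + z_beta) ** 2 / delta**2
    return math.ceil(n)

# Illustrative call with made-up numbers (not the study's estimates):
# print(n_per_arm(delta=0.9, sd=1.5, r=0.7))
```

Note how the baseline adjustment enters through the factor (1 − r²): the stronger the correlation between baseline and follow-up, the smaller the required sample.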
Discussion

This longitudinal study shows that progressive global brain atrophy is evident early after SCI, next to the GM and WM atrophy reported in the same cohort. At baseline, the CSF/ICV ratio was already increased, and it remained so over time without substantial fluctuations, while CSF volume over the sensorimotor cortex and within the supracerebellar space showed sustained increases. Interestingly, the rate of expansion of the third and fourth ventricles showed linear and non-linear changes (i.e., acceleration of the disease process) over time. These dynamic volumetric expansions may be indicative of the enduring neurodegenerative processes within the cortical GM and WM reported in this cohort over two years post-SCI. 7 Sample size calculations demonstrated the sensitivity of acute CSF neuroimaging biomarkers, rendering them viable candidates for scoring the effects of treatment, including rehabilitation.

Several studies 5-7 previously focused on progressive GM and WM atrophy after SCI but did not address global measures of brain atrophy such as the CSF/ICV ratio, local CSF volume, and ventricular expansion. Although non-specific to any single disease process, global measures of brain atrophy have proven sensitive in tracking disease processes and in picking up treatment-induced changes in neurodegenerative diseases, 8-10 thus rendering them viable tools in clinical trials. 10 Moreover, a previous MRI study of 123 patients with TBI showed that TBI results in the expansion of CSF spaces, particularly in the temporal horns and third ventricle, which preceded the subsequent reduction in total brain volume. 11 A gradual process of diminishing arborisation of surviving neurons, as a result of disruption of neuronal circuitry, might be one possible explanation for this observation. 11

In the current SCI cohort, next to baseline differences in the CSF/ICV ratio and enlarged third and fourth ventricles, linear increases of CSF volume were detectable in the supracerebellar space and bilaterally over the sensorimotor cortices. The increase of CSF volume over the sensorimotor cortices might reflect active neurodegeneration within the GM in the output-deprived leg area of the primary motor cortex. 7 The reported focal GM changes within the sensorimotor cortices (restricted to the leg area), and thus at the border to the CSF, 27 might, however, be due to increases in CSF volume that had been falsely assigned to GM during segmentation.

Cognitive decline, anxiety, and depression are reported to be elevated in patients with SCI compared with the normal population. 28 Crucially, patients with depression showed higher CSF volume in other neurodegenerative diseases, 29 which may also be the case in SCI. 16 Because these symptoms have been associated with brain atrophy in other diseases, 29 it seems likely that global brain atrophy in our cohort could be associated with signs of cognitive impairment. In particular, we have shown previously that the limbic system in this cohort shows signs of neurodegeneration. 5,6

We found no relationship between the global measures of CSF change and specific functional measures of recovery. This finding is in line with a previous study in Huntington disease in which no significant correlations were detected between ventricular volume changes and clinical measures. 10
Changes in imaging outcomes in response to treatment might not always be accompanied by functional improvements, and clinical function might sometimes improve in the absence of changes in imaging measures. 10 Thus, although anticipated, our results provide no evidence that the severity of trauma is related to the rate of change of the CSF measures. This is interesting because it also points to the fact that a focal CNS injury, such as SCI, might induce a cascade of secondary neurodegenerative events that progress with a distinct time profile. The conjoint analysis of CSF volume has the potential to elucidate whether the chronic inflammatory process within the CSF or within the WM and GM is the main factor driving the observed changes in our study cohort.

Based on the estimated longitudinal effect sizes of CSF volume, we recommend CSF volume as an outcome measure to power clinical trials in SCI. These MRI-based measures may afford the opportunity to assess site-specific effects of intervention, essential for the translation of trial efficacy into clinical effectiveness. 30 Hypothetical treatment effects, defined by slower longitudinal structural changes in these imaging measures, could be detectable over a realistic time scale with significantly lower sample sizes than required for traditional clinical readouts. 31 Thus, these objective outcome measures hold considerable promise for quantifying the effects of treatment. In short, quantitative MRI biomarkers of neurodegeneration represent promising instruments for the stratification of patient cohorts and the improvement of trial efficiency. 30

Limitations

A limitation of the study is the relatively small number of participants recruited. In total, however, 156 data points from patients and controls were included in the analysis, and the summary statistics used maximize efficiency because all data points were included. Hypothetical treatment effects defined by slower longitudinal structural changes in these measures would be detectable over a realistic time scale with practical sample sizes and would be useful in monitoring trauma-induced changes before, during, and after treatment. Our sample size calculations were made under several assumptions: we assume full compliance and no dropout, and do not account for possible treatment response heterogeneity, all of which might inflate within-group variance. The impact of these confounds, however, is an order of magnitude smaller than the sample sizes that we present.

Conclusion

We found that the CSF/ICV ratio, CSF volume, and ventricular enlargement rate are sensitive to neurodegenerative changes in SCI by way of group differences between patients and healthy controls. The sensitivity for scoring treatment effects speaks to their potential to serve as sensitive biomarkers in addition to local GM and WM atrophy measures. The link between inflammatory effects detectable within the CSF 32,33 and global brain atrophy should be addressed next.

Acknowledgment

We would like to thank all subjects participating in this study, who gave generously of their time, the staff of the Department of Radiology for scanning subjects, as well as Dr. Markus Hupp and Dr. Katharina Wolf at the University Hospital Balgrist for patient recruitment. This project has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No 681094, and is supported by the Swiss State Secretariat for Education, Research and Innovation (SERI) under contract number 15.0137.
Moreover, the authors receive funds from Wings for Life.

Author Disclosure Statement

No competing financial interests exist.
Analysis of concrete meso damage based on CT

CT is one of the effective means to analyze concrete meso-damage. Based on CT images acquired under uniaxial compression, this paper used Amira, a three-dimensional reconstruction software package, to segment the images, extract the pores, and count and calculate the pore volume. The paper then analyzed the correspondence between the pressure-pore volume diagram and the pressure-displacement diagram under cyclic loading. The results showed that, under uniaxial loading conditions, the change in pore volume can be used to describe the evolution of the meso pores of the concrete specimen, including compaction, initiation, extension and perforation.

Introduction

Concrete is a complex artificial composite material. Long-term practice [1-3] has shown that the macroscopic failure of concrete under external loads involves pore initiation, propagation and coalescence. Therefore, to find the relationship between concrete meso-damage and macroscopic mechanical properties, the first question is how to describe the damage process of concrete in terms of its meso-scale features. Computed tomography (CT) offers unique advantages for non-destructive detection of concrete materials and for real-time observation of changes in concrete cracks. Academician Chen Hou-qun [4] completed real-time CT observation of concrete meso-damage under uniaxial compression and obtained CT images of the whole process of initiation, propagation and coalescence of internal micro cracks in concrete; Wu Li-qin [5] took the cross-sectional CT number as a representative characteristic of material failure and described the internal failure process of concrete from the perspective of CT numbers. Liu Han-kun [6] carried out 3D reconstruction of CT images using the MIMICS software to build a three-dimensional geometric model of concrete, and used the Abaqus finite element software to complete a numerical simulation analysis, achieving good results. Liang Xinyu and Dang Fa-ning [7], based on changes of the average CT number of CT images during the concrete meso-damage process, put forward a concrete damage variable based on CT numbers, which showed that concrete specimens under uniaxial compression experience elastic compression, CT-scale crack initiation, crack propagation and coalescence, through to the macroscopic failure of the concrete.
Zhou Huo-ming, Yang Yu and others [8], using acoustic emission location technology and CT images, studied the cracking process of rock failure under uniaxial compression based on acoustic emission positioning and cross-sectional CT number variations. Yu Ai-ping, Zhao Yan-lin and Feng Yi-feng [9] studied the failure process of the bond between steel reinforcement and concrete by using an acoustic emission instrument for real-time location tracking of corrosion in pull-out tests. Liu Jing-hong [10] studied the fractal dimension of acoustic emission parameters of coal rock under uniaxial compression using fractal theory; a continuously decreasing correlation dimension of the acoustic emission energy is seen as a precursor to the instability failure of coal and rock. Raymond Lam, Li Shu-lin and others [11] proposed a probability density function for the acoustic emission count according to the acoustic emission number and the stress level of the damage variable, and fitted a linear relationship between the damage variable and the acoustic emission count. Amira [12] is a 3D modeling software system that can not only perform reconstruction from CT images but can also precisely compute the volume of each constituent material. This paper, based on the Amira software, extracted the cracks of concrete test blocks under different loading stages, counted the crack volumes, obtained the curve of pore volume against pressure, and finally discussed the concrete meso-damage process.

CT experiment and pore volume statistics

2.1 The test specimen preparation

Information on the concrete specimens used in the test is shown in Table 1.

Scanning device

The CT detection system used in the test is an ACTIS300-320/225X from the State Key Laboratory at the China University of Mining & Technology (Beijing) [13], and the image size is 500 × 500 pixels. The scanning thickness is 0.2 mm. The specimen is a cylinder with a radius of 50 mm and a height of 190 mm. The specimens were scanned in 6 stages, including the initial stage, as shown in Table 2. A schematic diagram of the experimental system is shown in Figure 1.

The result of the experiment

The specimen loading pressure-time curve is shown in Figure 2, and 998 CT images were obtained. One selected section under the different stress stages is shown in Figure 3 (CT scanning images of the failure process of the specimen under uniaxial compression). There are no obvious changes or cracks in the first stages until the peak pressure. The pore cracks extracted with the Amira software are shown in Figure 4. Concrete pores and cracks are shown in green in Figure 3, and the pore cracks appear much more clearly after extraction with the Amira software than in the original CT images. Based on the Amira software, the pore volume and the center-of-mass coordinates of the pore mass were counted at the different loading stages, as shown in Table 3 (the pore volume and center coordinates of the pore crack mass at different loading stages).

The coordinates of the concrete test block ranged from (0, 0, 0) to (125, 125, 91). From the table, the center-of-mass coordinate of the initial pores in the concrete test block was (62.4, 60.6, 43.4) when the pressure was 0 kN. Thus, we can conclude that the initial pore distribution in the concrete test block is fairly uniform. After the destruction of the concrete test block, the center-of-mass coordinate in the concrete test block was (83.
The coordinates of the concrete test block range from (0, 0, 0) to (125, 125, 91). From Table 3, the center-of-mass coordinate of the initial pores in the test block was (62.4, 60.6, 43.4) when the pressure was 0 kN; thus the initial pore distribution in the test block can be considered uniform. After the destruction of the test block, the center-of-mass coordinate was (83.5, 61.8, 40.5). Compared with the center-of-mass coordinate of the initial pores, the Y and Z coordinates almost did not change, while the change of the X coordinate was more obvious; generally speaking, the two show some correlation. The Z coordinate of the center of mass hardly changed from the beginning to the final instability failure, so it can be concluded that the failure section runs through the specimen, which agrees with the spatial shape of the crack in the test block seen in the CT images. As for the X and Y coordinates, with the increase of the external load the center-of-mass coordinates of the test block increased and expanded outward, which agrees with the observation that the two main cracks appear at the edges of the specimen. The "pore volume-stress" curve of the specimen, drawn from the pore volumes at the different stress stages in the table above, is shown in Figure 5.

Due to the high porosity of the specimen, i.e., the severity of the initial damage, at the stage of 30% of the ultimate stress there was no obvious elastic compression of the test block as a whole; instead, the volume increased slightly. Because of the severity of the initial damage, the test block finds a faster energy release path, which makes the elastic compression phase shorter than usual and accelerates the specimen into the stage of stable crack extension, in which the pore volume increases slowly. When approaching the ultimate stress, the test block enters the stage of unstable crack development, the pore volume increases suddenly, and the block finally fails by instability.

Comparing the damage localization maps at different stages with the corresponding CT images, obvious cracks can be found in the CT images only in the final failure stage. Compared with CT, the acoustic emission instrument is more sensitive in damage localization: for instance, at 23% of the ultimate pressure one main crack had already appeared, which indicates that the acoustic emission instrument can track the damage localization of the concrete earlier. At the same time, this confirms the conclusion that, because of its severe initial damage, the test block finds a faster energy release path, which shortens the elastic phase. Through a comprehensive analysis of the damage maps from the acoustic emission instrument, the load-pore volume diagram and the load-time curve, we can conclude the following:

(1) In the early stage, when the load was lower than the ultimate stress, the damage localization was disordered, the damage variable of the concrete was relatively small, and the initial pore volume increased slightly. In the cyclic load-time curve, the second loading curve deviates from the first loading curve, indicating that the block has experienced pore compression and stepped into the stage of stable crack extension in advance. Part of the damage localizations overlap with the distribution of the initial pores; the discreteness of the damage localization is related to the randomness of the initial damage.

(2) When the load passed 30% of the ultimate load, the load-time curve increased slowly. With the increase of the load, the damage localization gradually and orderly distributed around the final failure surface, and the damage variable of the concrete increased gradually. The load-pore volume curve increased slowly, from which it can be seen that the pore distribution showed no obvious changes and the center-of-mass coordinate of the test block did not change significantly; the block was in the stage of stable crack extension.

(3) When the load approaches the ultimate load of 134 kN, the loading curve shows that the stress stays almost constant while the strain increases sharply. The damage localization points increase sharply on the final failure surface, and the damage variable of the concrete shows a significant increase, as does the volume of the pore cracks. The center-of-mass coordinate of the test block changes obviously (quantified in the sketch below), and the block steps into the stage of unstable crack extension; the two main cracks then appear, connecting the failure surface through part of the original pores, and the block finally fails in an unstable way.
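The drift of the pore centre of mass noted above can be quantified directly from the two coordinates reported in Table 3; a trivial sketch (the intermediate stages are omitted since the table itself is not reproduced here):

```python
import numpy as np

# Centre-of-mass coordinates of the pore mass (from Table 3)
initial = np.array([62.4, 60.6, 43.4])   # at 0 kN
final = np.array([83.5, 61.8, 40.5])     # after failure

drift = final - initial
print("per-axis drift (X, Y, Z):", drift)            # X changes most, Z barely
print("total displacement:", np.linalg.norm(drift))  # approx 21.3
```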
Conclusions

From the analyses above, we can conclude the following. Based on CT images, the Amira 3D visualization software was used to complete pore segmentation and to count the pore volume and the center-of-mass coordinates of the concrete test block at different stress stages. The changes of the pore volume reflect that the concrete test blocks experience three main deformation stages: the elastic pore compression stage, the stage of stable crack extension, and the stage of unstable crack extension. The center-of-mass coordinates of the test block at different stages show a certain correlation with the destruction process of the test blocks. After the destruction of the test block, the center-of-mass coordinate remains close to the initial one; this provides a theoretical basis for using the pore structure of concrete in the detection of weak surfaces and in reinforcement in the future. As for whether the distribution of pores in concrete has a decisive influence on the failure of concrete under uniaxial compression, plenty of subsequent experiments are still needed.

Because of software limitations, image segmentation has no unified standard, and this paper also has the following shortcomings to be improved:

A) Automatic threshold segmentation has high requirements on the image, while manual segmentation is more subjective. The segmentation effect obtained by combining manual and automatic methods is improved, and the overall regularity is sound, but the data accuracy is affected, which is not conducive to further quantitative analysis.

B) The amount of initial cracking differs from the statistical values, for several possible reasons: measurement error; the pore distribution may also have an impact, because the statistical pore volume belongs to local blocks; and, mainly, the Amira statistics require pores to be closed volumes, so open or unclosed pores at the surface of the test block are not counted, making the pore statistics incomplete. Some micro pores were also not extracted in the process.

Figure 1: Schematic diagram of the experimental system.
Figure 2: Pressure-displacement diagram.
Figure 4: Pore cracks extracted by the Amira software.

The DS2 series full-information acoustic emission signal analyzer was used to complete the AE data acquisition over the whole loading process. The relevant acoustic emission parameter settings were: threshold value 100 dB, PDT: 150 us, HDT: 300 us, HLT: 500 us. Related technical indicators: 8 channels, 3 MHz sampling rate; data collection method: 4-channel synchronous data acquisition; RS-35C integrated front sensor; amplifier gain: 100 times.
Figure 5: AE damage location maps of the concrete specimens at the different stress stages of the loading process.
Table 1: Mix proportions of the concrete used in the test (amounts per cubic meter of concrete: cement/kg, water/kg, sand/kg, gravel/kg).
Table 2: The loads corresponding to the 6 scans of the specimen.
The intersection of algorithmically random closed sets and effective dimension

In this article, we study several aspects of the intersections of algorithmically random closed sets. First, we answer a question of Cenzer and Weber, showing that the operation of intersecting relatively random closed sets (with respect to certain underlying measures induced by Bernoulli measures on the space of codes of closed sets), which preserves randomness, can be inverted: a random closed set of the appropriate type can be obtained as the intersection of two relatively random closed sets. We then extend the Cenzer/Weber analysis to the intersection of multiple random closed sets, identifying the Bernoulli measures with respect to which the intersection of relatively random closed sets can be non-empty. We lastly apply our analysis to provide a characterization of the effective Hausdorff dimension of sequences in terms of the degree of intersectability of random closed sets that contain them.

Introduction

The goal of this article is twofold. First, we extend work of Cenzer and Weber [CW13] concerning the intersection of algorithmically random closed subsets of 2^ω to provide an analysis of multiple intersections of algorithmically random closed sets. Second, we apply our results on multiple intersections to reveal a hitherto undetected relationship between what we call a degree of intersectability of a family of random closed sets and the effective Hausdorff dimension of members of these random closed sets. In particular, we prove that for a family of random closed sets with respect to an underlying probability measure of a certain form (known as a symmetric Bernoulli measure, defined below), there is a fixed degree of intersectability of the random closed sets in this family, and this degree is inversely related to a lower bound on the effective Hausdorff dimension of members of these random closed sets. Moreover, given any sequence X ∈ 2^ω of positive effective Hausdorff dimension, any random closed set (with respect to the relevant underlying probability measure) that contains X must have a degree of intersectability that is inversely proportional to the effective dimension of X.

The study of algorithmically random closed sets was initiated by Barmpalias, Brodhead, Cenzer, Dashti, and Weber in [BBC+07]. In this study, each closed subset of 2^ω is coded as a member of 3^ω, where each value in the sequence is determined by the type of branching that occurs at each node of the underlying tree corresponding to the closed set in question (we discuss the coding mechanism in Section 2.5 below). The standard machinery of algorithmically random sequences then directly transfers over to the setting of closed sets. Subsequent work on this topic was carried out in, for instance, [BCTW11], [DKH12], and [CW13], and more recently, [Axo15], [CP16], and [Axo18].

In follow-up to their initial work, Cenzer and Weber [CW13] studied the unions and intersections of random closed sets with respect to a general family of measures on the space of closed subsets of 2^ω (we write this space as K(2^ω)). Such measures are induced by Bernoulli measures on 3^ω: for p, q such that 0 ≤ p + q ≤ 1, we define the measure µ_{p,q} to satisfy, for every σ ∈ 3^{<ω},
• µ_{p,q}(σ0) = p · µ_{p,q}(σ),
• µ_{p,q}(σ1) = q · µ_{p,q}(σ), and
• µ_{p,q}(σ2) = (1 − p − q) · µ_{p,q}(σ).
We write the measure on K(2^ω) induced by µ_{p,q} as µ*_{p,q} (where we follow the convention first laid out in [BBC+07] that if µ is a measure on codes of closed subsets of 2^ω, then µ* is the induced measure on K(2^ω)). Of the results obtained by Cenzer and Weber in [CW13], the most relevant to the present study is what we will refer to as the Intersection Theorem, which provides a full characterization, in terms of the parameters of Bernoulli measures on 3^ω, of when the associated notions of random closed sets can yield non-empty intersections:

Intersection Theorem (Cenzer/Weber [CW13]). Suppose that p, q, r, s ≥ 0, 0 ≤ p + q ≤ 1 and 0 ≤ r + s ≤ 1. Suppose that P ∈ K(2^ω) is µ*_{p,q}-random relative to Q ∈ K(2^ω) and that Q is µ*_{r,s}-random relative to P.
(1) If p + q + r + s ≥ 1 + pr + qs, then P ∩ Q = ∅.
(2) If p + q + r + s < 1 + pr + qs, then P ∩ Q may be non-empty.
(3) If p + q + r + s < 1 + pr + qs and P ∩ Q ≠ ∅, then P ∩ Q is Martin-Löf random with respect to the measure µ*_{p+r−pr, q+s−qs}.

As a corollary of the Intersection Theorem, by setting p = q = r = s (obtaining what we refer to as a symmetric Bernoulli measure on 3^ω, which we write as µ_p, with µ*_p standing for the corresponding measure on K(2^ω)), Cenzer and Weber obtain:

Corollary 1 (Cenzer/Weber [CW13]). For p ∈ (0, 1/2), let P, Q ∈ K(2^ω) be relatively µ*_p-random.
(1) If p ≥ 1 − 1/√2, then P ∩ Q = ∅.
(2) If p < 1 − 1/√2 and P ∩ Q ≠ ∅, then P ∩ Q is Martin-Löf random with respect to the measure µ*_{2p−p²}.

In our analysis, we extend the work of Cenzer and Weber on the intersection of random closed sets in two respects. First, Cenzer and Weber leave open whether a converse of the Intersection Theorem holds:

Question 2. Suppose that p, q, r, s ≥ 0 satisfy 0 ≤ p + q ≤ 1, 0 ≤ r + s ≤ 1 and p + q + r + s < 1 + pr + qs, and R is Martin-Löf random with respect to the measure µ*_{p+r−pr, q+s−qs}. Do there exist P, Q ∈ K(2^ω) such that R = P ∩ Q, P is µ*_{p,q}-Martin-Löf random and Q is µ*_{r,s}-Martin-Löf random?

Here we answer this question in the affirmative. Our result makes use of an alternative characterization of µ*_{p,q}-random closed sets in terms of Galton-Watson trees, generalizing a result of Kjos-Hanssen and Diamondstone [DKH12]. We also use an approach similar to one due to Bienvenu, Hoyrup, and Shen [BHS17], who reprove the above result of Kjos-Hanssen and Diamondstone using the machinery of layerwise computability.

The second respect in which we extend Cenzer and Weber's work on the intersection of random closed sets pertains to multiple intersections of random closed sets. Here we postpone the full statement of our result until more machinery has been developed, but the general idea is as follows. From Corollary 1, we can conclude:
(1) If p < 1 − 1/√2, then relatively µ*_p-random closed sets may have a non-empty intersection.
We extend this result by showing, for n ≥ 2, the following:
(1) If p < 1 − 1/2^{1/n}, then n mutually µ*_p-random closed sets may have a non-empty intersection.
Here, a sequence of closed sets is mutually µ*_p-random if the code for each closed set in the sequence is µ_p-random relative to the join of the codes of the remaining closed sets in the sequence. We also answer the analogue of Question 2 for the intersection of n random closed sets in the more general case that n ≥ 2.

Lastly, we apply our result on multiple intersections to obtain a new characterization of the effective dimension of members of random closed sets. To do so, we draw on work of Diamondstone and Kjos-Hanssen on the effective Hausdorff dimension of members of random closed sets.
In particular, from results of Diamondstone and Kjos-Hanssen we can immediately conclude: (1) the dimension spectrum of members of µ*_p-random closed sets is [−log(1 − p), 1]; and (2) in the case that p = 1 − 1/2^{1/n}, this dimension spectrum evaluates to [1/n, 1]. Combining these observations with our results on multiple intersections, we can show: (3) the lower bound on the dimension spectrum of a family of random closed sets is inversely proportional to an upper bound on the number of mutually random closed sets that can have a non-empty intersection; and (4) the effective dimension of a sequence is inversely proportional to the degree of intersectability of any random closed set containing it, where this degree of intersectability measures the number of mutually random closed sets of a given type that can have a non-empty intersection. (More precise statements of these results can be found in Section 5.)

The outline of the remainder of this paper is as follows. First, we cover the necessary background in Section 2. Next, Section 3 contains a proof of the converse of the Intersection Theorem (as well as a new proof of the Intersection Theorem that enables us to prove the converse). In Section 4, we turn to multiple intersections of random closed sets, establishing analogues of the Intersection Theorem and its converse for the intersection of any finite number of sufficiently random closed sets. Lastly, we conclude in Section 5 with a discussion of the relationship between effective dimension and multiple intersections of random closed sets.

2 Background

2.1 Some topological and measure-theoretic basics

As we will work with binary, ternary, and quaternary sequences in this study, we introduce the spaces of such sequences in some generality. For n ∈ ω, we write the set of all finite strings over the alphabet {0, 1, ..., n−1} as n^{<ω}. We use ǫ to stand for the empty string. Similarly, the space of all infinite sequences over the alphabet {0, 1, ..., n−1} is written n^ω. For x, y ∈ n^ω, x ⊕ y is the sequence z ∈ n^ω satisfying z(2k) = x(k) and z(2k+1) = y(k) for every k ∈ ω. We similarly define σ ⊕ τ for σ, τ ∈ n^{<ω} with |σ| = |τ|. We work with the topology on n^ω generated by the clopen sets [σ] = {x ∈ n^ω : x ≻ σ}, where σ ∈ n^{<ω} and x ≻ σ means that σ is an initial segment of x. For x ∈ n^ω and k ∈ ω, x↾k stands for the initial segment of x of length k. For σ, τ ∈ n^{<ω}, the concatenation of σ and τ is written σ⌢τ or, in some cases, στ.

We say that T ⊆ n^{<ω} is a tree if, whenever τ ∈ T and σ ⪯ τ, we have σ ∈ T. A path through a tree T ⊆ n^{<ω} is a sequence x ∈ n^ω satisfying x↾k ∈ T for every k. The set of all paths through a tree T is denoted by [T]. Recall that a set C ⊆ n^ω is closed if and only if C = [T] for some tree T ⊆ n^{<ω}. Moreover, C is non-empty if and only if T is infinite. Given a measure µ on n^ω and σ, τ ∈ n^{<ω}, the conditional measure µ(στ | σ) is defined by setting µ(στ | σ) = µ(στ)/µ(σ).

2.2 Some computability theory

We assume the reader is familiar with the basic concepts of computability theory as found, for instance, in the early chapters of [Soa16]. A Σ⁰₁ class S ⊆ n^ω is an effectively open set, i.e., an effective union of basic clopen subsets of n^ω. P ⊆ n^ω is a Π⁰₁ class if n^ω \ P is a Σ⁰₁ class. For n, m ∈ ω, a Turing functional Φ : ⊆ n^ω → m^ω is defined in terms of a computably enumerable set of pairs S_Φ ⊆ n^{<ω} × m^{<ω} with the condition that if (σ, τ), (σ′, τ′) ∈ S_Φ and σ ⪯ σ′, then either τ ⪯ τ′ or τ′ ⪯ τ.
For each σ ∈ n^{<ω}, we define Φ^σ to be the maximal string in {τ ∈ m^{<ω} : (∃σ′ ⪯ σ)((σ′, τ) ∈ S_Φ)} in the order given by ⪯. To obtain a map defined on n^ω from the c.e. set of pairs S_Φ, for each x ∈ n^ω we let Φ^x be the supremum in m^{<ω} ∪ m^ω, in the order given by ⪯, of the strings Φ^{x↾k} for k ∈ ω. We thus set dom(Φ) = {x ∈ n^ω : Φ^x ∈ m^ω}. When Φ^x ∈ m^ω, we sometimes write Φ^x as Φ(x) to emphasize the functional Φ as a map from n^ω to m^ω. It is straightforward to relativize the notion of a Turing functional Φ : ⊆ n^ω → m^ω to any oracle z ∈ 2^ω to obtain a z-computable functional.

A measure µ on n^ω is computable if µ(σ) is a computable real number, uniformly in σ ∈ n^{<ω}. Clearly, the Lebesgue measure λ on n^ω is computable. If µ is a computable measure on n^ω and Φ : ⊆ n^ω → m^ω is a Turing functional defined on a set of µ-measure one, then the pushforward measure µ_Φ, defined by µ_Φ(σ) = µ({x ∈ dom(Φ) : σ ⪯ Φ(x)}) for each σ ∈ m^{<ω}, is a computable measure.

2.3 Algorithmically random sequences

In this section, we lay out the main definitions of algorithmic randomness with which we will be working. For more details, see [Nie09], [DH10], or [SUV17]. See also [FP20] for an up-to-date survey on algorithmic randomness. Let µ be a computable measure on n^ω and let z ∈ m^ω. Recall that a µ-Martin-Löf test relative to z (or simply a µ-test relative to z) is a uniformly Σ^{0,z}₁ sequence (U_i)_{i∈ω} of subsets of n^ω with µ(U_i) ≤ 2^{−i}. Then x ∈ n^ω passes such a test (U_i)_{i∈ω} if x ∉ ⋂_i U_i, and x ∈ n^ω is µ-Martin-Löf random relative to z if x passes every µ-Martin-Löf test relative to z. The collection of µ-random sequences relative to z is denoted by MLR^z_µ. When z is computable we simply write MLR_µ and refer to x as µ-random. It is not difficult to see that if µ is a computable measure on n^ω and Φ : ⊆ n^ω → m^ω is a Turing functional that satisfies µ(dom(Φ)) = 1, then MLR_µ ⊆ dom(Φ). One of the central tools that we will use in this study is the following.

Theorem 4 ([VL90]). Let µ and ν be computable measures on 3^ω. Then for x ⊕ y ∈ 3^ω, x ⊕ y ∈ MLR_{µ⊕ν} if and only if x ∈ MLR^y_µ and y ∈ MLR_ν.

2.4 Dimensions of sequences

Originally, Lutz defined the dimension dim(x) of a sequence x ∈ n^ω using a generalized notion of a martingale, called a gale [Lut03]. This notion of dimension can also be extended to individual points in Euclidean space, and various connections between the dimensions of points and classical Hausdorff dimension have been established. For example, it was shown by Hitchcock [Hit05] that, for any set E ⊆ R^n that is a union of Π⁰₁ sets, dim_H(E) = sup_{x∈E} dim(x), where dim_H(E) is the classical Hausdorff dimension of E. Another point-wise characterization of Hausdorff dimension was proven by Lutz and Lutz [LL18] and states that, for any E ⊆ R^n, dim_H(E) = min_{A⊆N} sup_{x∈E} dim^A(x), where dim^A(x) is the dimension of the point x ∈ R^n relative to an oracle A ⊆ N.

Mayordomo showed that the dimensions of sequences can be characterized using Kolmogorov complexity [May02], which we briefly discuss below. A Turing machine U is universal if, for all Turing machines M, there exists a string σ_M ∈ 2^{<ω} such that, for all strings π ∈ 2^{<ω}, U(⟨σ_M, π⟩) = M(π), where ⟨·,·⟩ is some string pairing function, e.g., ⟨σ, τ⟩ = 0^{|σ|}1στ. Fixing such a universal machine U, the Kolmogorov complexity of a string σ ∈ 2^{<ω} is C(σ) = min{|π| : π ∈ 2^{<ω} and U(π) = σ}, the length of a shortest program for σ. There are several "flavors" of Kolmogorov complexity. The one described above is referred to as the plain Kolmogorov complexity of a string.
However, other variants exist, such as the prefix-free Kolmogorov complexity K, which restricts the domain of the Turing machines (including the universal Turing machine) to a prefix-free set. For a detailed discussion of Kolmogorov complexity, see [LV08]. The dimension of a sequence x ∈ 2^ω is defined by dim(x) = liminf_{n→∞} K(x↾n)/n. It should be noted that any variation of Kolmogorov complexity can be used in the definition of the dimension of a sequence. For every x ∈ 2^ω we have 0 ≤ dim(x) ≤ 1, and if x is algorithmically random, then dim(x) = 1. However, there exist sequences with dimension 1 that are not algorithmically random. For any α ∈ [0, 1], there exists a sequence x ∈ 2^ω with dim(x) = α.

2.5 Algorithmically random closed subsets of 2^ω

Recall that K(2^ω) is the collection of all non-empty closed subsets of 2^ω. Equivalently, these are the sets of paths through infinite binary trees. Following [BBC+07], we code infinite trees by members of 3^ω. Given x ∈ 3^ω, define a tree T_x ⊆ 2^{<ω} inductively as follows. First ǫ, the empty string, is included in T_x. Now suppose that σ is the (i+1)-st element of T_x in the length-lexicographic ordering. Then:
• σ0 ∈ T_x and σ1 ∉ T_x if x(i) = 0,
• σ0 ∉ T_x and σ1 ∈ T_x if x(i) = 1, and
• σ0 ∈ T_x and σ1 ∈ T_x if x(i) = 2.
Under this coding, T_x has no dead ends and hence is always infinite. Note that every tree without dead ends can be coded by some x ∈ 3^ω. We thus write Θ : 3^ω → K(2^ω) for the map that sends each x ∈ 3^ω to the closed set that it codes. Given a measure µ on 3^ω, we set µ* to be the measure on K(2^ω) induced by µ and Θ, i.e., µ*(𝒜) = µ(Θ^{−1}(𝒜)) for measurable 𝒜 ⊆ K(2^ω). As noted in the introduction, we are particularly interested in certain Bernoulli measures on 3^ω. For p, q ≥ 0 satisfying p + q ≤ 1, µ_{p,q} is the Bernoulli measure on 3^ω defined by setting, for each σ ∈ 3^{<ω}, µ_{p,q}(σ0) = p·µ_{p,q}(σ), µ_{p,q}(σ1) = q·µ_{p,q}(σ), and µ_{p,q}(σ2) = (1−p−q)·µ_{p,q}(σ). In the case that p = q, we write µ_{p,q} as µ_p. We refer to µ_p as a symmetric Bernoulli measure (as the probabilities of the occurrence of a single branch in the corresponding closed set are equal). Lastly, for any computable measure µ on 3^ω, we define a non-empty closed set C ∈ K(2^ω) to be µ*-Martin-Löf random if C = [T_x] for some x ∈ MLR_µ.
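As an illustration (not part of the paper), the following sketch samples a prefix of a code x ∈ 3^ω according to µ_{p,q} and decodes it level by level into the first few levels of T_x, following the branching rules above.

```python
import random

def sample_tree(p, q, depth, seed=0):
    """Sample the first `depth` levels of a tree T_x whose code x in 3^omega
    is distributed according to the Bernoulli measure mu_{p,q}:
    code 0 (prob p) -> only left child, code 1 (prob q) -> only right child,
    code 2 (prob 1-p-q) -> both children."""
    rng = random.Random(seed)
    level, tree = [""], [""]
    for _ in range(depth):
        nxt = []
        for node in level:            # nodes taken in length-lex order
            u = rng.random()
            if u < p:                 # x(i) = 0: only left child
                nxt.append(node + "0")
            elif u < p + q:           # x(i) = 1: only right child
                nxt.append(node + "1")
            else:                     # x(i) = 2: both children
                nxt += [node + "0", node + "1"]
        tree += nxt
        level = nxt
    return tree

print(sample_tree(p=1/3, q=1/3, depth=3))
```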
3 The Converse of the Intersection Theorem

In this section, we prove the converse of Theorem 1, thereby answering Question 2. To do so, we provide an alternative proof of Theorem 1 in terms of effective Galton-Watson trees, an approach introduced by Diamondstone and Kjos-Hanssen [DKH12] that yields an alternative characterization of random closed sets.

3.1 Effective Galton-Watson trees with two survival parameters

The idea behind Galton-Watson trees is straightforward: in the cases considered by Kjos-Hanssen and Diamondstone, we fix some parameter β, called the survival parameter, and we prune 2^{<ω} node by node, leaving a node σ ∈ 2^{<ω} with probability β, in which case we say that σ survives (and otherwise we remove it). Note that if σ survives, this does not guarantee that infinitely many extensions of σ will survive. As shown by Kjos-Hanssen and Diamondstone, once we have finished pruning 2^{<ω}, in the case that we do not have a finite tree, the set of infinite paths through the pruned tree forms a random closed set. Bienvenu, Hoyrup, and Shen later provided a streamlined proof of the equivalence of these approaches for the case p = 1/3 (using the machinery of layerwise computability), which can be straightforwardly generalized to arbitrary computable p. In the case of µ*_p-random closed sets for some p ∈ [0, 1/2], the corresponding Galton-Watson tree that induces the same class of random closed sets is given by using the survival parameter β = 1 − p. However, for the case that we consider here, in which the Bernoulli measures on 3^ω need not be symmetric, we need to work with two different survival parameters.

For i ∈ {0, 1}, we let β_i be the probability of survival for any string σ that ends with the bit i. As we will see, in the case of the measure µ*_{p,q}, we set β₀ = 1 − q and β₁ = 1 − p. Due to the condition that 0 ≤ p + q ≤ 1, we only consider β₀ and β₁ satisfying 1 ≤ β₀ + β₁ ≤ 2. As an alternative to representing a random closed set in terms of a code for the underlying binary tree with no dead ends, we represent such a random closed set in terms of the code for a Galton-Watson tree, an approach first used in [CP16]. Whereas the former codes are sequences in 3^ω, the latter codes are given by sequences in 4^ω, where 0s, 1s, and 2s function as they do in the original coding and a 3 at a given node indicates that the tree is dead above that node. That is, given x ∈ 4^ω, we define a tree S_x ⊆ 2^{<ω} inductively as follows. First ǫ, the empty string, is included in S_x by default. Now suppose that σ ∈ S_x is the (i+1)-st surviving node in S_x (i.e., we have yet to determine which, if any, extensions of σ are in S_x). Then:
• σ0 ∈ S_x and σ1 ∉ S_x if x(i) = 0,
• σ0 ∉ S_x and σ1 ∈ S_x if x(i) = 1,
• σ0 ∈ S_x and σ1 ∈ S_x if x(i) = 2, and
• σ0 ∉ S_x and σ1 ∉ S_x if x(i) = 3.
The above four possibilities correspond to the outcomes of a Galton-Watson tree, where for each non-empty σ ∈ 2^{<ω} we randomly remove σ from 2^{<ω}, independently of the other τ ∈ 2^{<ω} (a string ending in the bit i being retained with probability β_i). The set of infinite paths through the resulting random tree is thus a random closed set. In fact, as shown by Diamondstone and Kjos-Hanssen, if each edge is removed with probability p, the resulting distribution on the collection of closed sets is the same as the one given by the measure µ*_p, with one exception: the former distribution also includes the empty set as an atom (that is, {∅} is given positive measure by the resulting measure on K(2^ω)), as there is a non-zero probability that the process of removing edges will produce a finite tree.

We represent this process by a measure as follows. Let ν be the measure on 4^ω induced by setting, for each σ ∈ 4^{<ω}, ν(σ0) = a₀·ν(σ), ν(σ1) = a₁·ν(σ), ν(σ2) = a₂·ν(σ), and ν(σ3) = a₃·ν(σ), where a₀ = β₀(1−β₁), a₁ = (1−β₀)β₁, a₂ = β₀β₁, and a₃ = (1−β₀)(1−β₁). We refer to ν as the measure on 4^ω given by survival parameters (β₀, β₁). ν induces a measure on Tree, the space of binary trees. In this case, the probability of extending a string in a tree by only 0 is a₀, by only 1 is a₁, by both 0 and 1 is a₂, and by neither is a₃. Let us say that a tree T is GW(β₀, β₁)-random if it has a ν-Martin-Löf random code, where ν is the measure on 4^ω given by survival parameters (β₀, β₁), i.e., if there is some x ∈ MLR_ν such that T = S_x. The result relating GW(β₀, β₁)-random trees and random closed sets is the following:

Theorem 5. For β₀, β₁ ∈ (0, 1) satisfying β₀ + β₁ ≥ 1, a closed set C is the set of paths through an infinite GW(β₀, β₁)-random tree if and only if it is a µ*_{1−β₁, 1−β₀}-random closed set.

To prove Theorem 5, we adapt an argument due to Bienvenu, Hoyrup, and Shen [BHS17], who, as noted above, give an alternative proof of the Diamondstone/Kjos-Hanssen result [DKH12] using the machinery of layerwise computability (which we define shortly). First, since the process that produces a Galton-Watson tree can yield either a finite tree or an infinite tree, we need to determine the probability of each outcome.
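Before computing this probability exactly, it is worth seeing the process numerically. The sketch below (an illustration only; the depth, trial count, parameter values, and width cap are arbitrary choices) grows GW(β₀, β₁) trees level by level, each left child surviving with probability β₀ and each right child with probability β₁, and estimates the probability of surviving to a fixed depth; Lemma 6 below identifies the limiting value as (β₀ + β₁ − 1)/(β₀β₁).

```python
import random

def survives_to_depth(b0, b1, d, rng, cap=1000):
    """Grow one GW(b0, b1) tree level by level; return True if some node
    of depth d survives.  Each left child survives with probability b0,
    each right child with probability b1, independently."""
    width = 1                          # surviving nodes at the current depth
    for _ in range(d):
        if width > cap:                # with this many live lines, extinction
            return True                # is numerically negligible
        width = sum((rng.random() < b0) + (rng.random() < b1)
                    for _ in range(width))
        if width == 0:
            return False
    return True

rng = random.Random(1)
b0, b1, d, trials = 0.8, 0.7, 12, 10_000
est = sum(survives_to_depth(b0, b1, d, rng) for _ in range(trials)) / trials
print(est, (b0 + b1 - 1) / (b0 * b1))   # empirical vs. the value in Lemma 6
```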
Lemma 6. The probability that a GW(β₀, β₁)-random tree is infinite is (β₀ + β₁ − 1)/(β₀β₁).

Proof. Let r_n be the probability that a Galton-Watson tree contains a string of length n, which is clearly a non-increasing sequence. Then we have the following recurrence relation:

(†)  r_{n+1} = a₀r_n + a₁r_n + a₂(2r_n − r_n²).

The first term corresponds to the case that 0 is the only child of the root, followed by a tree of height n; the second term corresponds to the case that 1 is the only child of the root, followed by a tree of height n; and the third term corresponds to the case that both 0 and 1 survive and at least one is followed by a tree of height n. Since (r_n)_{n∈ω} is non-increasing and bounded, the limit ℓ = lim_{n→∞} r_n exists, and taking limits on both sides of (†) yields ℓ = a₀ℓ + a₁ℓ + a₂(2ℓ − ℓ²), which has solutions ℓ = 0 and ℓ = (a₂ − a₃)/a₂.

We claim that r_n > (a₂ − a₃)/a₂ for all n ∈ ω. We proceed by induction. First, r₀ = 1 (since we assume that each Galton-Watson tree at least contains the empty string). Next, assuming that r_n > (a₂ − a₃)/a₂ for some fixed n, suppose for the sake of contradiction that r_{n+1} ≤ (a₂ − a₃)/a₂. Combining this assumption with (†) above yields the inequality

a₂r_n² − (a₀ + a₁ + 2a₂)r_n + (a₂ − a₃)/a₂ ≥ 0.

Since a₀ + a₁ + 2a₂ = 1 + a₂ − a₃, the above inequality, after factoring, can be rewritten as

(a₂r_n − 1)(r_n − (a₂ − a₃)/a₂) ≥ 0.

This inequality holds precisely when both factors are positive or both factors are negative. In the former case, using the first factor, we can conclude that r_n ≥ 1/a₂ > 1 (since, by assumption, β₀, β₁ ∈ (0, 1)), which is impossible. In the latter case, using the second factor, we can conclude that r_n ≤ (a₂ − a₃)/a₂, which contradicts our original assumption about r_n. Thus it follows that r_{n+1} > (a₂ − a₃)/a₂. Since r_n > (a₂ − a₃)/a₂ for all n, it follows that ℓ = lim_{n→∞} r_n = (a₂ − a₃)/a₂. Finally, (a₂ − a₃)/a₂ = (β₀β₁ − (1−β₀)(1−β₁))/(β₀β₁) = (β₀ + β₁ − 1)/(β₀β₁).

Hereafter, let us say that a GW-tree T becomes extinct if T is finite, and that a GW-tree T becomes extinct above σ ∈ T if T only contains finitely many extensions of σ.

Lemma 7. Let β₀, β₁ ∈ (0, 1) satisfy β₀ + β₁ ≥ 1, and consider the tree obtained from an infinite GW(β₀, β₁)-random tree by removing every node above which the tree becomes extinct. In the pruned tree, each node has only a left child with probability 1 − β₁, only a right child with probability 1 − β₀, and both children with probability β₀ + β₁ − 1.

Proof. By Lemma 6, the probability ℓ that a GW(β₀, β₁)-random tree does not become extinct is ℓ = (β₀ + β₁ − 1)/(β₀β₁). (By self-similarity, this is also the probability that the tree does not become extinct above any fixed surviving node σ; indeed, 1 − (1 − β₀ℓ)(1 − β₁ℓ) = ℓ.) Thus, for any string σ ∈ 2^{<ω} in a GW(β₀, β₁)-random tree, the probability that σ has two children above both of which the tree does not become extinct is β₀ℓ · β₁ℓ = (β₀ + β₁ − 1)²/(β₀β₁). So, the probability that σ has two children in a pruned, infinite GW(β₀, β₁)-random tree is β₀β₁ℓ²/ℓ = β₀β₁ℓ = β₀ + β₁ − 1. The probability that a string σ ∈ 2^{<ω} in a GW(β₀, β₁)-random tree has only a left child above which the tree does not become extinct is β₀ℓ(1 − β₁ℓ) = (β₀ + β₁ − 1)(1 − β₁)/(β₀β₁). Therefore, the probability that σ has only a left child in a pruned, infinite GW(β₀, β₁)-random tree is β₀ℓ(1 − β₁ℓ)/ℓ = 1 − β₁. Using a similar argument, we can prove that the probability that σ has only a right child in a pruned, infinite GW(β₀, β₁)-random tree is 1 − β₀.

From Lemma 7, we see that if we take an infinite GW(β₀, β₁)-random tree and remove all of its terminal nodes, then the resulting distribution is given by the measure µ*_{1−β₁, 1−β₀}.
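The fixed-point computation in the proof of Lemma 6 is easy to check numerically by iterating the recurrence (†) directly; a brief sketch:

```python
def survival_limit(b0, b1, n_iter=200):
    """Iterate r_{n+1} = a0*r_n + a1*r_n + a2*(2*r_n - r_n**2) from r_0 = 1
    and compare with the fixed point (a2 - a3)/a2 from Lemma 6."""
    a0 = b0 * (1 - b1)          # only the left child survives
    a1 = (1 - b0) * b1          # only the right child survives
    a2 = b0 * b1                # both children survive
    a3 = (1 - b0) * (1 - b1)    # neither child survives
    r = 1.0
    for _ in range(n_iter):
        r = a0 * r + a1 * r + a2 * (2 * r - r * r)
    return r, (a2 - a3) / a2

print(survival_limit(0.8, 0.7))  # both values approx 0.892857...
```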
In order to derive Theorem 5, we need to verify that there is an effective procedure that maps a code of an infinite GW(β₀, β₁)-random tree to a µ*_{1−β₁, 1−β₀}-random closed set. In the case of a single-parameter GW-tree, this was shown by Bienvenu, Hoyrup, and Shen [BHS17] using the machinery of layerwise computability, originally defined by Hoyrup and Rojas [HR09a, HR09b]. As defined in [BHS17], for a computable measure µ, a mapping Φ is µ-layerwise computable if there is a µ-Martin-Löf test (U_i)_{i∈ω} and a Turing machine M such that, for any n ∈ ω and x ∉ U_n, M(n, x) = Φ(x) (here we think of M as being equipped with an oracle tape on which x is written and a second tape containing the input n). Intuitively, the lemma below shows that there exists a layerwise computable mapping that converts a code for an infinite tree with dead ends to a code for the same infinite tree with the dead ends removed.

Lemma 8. There exists a ν-layerwise computable mapping Φ : 4^ω → 4^ω such that, for all x ∈ 4^ω, if T_x is infinite, then Φ(x) codes the tree obtained from T_x by removing every node above which T_x becomes extinct.

Proof. For all σ ∈ 2^{<ω} and n ∈ ω, let

U^σ_n = {x ∈ 4^ω : T_x↾(|σ| + n) contains an extension of σ, but T_x becomes extinct above σ},

where T↾i is the set of all strings in T of length i ∈ ω, for any tree T. That is, U^σ_n consists of all codes x ∈ 4^ω such that a length-n extension of σ survives in T_x but T_x eventually becomes extinct above σ. Clearly, (U^σ_n)_{n∈ω} is effectively open uniformly in σ. Using the notation from Lemma 6, where r_n is the probability that a GW-tree contains a string of length n and ℓ is the probability that such a tree contains a string of every length, we have, for all σ ∈ 2^{<ω} and n ∈ ω, ν(U^σ_n) ≤ r_n − ℓ, where ν is the measure on 4^ω from the definition of a random GW(β₀, β₁)-tree. Observe that, for any σ ∈ 2^{<ω}, as n increases ν(U^σ_n) decreases and approaches zero (effectively in n). Therefore, there is a computable subsequence of indices (n_i)_{i∈ω} such that, for all i ∈ ω, ν(U^σ_{n_i}) ≤ 2^{−i}. Without loss of generality, by taking an appropriate subsequence and relabeling the indices, we can assume that ν(U^σ_n) ≤ 2^{−n} for all n ∈ ω. Thus (U^σ_n)_{n∈ω} is a ν-Martin-Löf test. Now, letting (σ_i)_{i∈ω} be the enumeration of 2^{<ω} in length-lexicographic order, for all i ∈ ω we set

V_i = ⋃_{j∈ω} U^{σ_j}_{i+j+1},

so that ν(V_i) ≤ Σ_{j∈ω} 2^{−(i+j+1)} = 2^{−i} and (V_i)_{i∈ω} is a ν-Martin-Löf test.

We now show that Φ is layerwise computable by describing a Turing machine M such that, when given i ∈ ω and x ∈ 4^ω with x ∉ V_i, M produces Φ(x) on the output tape. Since x ∉ V_i, for each j ∈ ω, if there is some τ ⪰ σ_j such that τ ∈ T_x↾(|σ_j| + i + j + 1), then for all k > i + j + 1 there is some τ ⪰ σ_j such that τ ∈ T_x↾(|σ_j| + k). In other words, if we see that T_x contains an extension of σ_j of length i + j + 1, then we can conclude that T_x does not become extinct above σ_j.

Our machine M works by adding binary strings to a set S, which is a set of strings above which our procedure will take action (defined below); the set S, defined in stages, is equal to the set of extendible nodes of T_x. First, M sets S₀ = ∅ and constructs the tree coded by x to see if T_x contains a string of length i + 1. If so, then by the discussion in the previous paragraph, T_x contains a string of every length and thus is infinite, and M places the empty string ǫ inside S₁. Otherwise, S = S₀ = ∅ and M outputs 3^∞.

If S₁ ≠ ∅, we proceed inductively as follows. For k ≥ 1, assume that M has already produced k − 1 symbols of output and suppose that σ ∈ S_k is the lexicographically least string above which we have not taken action. We describe how our procedure takes action above σ. Since σ = σ_j for some j ∈ ω, the two extensions of σ_j are σ_{2j+1} = σ_j0 and σ_{2j+2} = σ_j1 (as we are using the standard length-lexicographic ordering of 2^{<ω}). M then constructs sufficiently many levels of the tree T_x to determine whether T_x is extinct i + 2j + 2 levels above σ_j0. If not, we enumerate σ_j0 into S_{k+1}. Then M similarly determines whether T_x is extinct i + 2j + 3 levels above σ_j1; in the case that it is not, we also enumerate σ_j1 into S_{k+1}. Thus σ_j0, σ_j1, or both are added to S_{k+1}. If only σ_j0 is added to S_{k+1}, then M outputs 0. If only σ_j1 is added to S_{k+1}, then M outputs 1. If both σ_j0 and σ_j1 are added, then M outputs 2. It is straightforward to verify that Φ is the desired layerwise computable mapping.
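The heart of the machine M is the pruning step: keep exactly those nodes that are seen to have sufficiently deep extensions. A finite-depth caricature of this (ignoring the layerwise bookkeeping that makes the true map well-defined on ν-random inputs) looks as follows.

```python
def prune_dead_ends(tree, depth):
    """Finite-depth analogue of the map Phi from Lemma 8: keep only those
    nodes of `tree` (a set of binary strings, closed under prefixes) that
    have an extension of length `depth` in `tree`.  The true map must
    instead bound its search depth using the test (V_i), since being
    extendible is not decidable from a finite part of the code alone."""
    deepest = {s for s in tree if len(s) == depth}
    keep = set()
    for s in deepest:
        for k in range(depth + 1):
            keep.add(s[:k])            # retain every prefix of a deep node
    return keep

# Toy example: the subtree above "1" dies out before depth 3
tree = {"", "0", "1", "00", "01", "10", "000", "010"}
print(sorted(prune_dead_ends(tree, 3)))  # drops "1" and "10"
```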
Note that given any ν-random x ∈ 4^ω that codes a tree with no infinite paths, we have Φ(x) = 3^∞. By Lemma 6, 3^∞ is an atom of the measure induced by Φ and ν; in fact, the singleton {3^∞} is given measure 1 − (β₀ + β₁ − 1)/(β₀β₁). Moreover, by Lemma 7, the measure induced by Φ when restricted to those x ∈ 4^ω that code a tree with infinite paths (obtained by considering the range of Φ without the sequence 3^∞ and scaling the measure appropriately) is precisely the measure µ_{1−β₁, 1−β₀}. We can thus conclude the proof of Theorem 5 using the fact that both randomness preservation and no randomness from non-randomness hold for layerwise computable maps (see [HR09b, Proposition 5.3.1]): randomness preservation ensures that Φ maps an infinite GW(β₀, β₁)-random tree T to the corresponding µ*_{1−β₁, 1−β₀}-random closed set [T], and no randomness from non-randomness ensures that every µ*_{1−β₁, 1−β₀}-random closed set C is the image of some infinite GW(β₀, β₁)-random tree T under Φ with C = [T].

3.2 Intersections

We now turn to the main result of the section, which provides an affirmative answer to Question 2. Here the machinery we laid out in the previous section will prove to be useful.

Theorem 9. Suppose that p, q, r, s ≥ 0 satisfy 0 ≤ p + q ≤ 1, 0 ≤ r + s ≤ 1 and p + q + r + s < 1 + pr + qs. If R ∈ K(2^ω) is Martin-Löf random with respect to the measure µ*_{p+r−pr, q+s−qs}, then there exist P, Q ∈ K(2^ω) such that R = P ∩ Q, P is µ*_{p,q}-Martin-Löf random relative to x_Q, and Q is µ*_{r,s}-Martin-Löf random relative to x_P.

To prove Theorem 9, we will reprove part (3) of the Intersection Theorem using the lemma below, from which the converse will immediately follow by no randomness from non-randomness.

Lemma 10. There is a map Γ : 3^ω → 3^ω, layerwise computable with respect to µ_{p,q} ⊕ µ_{r,s}, such that whenever P = Θ(x_P) and Q = Θ(x_Q) satisfy x_P ⊕ x_Q ∈ MLR_{µ_{p,q}⊕µ_{r,s}} and P ∩ Q ≠ ∅, Γ(x_P ⊕ x_Q) is a µ_{p+r−pr, q+s−qs}-random code for P ∩ Q.

Proof. First we describe a total computable mapping Ψ : 3^ω → 4^ω such that, on input x = y ⊕ z, Ψ produces a code for the tree T_y ∩ T_z, which may include non-extendible nodes (and may even be finite). We define a machine M corresponding to this mapping as follows. On input y ⊕ z, M yields its output on the basis of an enumeration of T_y ∩ T_z, which we shall write as T. First, M places ǫ into T. Next, M enumerates T level by level as follows. Suppose that T has been defined for all strings of length ℓ. For each σ ∈ T of length ℓ, taken in lexicographic order, M checks to see whether σ0 and σ1 are also in T_y ∩ T_z (using the input y ⊕ z). There are four cases to consider:
• Case 1: σ0 ∈ T_y ∩ T_z and σ1 ∉ T_y ∩ T_z, in which case σ0 is placed into T and M outputs a 0.
• Case 2: σ0 ∉ T_y ∩ T_z and σ1 ∈ T_y ∩ T_z, in which case σ1 is placed into T and M outputs a 1.
• Case 3: σ0 ∈ T_y ∩ T_z and σ1 ∈ T_y ∩ T_z, in which case both σ0 and σ1 are placed into T and M outputs a 2.
• Case 4: σ0 ∉ T_y ∩ T_z and σ1 ∉ T_y ∩ T_z, in which case neither σ0 nor σ1 is placed into T and M outputs a 3.
If at any point during this process there are no new strings for M to add to T, then T_y ∩ T_z is a finite tree and M outputs an infinite sequence of 3's for its remaining output.

Lastly, let ν be the measure on 4^ω induced by intersecting a µ*_{p,q}-random closed set P with a µ*_{r,s}-random closed set Q that is random relative to P; that is, ν = (µ_{p,q} ⊕ µ_{r,s}) ∘ Ψ^{−1} (we will explicitly calculate this measure below). By Lemma 8, there exists a ν-layerwise computable mapping Φ : 4^ω → 4^ω such that, for all x ∈ 4^ω, if Φ(x) ∈ 3^ω, then T_{Φ(x)} is the tree of extendible nodes of T_x. This means that there exists a ν-Martin-Löf test (V_i)_{i∈ω} (as in the proof of Lemma 8) and a Turing machine M′ such that M′(x, i) = Φ(x) for any i ∈ ω and x ∉ V_i. We would like to compose Ψ with Φ to define Γ, but some care is needed: the test witnessing the layerwise computability of Φ is a ν-test, so we pull it back along Ψ; since Ψ is total computable and ν = (µ_{p,q} ⊕ µ_{r,s}) ∘ Ψ^{−1}, the sets Ψ^{−1}(V_i) form a (µ_{p,q} ⊕ µ_{r,s})-Martin-Löf test witnessing that Γ = Φ ∘ Ψ is (µ_{p,q} ⊕ µ_{r,s})-layerwise computable.

Observe that a left child σ0 survives in T_{x_P} ∩ T_{x_Q} if and only if it is present in both trees, which happens with probability (1−q)(1−s), and similarly a right child σ1 survives with probability (1−p)(1−r). Therefore, the survival parameters of the resulting GW(β₀, β₁)-random tree are β₀ = (1−q)(1−s) and β₁ = (1−p)(1−r). The code Γ(x_P ⊕ x_Q) ∈ 3^ω represents a pruned, infinite GW(β₀, β₁)-random tree. By Lemma 7, the probability that any node in the tree encoded by Γ(x_P ⊕ x_Q) has only a left child is 1 − β₁ = p + r − pr, only a right child is 1 − β₀ = q + s − qs, and both children is β₀ + β₁ − 1 = 1 − (p + r − pr) − (q + s − qs). Therefore, Γ(x_P ⊕ x_Q) is µ_{p+r−pr, q+s−qs}-random.
Part (3) of the Intersection Theorem follows directly from the lemma above and the fact that randomness preservation holds for layerwise computable mappings. Finally, Theorem 9 follows directly by an application of the no-randomness-from-non-randomness principle.

4 Multiple Intersections of Random Closed Sets

In the case that we are dealing with two closed sets that are random with respect to the same symmetric Bernoulli measure, i.e., p = q = r = s, the key inequality p + q + r + s < 1 + pr + qs in the Intersection Theorem becomes 4p < 1 + 2p². Since 2p² − 4p + 1 = 0 has solutions p = 1 ± 1/√2, this inequality holds for p ∈ [0, 1/2] exactly when p < 1 − 1/√2. This allows us to derive Corollary 1, which we restate here for the sake of convenience:

Corollary 1 (Cenzer/Weber [CW13]). For p ∈ [0, 1/2], let P, Q ∈ K(2^ω) be relatively µ*_p-random.
(1) If p ≥ 1 − 1/√2, then P ∩ Q = ∅.
(2) If p < 1 − 1/√2 and P ∩ Q ≠ ∅, then P ∩ Q is Martin-Löf random with respect to the measure µ*_{2p−p²}.

We would like to extend this analysis to determine which parameters p allow for the possibility that n µ*_p-random closed sets have a non-empty intersection for various choices of n ∈ ω. Here we need to be more precise: let us say that closed sets P₁, ..., P_n are mutually µ*_p-random if, setting y_i = ⊕_{j≠i} x_{P_j}, we have x_{P_i} ∈ MLR^{y_i}_{µ_p}. In order to state our result, we define a sequence of polynomials (f_n(p))_{n≥1} by setting f_n(p) = 1 − (1−p)^n for p ∈ [0, 1/2]. The desired generalization can thus be stated as follows:

Theorem 11. For p ∈ [0, 1/2] and n ≥ 2, given n mutually µ*_p-random closed sets P₁, ..., P_n, the following hold:
(1) If p ≥ 1 − 1/2^{1/n}, then ⋂_{i=1}^n P_i = ∅.
(2) If p < 1 − 1/2^{1/n}, then ⋂_{i=1}^n P_i may be non-empty.
(3) If p < 1 − 1/2^{1/n} and ⋂_{i=1}^n P_i ≠ ∅, then ⋂_{i=1}^n P_i is Martin-Löf random with respect to the measure µ*_{f_n(p)}.

In order to prove Theorem 11, we make use of several lemmas.

Lemma 12. For p ∈ [0, 1/2], the following recursive relation holds: f_{n+1}(p) = p + f_n(p) − p·f_n(p).

Proof. This follows immediately by induction on n ∈ ω.

Proof of Theorem 11. We proceed by induction on n, where the base case n = 2 is given by Corollary 1 (note that f₂(p) = 2p − p²). Suppose the result holds for a fixed n ≥ 2, and let P₁, ..., P_{n+1} be mutually µ*_p-random. Part (1) follows by applying part (1) of the Intersection Theorem (with p = q and r = s = f_n(p)) to P_{n+1} and ⋂_{i=1}^n P_i, and (2) immediately follows. To verify (3), suppose that ⋂_{i=1}^{n+1} P_i ≠ ∅. Since ⋂_{i=1}^n P_i ≠ ∅, it follows from the induction hypothesis that ⋂_{i=1}^n P_i is µ*_{f_n(p)}-random. Then, applying (3) of the Intersection Theorem to the case p = q and r = s = f_n(p), ⋂_{i=1}^{n+1} P_i is random with respect to the measure µ*_{p+f_n(p)−pf_n(p)}, which, by Lemma 12, is the measure µ*_{f_{n+1}(p)}.

For n ≥ 1, since f_n(0) = 0, f_n(1/2) = 1 − (1/2)^n, and f′_n(p) = n(1−p)^{n−1} > 0, the map f_n : [0, 1/2] → [0, 1 − (1/2)^n] is strictly increasing. Thus we can define f_n^{−1}(p) = 1 − (1−p)^{1/n} on [0, 1 − (1/2)^n].
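The polynomials f_n, their inverses, and the thresholds in Theorem 11 are simple enough to check numerically; the following sketch verifies the recursion of Lemma 12 and prints the intersectability thresholds 1 − 1/2^{1/n}.

```python
def f(n, p):
    """f_n(p) = 1 - (1-p)^n, the parameter of the intersection measure."""
    return 1 - (1 - p) ** n

def f_inv(n, p):
    """f_n^{-1}(p) = 1 - (1-p)^(1/n) on [0, 1 - (1/2)^n]."""
    return 1 - (1 - p) ** (1 / n)

p = 0.1
for n in range(2, 6):
    # Lemma 12: f_{n+1}(p) = p + f_n(p) - p*f_n(p)
    assert abs(f(n + 1, p) - (p + f(n, p) - p * f(n, p))) < 1e-12
    assert abs(f_inv(n, f(n, p)) - p) < 1e-12
    # Threshold below which n mutually random closed sets can intersect
    print(n, 1 - 2 ** (-1 / n))
```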
Using the functions f_n^{−1}, we can obtain a converse to part (3) of Theorem 11. There is, however, one additional wrinkle. We would like to prove the result by induction on the number of closed sets in the desired intersection. To do so, we need an additional hypothesis about the relative randomness of the closed sets over which we will take the intersection. To prove the unrelativized version of our result, we use a relativized version of the result in the inductive step. For z ∈ 3^ω, let us say that closed sets P₁, ..., P_n are mutually µ*_p-random relative to z if, for each i ∈ {1, ..., n}, setting y_i = ⊕_{j≠i} x_{P_j}, we have x_{P_i} ∈ MLR^{y_i⊕z}_{µ_p}. We also make use of the following consequence of van Lambalgen's theorem: if x₁, x₂, ..., x_n are mutually µ_p-random relative to z and z is µ_q-random, then z is µ_q-random relative to ⊕_{i=1}^n x_i. Lastly, we will make use of the following relativized version of Theorem 9, which follows from a direct relativization of the proof of Theorem 9:

Theorem 14. Suppose that p, q, r, s ≥ 0, 0 ≤ p + q ≤ 1 and 0 ≤ r + s ≤ 1. If R ∈ K(2^ω) is µ*_{p+r−pr, q+s−qs}-Martin-Löf random relative to z ∈ 3^ω, then there are P, Q ∈ K(2^ω) such that (i) P is µ*_{p,q}-random relative to x_Q ⊕ z, (ii) Q is µ*_{r,s}-random relative to x_P ⊕ z, and (iii) R = P ∩ Q.

Now we state our partial converse of Theorem 11:

Theorem 15. For p ∈ [0, 1/2], suppose that Q ∈ K(2^ω) is µ*_p-random relative to z ∈ 3^ω. Then for n ≥ 2, there are P₁, ..., P_n ∈ K(2^ω) that are mutually µ*_{f_n^{−1}(p)}-random relative to z such that Q = ⋂_{i=1}^n P_i.

Proof. Again we proceed by induction. For n = 2, this follows from Theorem 14. Suppose now that the result holds for a fixed n ≥ 2 and all z ∈ 3^ω. Let q = f_{n+1}^{−1}(p), so that f_{n+1}(q) = p. In particular, by Lemma 12 we have p = f_{n+1}(q) = q + f_n(q) − q·f_n(q). By Theorem 14, there are P₁ ∈ K(2^ω) and R ∈ K(2^ω) such that Q = P₁ ∩ R, P₁ is µ*_q-random relative to x_R ⊕ z, and R is µ*_{f_n(q)}-random relative to x_{P₁} ⊕ z. By the inductive hypothesis, since R is µ*_{f_n(q)}-random relative to x_{P₁} ⊕ z and f_n^{−1}(f_n(q)) = q, there are P₂, ..., P_{n+1} ∈ K(2^ω) that are mutually µ*_q-random relative to x_{P₁} ⊕ z such that R = ⋂_{i=2}^{n+1} P_i. By the consequence of van Lambalgen's theorem discussed above, x_{P₁} is µ_q-random relative to (⊕_{i=2}^{n+1} x_{P_i}) ⊕ z, and hence the sequence P₁, P₂, ..., P_{n+1} is mutually µ*_q-random relative to z. Moreover, Q = P₁ ∩ R = ⋂_{i=1}^{n+1} P_i, which yields the desired conclusion.
Effect of Different Anthocyanidin Glucosides on Lutein Uptake by Caco-2 Cells, and Their Combined Activities on Anti-Oxidation and Anti-Inflammation In Vitro and Ex Vivo

ABSTRACT The interactive effects on anti-oxidation and anti-inflammation of lutein combined with each of the six common anthocyanidin glucosides were studied in both chemical and cellular systems. The combined phytochemicals showed an antagonism in the inhibition of lipid oxidation in a liposomal membrane, but showed an additive effect on cellular antioxidant activity in Caco-2 cells. Lutein was an active lipoxygenase inhibitor at 2-12 μM while anthocyanins were inactive. To induce 50% of lipoxygenase inhibition, the concentration of lutein required in combination with anthocyanins was 25-54% higher than when lutein was used alone (IC50 = 1.2 μM). Only the combination of lutein with malvidin-3-glucoside showed anti-inflammatory synergy in the suppression of interleukin-8, and the synergy was seen at all three ratios tested. Some mixtures, however, showed anti-inflammatory antagonism. The presence of anthocyanins (5-7.5 μM) did not affect lutein uptake (2.5-5 μM) by Caco-2 cells.

Introduction

Lutein is a xanthophyll carotenoid mainly present in dark green leafy vegetables [1]. Lutein shows antioxidant and anti-inflammatory activities by targeting reactive oxygen species, and by downregulating inflammatory proteins and pro-inflammatory cytokines [2]. Lutein is one of the three xanthophyll carotenoids that can cross the blood-brain barrier and selectively accumulate in the retina and brain tissues [1,3,4]. The xanthophyll lutein is often co-ingested with other plant phytochemicals such as carotenoids and/or flavonoids in a normal human diet containing plant-based foods. Anthocyanins are one of the largest classes of flavonoids and are present abundantly in many fruits and vegetables [5]. Thus, there are chances for lutein and anthocyanins to be concurrently consumed in a meal, after which they can interact with each other during digestion and absorption and thereby affect biological activities. Water-soluble phytochemicals may interfere with the uptake of lipid-soluble bioactive compounds [6]. For instance, lutein uptake by Caco-2 cells is impaired by the flavonoid naringenin, but is not affected by (+)-catechin, a phenolic acid, or vitamin C [1]. Absorption interference between phytochemicals may result in changes in the combined biological effects of the compounds [6]. We have published a paper reporting that anthocyanins increase β-carotene uptake by Caco-2 cells to levels that trigger β-carotene's pro-oxidant activity, which results in an antagonistic cellular antioxidant effect seen
in some combinations [7]. Phytochemical interactions on cellular uptake and biological activities are often studied separately, so the mutual influences between these aspects are not well addressed. This study aimed to investigate the effect of different common anthocyanidin glucosides on lutein uptake by Caco-2 cells, and the combined effects of anthocyanins and lutein on oxidative inhibition and anti-inflammation in both chemical and cellular models.

Inhibitory Effect on Liposome Peroxidation

The percentage of thiobarbituric acid reactive substances (%TBARS) inhibition when lutein was present alone was 39%, and when anthocyanins were present alone, %TBARS inhibition was 14-43%. Lutein combined with each of the tested anthocyanins did not enhance the inhibitory effect on lipid peroxidation in the liposomal membrane. The expected additive effects of TBARS inhibition of the lutein-anthocyanin mixtures were 48-66%, but the actual effects of the mixtures were less than 35% (Figure 1). This indicates that lutein and anthocyanins showed an antagonistic interaction at the interface of the liposomal membrane. Lutein is a xanthophyll carotenoid characterized by polar groups at the two ends of its molecule. Lutein can position itself in parallel close to the polar heads of the membrane, or it can span the membrane with the polar ends anchoring to the polar lipid heads [8]. Anthocyanin compounds are normally positioned in the aqueous region of the membrane outer monolayer [9]. Such orientations of the compounds in the lipid bilayer membrane may enable them to interact and form lutein-anthocyanin adducts, which result in a reduced capability to inhibit lipid peroxidation. The formation of adducts between other carotenoids and flavonoids, for example β-carotene and green tea polyphenolic compounds [10] or β-carotene and daidzein [11], has been previously reported to impart antioxidant antagonism in liposomes.

Figure 1. Asterisk-marked columns indicate a significant difference (p < 0.05) between the observed effect of the mixture and its calculated additive effect. Calculation of the expected additive effect was based on an equation of Fuhrman et al. [12]: TBARS_A + TBARS_L − TBARS_A × TBARS_L/100 (TBARS_A and TBARS_L are the %TBARS inhibition of the anthocyanin alone and lutein alone, respectively), calculated following an equation given in [7]. TBARS: thiobarbituric acid reactive substances; LUT: lutein; CG: kuromanin chloride; DG: myrtillin chloride; MG: oenin chloride; PNG: peonidin-3-glucoside chloride; PLG: callistephin chloride; PTG: petunidin-3-glucoside chloride.
The interactive effects on anti-oxidation of lutein-anthocyanin combinations at 1:1, 1:3 and 3:1 ratios were assessed in a Caco-2 cell model. There was no synergistic or antagonistic effect seen in any of the mixtures at the tested ratios. All combinations showed additive CAA in Caco-2 cells (Table 1). Lutein and the anthocyanidin glucosides showed an antagonistic interaction in the phosphatidylcholine (PC) liposome membrane, but did not show the same interaction in the cell membrane. Different interactions between phytochemicals can be seen in different assay models [6,13]. A combination of phytochemicals may show synergy/antagonism in chemical models but not in cellular models, and vice versa. For example, the combination of raspberry and adzuki bean extracts showed antioxidant synergy in chemical assays but did not show the same effect in MCF-7 cancerous cells [14]. On the other hand, membrane lipid composition has a pronounced effect on the localization of phytochemicals and their interaction with the membrane, which may lead to changes in biological activities [15]. The interactive effect of lutein and anthocyanins in the PC liposome membrane being different from that in the Caco-2 cell membrane might be partly due to differences in the composition of the two membrane models.

Table 1 footnotes: ¹ Each value of the experimental effect is the mean CAA unit ± SD of four individual replicates. ² The expected additive effect was calculated as CAA_A + CAA_L − CAA_A × CAA_L/100 (CAA_A and CAA_L are the CAA units of an anthocyanin alone and lutein alone, respectively).

Lipoxygenase Inhibitory Activity

Lutein showed strong inhibition of LOX-1 (IC50 = 1.2 µM). None of the anthocyanins showed potent LOX-1 inhibitory activity at 2-12 µM (LOX-1 inhibition of 0.5-12.3%). They have been reported to have high LOX-1 IC50 values, for example: peonidin-3-glucoside (PNG): 38 mM, or kuromanin chloride (CG): 0.5 mM [16]. The mode of interactive effect upon LOX-1 inhibition between lutein and anthocyanins could not be determined because lutein was an active LOX-1 inhibitor at low concentrations while the anthocyanins were not. The lipoxygenase inhibitory effects of all lutein-anthocyanin mixtures were nevertheless measured to evaluate whether the presence of anthocyanins affected the LOX-1 inhibitory activity of lutein. IC50 values of the lutein-anthocyanin mixtures ranged from 3.1-3.8 µM, which were higher than that of lutein alone (IC50 = 1.2 µM) (Figure 2). This indicates that lutein combined with anthocyanins inhibited LOX-1 less effectively than lutein alone. The concentrations of lutein required for the mixtures to exhibit 50% LOX-1 inhibition were 25-54% higher than the IC50 of lutein applied alone. These results show that the presence of anthocyanins affected the LOX-1 inhibitory activity of lutein.
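The additivity benchmark used throughout is the Fuhrman et al. equation quoted in the caption and footnotes above, which is straightforward to encode. The sketch below applies it to the reported single-compound TBARS values; the comparison against the observed mixture effects is only illustrative, since the paper's synergy/antagonism calls rest on statistical testing rather than a simple cutoff.

```python
def expected_additive(effect_a, effect_l):
    """Expected additive effect (%) of two inhibitors, per Fuhrman et al. [12]:
    A + L - A*L/100, where A and L are the individual % effects."""
    return effect_a + effect_l - effect_a * effect_l / 100

# Liposome TBARS example: lutein alone 39%; anthocyanins alone 14-43%.
# Expected additive effects span roughly the reported 48-66% range,
# while the observed mixture effects were below 35%.
for antho in (14, 43):
    exp = expected_additive(antho, 39)
    print(f"expected {exp:.0f}%, observed <35% -> antagonism")
```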
Anthocyanins and carotenoids inhibit LOX-1 non-competitively [16,17] by binding to the lipoxygenase-substrate complex. The reduced LOX-1 inhibitory effect of lutein when it was present with anthocyanins might be due to interference of the anthocyanins with the binding of lutein to the lipoxygenase-substrate complex.

Secretion of Interleukin-8 (IL-8)

The % interleukin-8 secretion compared to the control when lutein (2.5, 5, 7.5 μM) was applied alone was 78%, 70% and 57%, respectively. Most of the lutein-anthocyanin mixtures effectively reduced the amount of IL-8 secreted by Caco-2 cells after TNF-α-induced inflammation. The effectiveness of suppressing IL-8 secretion when lutein was combined with CG or myrtillin chloride (DG) was lower than when it was combined with the other anthocyanins. The mixtures of lutein with oenin chloride (MG), PNG, callistephin chloride (PLG) or petunidin-3-glucoside (PTG) increasingly reduced IL-8 secretion when the ratio of lutein to anthocyanins was increased (Figure 3). The lutein-oenin chloride combination (LUT-MG) was the only combination that showed a synergistic effect on interleukin-8 suppression, and the synergy was seen at all three ratios tested. The LUT-PNG mixture showed an additive effect at all three tested ratios, and some mixtures showed an antagonistic effect, including: LUT-CG and LUT-DG at all three ratios tested; LUT-PLG at the lutein:anthocyanin ratios of 1:3 and 1:1; and LUT-PTG at the 1:1 and 3:1 ratios.
The mixtures of lutein with oenin chloride (MG), PNG, callistephin chloride (PLG) or petunidin-3-glucoside (PTG) reduced IL-8 secretion more strongly as the ratio of lutein to anthocyanins was increased (Figure 3). The lutein-oenin chloride combination (LUT-MG) was the only combination that showed a synergistic effect on interleukin-8 suppression, and the synergy was seen at all three ratios tested. The LUT-PNG mixture showed an additive effect at all three tested ratios, and some mixtures showed an antagonistic effect: LUT-CG and LUT-DG at all three ratios tested; LUT-PLG at the lutein:anthocyanin ratios of 1:3 and 1:1; and LUT-PTG at the 1:1 and 3:1 ratios.

Nitric Oxide (NO) Production

The % NO production relative to the control when lutein (2.5, 5, 7.5 µM) was applied alone was 95%, 81% and 76%, respectively. Most of the combinations of lutein with anthocyanins did not effectively inhibit the production of nitric oxide (Figure 4). Synergy was not seen in any of the mixtures. An antagonistic effect was observed in most of the combinations at the 1:1 and 3:1 ratios of lutein to anthocyanins. All mixtures showed an additive effect at the lutein:anthocyanin ratio of 1:3.

Interferences of Anthocyanins on Lutein Uptake by Caco-2 Cells

The cellular uptake of lutein (5 µM) in the presence of each of the tested anthocyanins (5 µM) was not significantly different (p > 0.05) from the lutein uptake when it was present alone (Figure 3). The same trend was observed when the ratio of anthocyanin to lutein was increased to 7.5 µM:2.5 µM (Figure 3). These results indicate that anthocyanins did not affect the uptake of lutein by Caco-2 cells. The effects of some polyphenols on lutein uptake by Caco-2 cells have been previously reported: (+)-catechin, gallic acid and caffeic acid do not affect the cellular absorption of lutein, whereas naringenin causes an impairment of lutein uptake [1].
The latter has been suggested to be a consequence of the interaction of naringenin with the membrane lipids, which influences the invagination of the lipid raft domains containing lutein receptors [1]. Anthocyanins can incorporate into the polar interface of the membrane outer monolayer [9], leading to an increase in the polarization area, which may result in a mismatch between the area of the polar heads and the area of the hydrophobic tails [18]. Consequently, the interspace between the two lipid layers can increase, giving additional freedom to the hydrocarbon chains. This effect is called membrane fluidization, and it may influence the appearance and development of lipid rafts (the so-called raft-breaking effect [18]), leading to reduced diffusion of some lipid molecules. On the other hand, membrane fluidization decreases lipid-melting temperatures, which possibly increases lipid diffusion [18]. These contradictory effects of polar flavonoids on the diffusion of lipophilic molecules are reflected in the ways anthocyanins affect the uptake of carotenoids. We previously reported that some anthocyanins (7.5 µM) increase β-carotene uptake (2.5 µM) [7]. These anthocyanin compounds, however, decreased lycopene absorption (data not shown) and did not influence lutein uptake. It appears that the interaction of anthocyanins with the cellular lipid membrane did not affect the lipid raft domains that contain lutein receptors.

The combinations of lutein with anthocyanins showed neither synergy nor antagonism in cellular antioxidant activity (CAA) in Caco-2 cells. Lutein uptake by Caco-2 cells was not significantly altered by the presence of anthocyanins. The maintained intracellular lutein content may partly explain the additive CAA seen in all of the lutein-anthocyanin combinations. The interaction between anthocyanins and carotenoids on cellular antioxidant activity thus appears to be partly related to the interference of anthocyanins with the cellular uptake of carotenoids. In a previous study, we found that some anthocyanins increase the intracellular content of β-carotene to levels at which it exerts pro-oxidant activity, which partly explains the antagonism of CAA observed in some of those mixtures [7]. In this study, the cellular uptake of lutein was not affected by the presence of anthocyanins, and the interactive cellular antioxidant effects of all tested lutein-anthocyanin mixtures were additive. The effect of anthocyanins on lutein uptake, however, was not relevant to the interactive anti-inflammatory effects: the intracellular content of lutein was not significantly changed by the presence of anthocyanins, yet some of the lutein-anthocyanin mixtures showed non-additive anti-inflammatory effects on the suppression of interleukin-8 secretion and NO production. This indicates that the combined anti-inflammatory effects of lutein and anthocyanins might not be a consequence of an uptake interaction between the compounds.

The synergistic effect of a phytochemical mixture on cellular bioactivities can result from the multi-target effects of its phytochemical components on different biomarkers (e.g., oxidative and/or defensive enzymes, inflammatory mediators, gene expression) [6,13]. The molecular mechanisms of anti-inflammatory antagonism between phytochemicals, however, have not been uncovered, and methods for predicting the expected expression of inflammatory markers resulting from the combined activity of phytochemicals are limited.

Phytochemical Stock Preparation

Stocks of anthocyanidin glucosides (1 mg/mL, in methanol) and lutein (1 mg/mL, in tetrahydrofuran) were stored at −80 °C. The concentration of lutein was checked prior to making up working solutions by measuring absorbance at 446 nm (extinction coefficient = 144,500 L·mol⁻¹·cm⁻¹) (UV-1800 series spectrometer, Shimadzu, Tokyo, Japan) [19].
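A minimal Python sketch of the Beer-Lambert calculation behind the lutein stock check just described; the absorbance reading and the 1 cm path length are illustrative assumptions, not values from this study.

EPSILON_LUTEIN = 144_500  # molar extinction coefficient at 446 nm, L/(mol*cm) [19]

def lutein_molar_concentration(a446, path_cm=1.0):
    # Beer-Lambert law: A = epsilon * c * l  =>  c = A / (epsilon * l), in mol/L
    return a446 / (EPSILON_LUTEIN * path_cm)

a446 = 0.289  # hypothetical reading of a diluted working solution
conc_um = lutein_molar_concentration(a446) * 1e6  # mol/L -> micromolar
print(f"lutein ~ {conc_um:.2f} uM")  # ~2.00 uM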
Inhibition of Liposome Peroxidation

The preparation of unilamellar liposomes (0.5 mg/mL) was based on the method of Roberts and Gordon [20] with some modifications described in our previous paper [7]. The final concentration of lutein and/or anthocyanins in liposomes was 0.25% mol/mol lipid. This concentration was selected after preliminary trials (data not shown): at concentrations higher than 0.25%, loss of lutein occurred during liposome preparation (i.e., some lutein was visibly retained on the polycarbonate membrane when the liposome suspension was passed through the membrane to form unilamellar liposomes). In addition, lutein at 0.25% mol/mol lipid has been reported to be retained at more than 80% in liposomes [21]. The liposome suspension then underwent Fe³⁺/ascorbate-induced peroxidation as described previously by Tan et al. [21]. The percentage inhibition of thiobarbituric acid reagent species (%TBARS) was calculated as:

%TBARS inhibition = [1 − (A_s − A_sb)/(A_c − A_cb)] × 100

where A_cb and A_sb are the absorbances of non-phytochemical liposomes (control) and liposomes incorporated with phytochemicals, respectively, measured at 535 nm prior to the induction of peroxidation; and A_c and A_s are the absorbances of non-phytochemical liposomes (control) and liposomes incorporated with phytochemicals, respectively, measured at 535 nm after lipid peroxidation (60 min, 37 °C).

In Vitro Anti-Inflammatory Assay: Lipoxygenase Inhibition

Lutein working solution was prepared in 0.064 mM ethylenediaminetetraacetic acid (EDTA) containing 0.54% (v/v) Tween 80. Anthocyanin working solutions were prepared in 50 mM phosphate buffer (pH 7.4). The final concentration of each phytochemical in the working solution was 0.024 mM. Trials on enzyme reaction kinetics were conducted to determine the optimum enzyme concentration (400 U/mL) for maximal enzyme activity (data not presented). The lipid oxidative reaction was started by adding an aliquot of the enzyme substrate, linoleic acid (1.25 mM) [17], into a test tube containing the phytochemicals (final concentration: 0.2-2 µM of lutein and/or 2-12 µM of anthocyanins) and lipoxygenase (400 U/mL). The inhibitory effect of lutein and/or anthocyanins on lipoxygenase activity was assayed according to the protocol of Durak et al. [22], and inhibition was calculated as:

% inhibition = [1 − (A_s − A_sb)/(A_c − A_cb)] × 100

where A_c is the absorbance of the control sample for 100% enzyme activity (no test compounds, added enzyme); A_cb is the absorbance of the control blank (to correct for background absorbance of the substrate); A_s is the absorbance of the test sample (added test compounds and enzyme); and A_sb is the absorbance of the sample blank for 0% enzyme activity (added test compounds, no enzyme, to correct for background absorbance of the test compounds).
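The two equations above were lost in extraction; the form shown is reconstructed from the stated absorbance definitions, and both share the same blank-corrected structure, illustrated in the short Python sketch below with hypothetical absorbance values.

def percent_inhibition(a_s, a_sb, a_c, a_cb):
    # Generic blank-corrected inhibition used for both %TBARS and LOX-1:
    # [1 - (A_s - A_sb) / (A_c - A_cb)] * 100
    return (1.0 - (a_s - a_sb) / (a_c - a_cb)) * 100.0

# TBARS example: the control's A535 rises by 0.40 during peroxidation, while
# the phytochemical-loaded liposomes' rises by only 0.18 -> ~55% inhibition.
print(percent_inhibition(a_s=0.25, a_sb=0.07, a_c=0.45, a_cb=0.05))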
General Cell Culture Conditions

Human Caco-2 cells (passages 45-55) were maintained in a complete growth medium containing Dulbecco's Modified Eagle Medium (DMEM, Gibco™, Life Technologies), foetal bovine serum (10%, Bovogen Biologicals, Keilor East, VIC, Australia), GlutaMax™ (1%, Gibco™, Life Technologies), non-essential amino acids (1%, Gibco™, Life Technologies) and anti-microbial agents (penicillin and streptomycin, 1%, Gibco™, Life Technologies). The cells were grown in 25 cm² Corning® flasks (Corning Inc., New York, NY, USA) in a CO₂ incubator (Touch 190S, LEEC Limited, Nottingham, UK) at 37 °C and 5% CO₂, and were routinely subcultured at 80% confluence. In every cellular experiment, the cells were seeded at 2.5 × 10⁵ cells/mL and grown for 14 days with a change of medium every day after 100% confluence. The final concentration of lutein or anthocyanins loaded onto the cells in each treatment was 2.5-7.5 µM. Tween 40 (0.1% final) was used to deliver lutein into the cells [23].

Cellular Antioxidant Assay

A cell-based assay for testing the antioxidant activity of phytochemicals was adopted from Wolfe and Liu [24] with some modifications as described in our previous paper [7]. Cellular antioxidant activity (CAA) was determined as:

CAA unit = 100 − (AUC_s/AUC_c) × 100

where AUC_s is the integrated area under the sample fluorescence versus time curve and AUC_c is the integrated area under the control fluorescence versus time curve. Fluorescence (excitation 485 nm, emission 520 nm) was measured every 5 min for 12 cycles at 37 °C.

TNF-α-Induced Inflammation

A modified protocol adopted from Peng et al. [25] was used. Caco-2 cells were seeded on 48-well plates (Corning COSTAR®, Corning Inc.) for 14 days, and subsequently treated with 200 µL lutein and/or anthocyanins (2.5-7.5 µM) at 37 °C and 5% CO₂. Human TNF-α (50 µL, 500 ng/mL) (Gibco™, Life Technologies) was added to the cells in each well to induce inflammation for 24 h. The cell supernatants were analysed for interleukin-8 (IL-8) and nitric oxide secretion. A human IL-8 ELISA kit (BD OptEIA™, BD Biosciences, San Diego, CA, USA) was used to determine IL-8 concentration, and a Griess reagent kit (Invitrogen™, Life Technologies) was used to measure total nitric oxide (as nitrite) produced by the cells. Vanadium chloride (VCl₃, 8 mg/mL in 1 M HCl) was used to convert nitrate to nitrite [26].

Cellular Uptake of Lutein

Caco-2 cells were seeded on 6-well plates (Corning COSTAR®, Corning Inc.) for 14 days and subsequently treated with 2 mL lutein and/or anthocyanins (2.5-7.5 µM) for 4 h at 37 °C and 5% CO₂. The cells were rinsed with 2 mL cold Dulbecco's Phosphate-Buffered Saline (DPBS) containing 0.1% Tween 40, followed by a wash with 2 mL pure DPBS. The cells were lysed in 3 mL of cold water for 30 min [27]. The cell lysate was used immediately for lutein extraction.

Extraction of Lutein from Cell Lysate

A modified protocol of carotenoid extraction from cell lysates adopted from Biehler et al. [27] was used. In brief, the cell lysate was mixed with hexane:ethanol:acetone (2:1:1, v/v/v, 4 mL) containing 0.1% butylated hydroxytoluene (BHT) and an aliquot of the internal standard, trans-β-apo-8′-carotenal (Sigma Aldrich, Sydney, Australia). The tubes were sonicated (2 min) and centrifuged (4000× g, 5 min). The hexane phase was collected and a secondary extraction of the cell lysate was carried out with hexane (2 mL, containing 0.1% BHT). The hexane phase was pooled, dried under nitrogen and stored at −80 °C until LC-MS analysis.
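Returning to the CAA unit defined in the Cellular Antioxidant Assay subsection above, the following is a minimal numerical sketch of the calculation; the fluorescence values are fabricated placeholders (12 readings at 5 min intervals), not data from this study.

import numpy as np

def caa_unit(sample_fluor, control_fluor, dt_min=5.0):
    # CAA unit = 100 - (AUC_s / AUC_c) * 100, with each AUC integrated over
    # the fluorescence-vs-time curve (trapezoidal rule).
    t = np.arange(len(control_fluor)) * dt_min  # 12 cycles, 5 min apart
    auc_s = np.trapz(sample_fluor, t)
    auc_c = np.trapz(control_fluor, t)
    return 100.0 - (auc_s / auc_c) * 100.0

control = [100, 180, 260, 340, 420, 500, 570, 630, 680, 720, 750, 770]
sample = [100, 130, 160, 190, 215, 240, 260, 278, 292, 304, 314, 322]
print(f"CAA = {caa_unit(sample, control):.1f} units")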
Lutein Analysis by LC-MS

Identification and quantification of lutein in the cultured cells were carried out following a method previously developed by our research group [7] on an HPLC system (Accela, Thermo Fisher Scientific Inc., Waltham, MA, USA) connected to an LTQ Orbitrap XL™ mass spectrometer (Thermo Fisher Scientific Inc.) equipped with an atmospheric pressure chemical ionization (APCI) source. Lutein extracts and standards (20 µL) were injected into a 2.1 mm × 250 mm C30 column (Acclaim™, 3 µm particle size, Thermo Fisher Scientific Inc.). The mobile phase components and gradient, as well as the MS instrumental parameters, were as given in our previous report [7]. The identification of lutein was based on its relative retention time and its accurate mass (m/z 568.43). Extracted ion current chromatograms of m/z 568.0-568.5 were plotted for the identification and quantification of lutein. Instrument control and data processing were performed using XCalibur™ software (version 2.2, Thermo Fisher Scientific Inc., San Jose, CA, USA).

Mode of Interaction Determination

The mode of interaction was determined by comparing the experimental effect of each mixture with its expected additive effect, calculated using the equation of Fuhrman et al. [12]. The mode of phytochemical interaction is defined as:
• Synergy: the experimental inhibitory activity is greater than the expected activity;
• Antagonism: the experimental inhibitory activity is less than the expected activity;
• Addition: the experimental inhibitory activity is equal to the expected activity.

Data Analysis

All chemical- and cell-based experiments were conducted at least in triplicate. One-way analysis of variance (ANOVA) and Tukey's test were performed to compare means for significant differences at p < 0.05 using Minitab (version 9.0, Minitab Inc., State College, PA, USA).

Conclusions

The combinations of lutein and anthocyanins did not show synergistic antioxidant effects in the tested chemical and cellular models. Lutein and anthocyanins (1:1, 2 µM) showed an antagonistic interaction on lipid peroxidation in a phosphatidylcholine liposome membrane. All of the combinations at the tested ratios (1:1, 1:3 and 3:1, total concentration of 10 µM), however, showed additive effects on cellular antioxidant activity in a Caco-2 cell model. The cellular uptake of lutein (2.5-5 µM) was not affected by the presence of anthocyanins (5-7.5 µM), which could partly explain the observed additive cellular antioxidant activity. Only the mixture of LUT with MG showed anti-inflammatory synergy in the suppression of interleukin-8, at all tested ratios. Some lutein-anthocyanin combinations showed antagonism in the suppression of pro-inflammatory mediators (IL-8, NO), despite the fact that lutein uptake was not affected by the presence of anthocyanins at the concentrations tested. Future studies should be designed to unravel the molecular mechanisms of the anti-inflammatory antagonism of mixed phytochemicals. An understanding of phytochemical combinations and appropriate concentrations can lead to the design of foods or supplements with better targeted functions and absorption.
The Potential Utility of Prebiotics to Modulate Alzheimer's Disease: A Review of the Evidence

The gut microbiome has recently emerged as a critical modulator of brain function, with the so-called gut-brain axis having multiple links with a variety of neurodegenerative and mental health conditions, including Alzheimer's Disease (AD). Various approaches for modulating the gut microbiome toward compositional and functional states that are consistent with improved cognitive health outcomes have been documented, including probiotics and prebiotics. While probiotics are live microorganisms that directly confer beneficial health effects, prebiotics are oligosaccharide and polysaccharide structures that can beneficially modulate the gut microbiome by enhancing the growth, survival, and/or function of gut microbes that in turn have beneficial effects on the human host. In this review, we discuss evidence showing the potential link between gut microbiome composition and AD onset or development, provide an overview of prebiotic types and their roles in altering gut microbial composition, discuss the effectiveness of prebiotics in regulating gut microbiome composition and microbially derived metabolites, and discuss the current evidence linking prebiotics with health outcomes related to AD in both animal models and human trials. Though there is a paucity of human clinical trials demonstrating the effectiveness of prebiotics in altering gut microbiome-mediated health outcomes in AD, current evidence highlights the potential of various prebiotic approaches to beneficially alter the gut microbiota or gut physiology by promoting the production of butyrate, indoles, and secondary bile acids that further regulate gut immunity and mucosal homeostasis, which are associated with beneficial effects on the central immune system and brain functionality.

Introduction

Microbiota dysbiosis, characterized as a disproportional increase or decrease in the abundance of certain bacterial strains, has been associated with multiple complications, including obesity [1], type 2 diabetes (T2DM) [2], and neurodegenerative diseases such as Alzheimer's Disease (AD) [3]. AD is the most common neurodegenerative disease, affecting about 5 million people in the U.S. and about 25 million people worldwide [4]. Only about 5-10% of AD patients present with early-onset dementia directly linked to genetic mutations that are causal for AD development [5]. The vast majority of AD patients, on the other hand, develop neurodegenerative disease due to a combination of factors including, but not limited to, apolipoprotein E genotype [6,7], the presence of metabolic syndrome and certain lifestyle factors [8], and, as recently revealed, microbiome composition [3]. AD is characterized by memory loss and a progressive loss of cognitive function involving the extracellular accumulation of pathogenic amyloid-β (Aβ) peptides that oligomerize and aggregate, forming plaques [9], and the intracellular accumulation of hyperphosphorylated tau proteins that form neurofibrillary tangles [10]. The causes of Aβ plaque formation and neurofibrillary tangles are not clear; however, chronic neuroinflammation and dysfunctional microglia have emerged as key drivers of these processes [11,12]. Notably, neuroinflammation has recently been found to be modulated by the gut microbiome via the gut-brain axis [13].
The links between microbiome composition and AD are intriguing and suggest potential ways to ameliorate or even prevent AD progression by modifying the microbiome. This could be achieved in various ways, including fecal transplant and consumption of probiotics or prebiotics. Prebiotics are oligosaccharide molecules that are non-digestible by the human host and serve as substrates for microorganisms in the gut, thus modulating the composition and/or function of gut microbes in a manner that is beneficial to the host [14,15]. In this review, we discuss the evidence linking gut microbiome composition and function with AD and its associated co-morbidities, provide an overview of prebiotic types and their effects, discuss evidence for the effectiveness of prebiotics in modulating gut microbiome composition and microbial metabolite production, and discuss the potential for prebiotics to induce a beneficial shift in the gut microbiome and modify health outcomes relevant for individuals with AD.

Links between Gut Microbiome Composition and AD and Associated Co-Morbidities

The importance of diet in modifying the gut microbiome has been emphasized through many intervention studies in humans and animal models, which have demonstrated that diet affects gut microbiota composition and diversity [16-25]. Diet composition and duration of intervention are the two most relevant diet-related factors in shaping the gut microbiome. The most well-studied dietary interventions thus far have involved the comparison of high-fat or Western diets enriched in animal-derived foods vs. lower-fat or plant-based diets (Figure 1). Across animal and human studies, the diversity and proportions of microbes have been found to be consistently altered by diets depleted vs. enriched in plant substrates. Specifically, diets depleted in non-digestible fiber and enriched in protein and fat have been consistently linked with an increase in protein- and fat-degrading bacteria belonging to the phyla Firmicutes, Proteobacteria, and Deferribacteres, and a decrease in Bacteroidetes and butyrate-producing species, which are generally known to be beneficial for human health [26-30]. Conversely, fiber-enriched diets are typically associated with increases in the abundance of species in the phylum Bacteroidetes, the genus Prevotella, and Bifidobacterium spp. [31-35]. These changes in gut microbiota composition are closely associated with host health and disease. The health effect is attributed not only to the enrichment of beneficial gut microbes but also to the production of secondary metabolites such as short chain fatty acids (SCFAs) from the degradation of non-digestible carbohydrates by specific fiber-fermenting taxa [36-38]. The presence of these taxa is associated with protection from AD and a number of associated co-morbidities, including T2DM and cardiovascular disease (CVD). In the next several paragraphs, we review the evidence linking gut microbiome alterations to AD and its associated co-morbidities.

Studies have shown a connection between the composition and diversity of gut microbes and AD (Figure 1) [3,39,40]. In a recent study, a reduction in overall gut microbiome richness, as well as decreases in Bifidobacterium and Adlercreutzia under Actinobacteria and in SMB53 (family Clostridiaceae), Dialister, Clostridium, Turicibacter, and cc115 (family Erysipelotrichaceae) under Firmicutes, were observed in AD participants [3].
On the other hand, Blautia, Phascolarctobacterium, and Gemella under Firmicutes, Bacteroides and Alistipes under Bacteroidetes, and Bilophila under Proteobacteria were increased in AD patients [3]. In addition, 13 genera were associated with cerebrospinal fluid (CSF) biomarkers of AD [3], showing that gut microbiome composition or diversity may contribute to AD development. Firmicutes and Bacteroidetes are the two dominant phyla in the human gut [41], and the Firmicutes/Bacteroidetes ratio has been associated with obesity, gut dysbiosis, and a number of diseases including diabetes and CVD. However, the use of this ratio as an assessment of the health state of the gut microbiota is controversial, as contradictory results have been reported [3,39,42-45]. Gut microbiota composition assessment metrics based on measurements at the phylum level are unlikely to be useful, since individual genera, species, and even strains within a particular phylum can play opposite roles in overall gut health, taking on different metabolic roles, producing different metabolites, and interacting with other gut microbes in different ways, such that the overall effect of all individual species of that phylum is complex (Figure 1).

Figure 1. The potential association of prebiotics, the gut, and Alzheimer's Disease (AD) in individuals with prolonged high vs. low fiber diets. Dietary fiber intake may influence gut health, the immune system, and brain function. High dietary fiber intake may help maintain a healthy gut microbiota, which is associated with increased SCFA production and mucus secretion and a decrease in pathogens. Healthy gut physiology leads to a regulated gut immune system and immune homeostasis, which positively affects the brain. Anti-inflammatory metabolites signal the brain and its central immune system, potentially contributing to a functional brain and prevention of AD onset or development. Bacterial genera shown to be less abundant in AD patients were Bifidobacterium and Adlercreutzia under Actinobacteria, and SMB53 (family Clostridiaceae), Dialister, Clostridium, Turicibacter, and cc115 (family Erysipelotrichaceae) under Firmicutes. Low dietary fiber intake may alter the gut microbiota, leading to gut dysbiosis, decreased SCFA production, and an increase in pathogens. Gut dysbiosis may compromise the gut immune system and cause inflammation in the gut. Pro-inflammatory metabolites signal the brain and its central immune system and potentially cause chronic damage to the brain, which may result in a dysfunctional brain and AD onset or development. Bacterial genera shown to be more abundant in AD patients were Blautia, Phascolarctobacterium, and Gemella under Firmicutes, Bacteroides and Alistipes under Bacteroidetes, and Bilophila under Proteobacteria.
The onset and progression of AD have been linked directly to neurodegenerative processes secondary to the deposition of Aβ plaques and the aggregation of hyperphosphorylated tau tangles [46]. Recently, the pathogenesis of AD has been further hypothesized to be triggered by amyloid fibers of bacterial origin, which induce a proinflammatory response [47]. A recent study found that amyloid-positive cognitively impaired patients had higher Escherichia/Shigella and lower Eubacterium rectale and Bacteroides fragilis abundances compared to amyloid-negative cognitively normal controls, and these compositional changes were correlated with increased production of pro-inflammatory cytokines and a reduction of anti-inflammatory cytokines [48]. In a cross-sectional study in Australian women, consumption of a "junk food" (high sugar, high fat) diet was strongly associated with Aβ deposition, whereas consumption of the Mediterranean diet was associated with higher cognitive scores than in other diet groups [49].
Interestingly, in a small study, participants with mild cognitive impairment consuming a modified Mediterranean-ketogenic diet consisting of less than 20 g/d of carbohydrate were found to have higher abundances of Enterobacteriaceae, Akkermansia, Slackia, Christensenellaceae and Erysipelotrichaceae and lower abundances of saccharolytic Bifidobacterium and Lachnobacterium compared to cognitively normal participants [50]. In a follow-up study, the low-carbohydrate modified Mediterranean-ketogenic diet was found to have a potentially beneficial effect in AD patients in preventing memory decline [51]. However, these studies were conducted in small cohorts (e.g., 17 individuals: 11 MCI patients and 6 controls); thus, the effects of low-carbohydrate, low-fiber diets, even in the context of high monounsaturated and polyunsaturated vs. saturated fat ratios such as those seen in the Mediterranean diet, need to be further investigated in larger trials.

T2DM and AD are known to share several pathophysiological features, including hyperglycemia leading to increased Aβ production, and impaired glucose transport and subsequent glucose metabolism [52]. A new potential AD biomarker, S100B, has been investigated to probe the common pathophysiology of these diseases [53]. A cross-sectional study conducted with 100 South Indian AD patients showed that elevated levels of S100B protein in serum were significantly associated with clinical dementia rating scores compared to healthy controls [54]. Serum S100B protein levels in T2DM patients were also shown to be positively correlated with cognitive function [55]. In patients with clinically diagnosed T2DM, a high-fiber diet composed of whole grains and prebiotics promoted strain-specific growth of the acetate- and butyrate-producing bacteria Faecalibacterium prausnitzii, Lachnospiraceae bacterium, and Bifidobacterium pseudocatenulatum [56]. The treatment group had improved hemoglobin A1c levels as well as increased glucagon-like peptide-1 production compared to the control group [56]. These results suggest that the high-fiber-diet-induced gut microbial alteration is correlated with improved blood glucose regulation in T2DM patients. These findings have important implications for the management of AD, given the high rates of T2DM comorbidity in AD patients.

In addition to the link with T2DM, CVD has also been linked with AD [57,58]. The occlusion of blood vessels that supply the deep brain results in silent brain infarcts [59], which are associated with lower cognitive function related to attention, memory, and language [60]. CVD may directly impair blood flow to the brain, causing cerebrovascular disease [61]. Meta-analyses of prospective cohort studies exploring the association of coronary heart disease with dementia or cognitive impairment found that coronary heart disease is associated with an increased risk of dementia or cognitive impairment [62,63]. It is well established that as much as 80% of the risk for CVD is attributable to diet and lifestyle factors [64-66]. Many human studies have demonstrated an inverse association between the consumption of dietary fiber and the incidence of CVD [67-71]. Patients with primary hypertension showed a high frequency of opportunistic pathogens such as Klebsiella spp., Streptococcus spp., and Parabacteroides merdae, whereas the SCFA producers Roseburia spp. and F. prausnitzii were abundant in healthy individuals [72].
Another study found that total and LDL-cholesterol levels were lowered after the consumption of flaxseed fiber [73]. However, although the consumption of maize-derived whole grain cereal led to increases in bifidobacteria, no significant changes were observed in serum lipids [74]. Further studies examining the roles of dietary fiber, of specific increases or decreases of gut microbes, and of their metabolites on CVD endpoints are needed. Taken together, the findings from the published literature suggest that modifying gut microbial composition and diversity toward a profile associated with healthy individuals consuming healthy diets may help attenuate AD progression. Diets and prebiotic approaches that aim to increase beneficial bacterial species found to be depleted in AD patients, such as Bifidobacterium spp., and approaches that aim to decrease the abundance of deleterious bacterial species, such as Bilophila, may be beneficial for the prevention of AD (Figure 1).

Overview of Prebiotic Types and Their Roles in Modifying Gut Microbiota

Dietary fibers, which are somewhat difficult to define, can be classified according to their solubility. Insoluble fiber, which does not dissolve in water, passes through the digestive tract and provides bulking by absorbing water. Soluble fiber, on the other hand, dissolves in water, is mostly fermented by commensal bacteria residing in the colon, and contributes to satiety [75,76]. Although this general categorization of fibers according to their solubility may be useful, insoluble fibers are fermented to a certain degree, and some soluble fibers may be non-viscous. Recently, the classification of fiber according to functionality has been gaining attention; functionality depends on the structure and fermentability of the specific dietary fiber. Thus, the types of dietary fiber and the subsequent changes in gut microbial composition, diversity, and richness are intriguing areas for further research. This is especially relevant to patients with AD, given that particular dietary fibers may modify the gut microbiome in a beneficial direction, increasing the levels of metabolites that improve cognitive function and attenuate neurotoxicity [77]. Here, we list a number of dietary fibers with known impacts on the enrichment of certain gut microbes, suggesting their potential as prebiotic supplements for AD patients (Table 1).

Cellulose and hemicellulose are major water-insoluble, non-starch polysaccharides found in plant cell walls. Cellulose degradation is known to be conducted by Ruminococcus spp. and Bacteroides spp., producing SCFAs as a byproduct [78-80]. Some species of gut microbes, including Butyrivibrio spp., Clostridium spp., and Bacteroides spp., have been observed to break down hemicellulose [81]. Lignin is also a water-insoluble, non-starch polysaccharide that constitutes plant cell walls together with cellulose and hemicellulose; however, its interaction with gut microbes is not well documented. One study has shown that lignin supports the prolonged survival of bifidobacteria in vitro [82]. Resistant starch, another water-insoluble dietary fiber, is a starch polysaccharide that is not degradable by the host's α-amylase. Resistant starch was shown to increase the ratio of Firmicutes to Bacteroidetes [92]. At the genus level, Bifidobacterium and Ruminococcus have been identified as thriving when exposed to resistant starch [83].
Fructan is a polymer of fructose units (five-membered furanose rings) and comprises several different types depending on the chemical bond. Fructo-oligosaccharide (FOS) and inulin are the major forms of fructan considered as dietary fibers and are capable of being fermented by multiple members of the gut microbiota community [93]. FOS is a short-chain oligosaccharide of fructose linked by β(2→1) glycosidic bonds. Inulin is a heterogeneous polysaccharide with β(2→1) linkage and a terminal glucose. These fructan molecules have a bifidogenic effect that enhances the relative abundance of Bifidobacterium spp. in the host gut [84-86,94]. Similarly, galacto-oligosaccharide (GOS) is a short-chain polymer of mainly galactose linked by β(1→4) bonds with a terminal glucose [95]. FOS and GOS are used commercially in infant formula to mimic the properties of human milk [96]. These oligosaccharides are important nutrients for developing the infant gut microbiome, leading to colonization by beneficial bifidobacteria [97,98]. The promotion of these gut microbiota in infants decreases the niche for pathogenic bacteria and helps to enhance gut barrier function [87,99-101]. FOS supplementation in chronically stressed mice was demonstrated to prevent intestinal barrier impairment and neuroinflammation, along with improved depression-like behavior and significant changes in the abundance of Lactobacillus reuteri [102]. FOS from Morinda officinalis were also tested in rats with AD-like symptoms and in mice with inflammatory bowel disease, showing the potential of FOS as a prebiotic that improved gut barrier integrity, alleviated neuronal degradation, downregulated AD markers, and maintained the diversity and stability of the host gut microbiome [103].

Beta-glucan is a polysaccharide of β-D-glucose units linked by glycosidic bonds. A linear, non-branched β-glucan mostly found in the bran of cereals such as oats and barley is water-soluble and consists of β-D-glucose with (1→3),(1→4)-linkages [104]. This physicochemical property of β-glucan results in increased viscosity and a thickening effect on feces, and it provides beneficial, saccharolytic gut microbes with fermentable substrate [105-107]. Consumption of high-molecular-weight β-glucan increased the proportions of Bacteroides and Prevotella [88]. Supplementation with either whole grain oats or oat bran elevated the production of SCFAs and produced a bifidogenic effect [89].

Pectin is a water-soluble dietary fiber mainly found in the skin of apples. Pectin is a component of the primary cell wall and middle lamella, which contribute to the adherence of adjacent plant cells. The structure of pectin is very complex, and pectic polysaccharides are abundant in galacturonic acids. Homogalacturonan is a polymer of galacturonic acid bonded with α-1,4-linkages, and the types of pectin vary according to their side-chain sugars [108]. These complex pectins are known to be degraded by the gut microbiota, whose diversity was found to be preserved by pectin in ulcerative colitis patients [109]. Pectins derived from apples were found to be utilized by beneficial colonic bacteria including Bifidobacterium, Lactobacillus, and Enterococcus, suggesting a prebiotic capacity of pectin [90]. Gums are commonly used as food thickeners because of their capability for gel formation and emulsion stabilization. Gum arabic, in particular, is well characterized for its solubility in water, becoming viscous depending on its concentration.
Gum arabic is a complex heteropolysaccharide mainly containing 1,3-linked β-D-galactose units with 1,6-linked β-D-galactose side chains attached to rhamnose, glucuronic acid and arabinose residues [110,111]. It is accessible to gut microbes and has the potential to increase probiotic bacteria in the human gut: at a dose of 10 g for 4 weeks, gum arabic resulted in significantly higher numbers of Bifidobacterium, Lactobacillus, and Bacteroides spp. in a human clinical trial [91]. The structural complexity of dietary fibers and the associated diversity of the gut microbes that consume them require further research. It will be important in future studies to determine the utilization of specific fibers by distinct microbiota and to demonstrate which structural traits and/or components of these fibers affect cognitive function via altering the gut microbiome.

Effectiveness of Prebiotics in Modulating Gut Microbiome Composition and Microbial Metabolite Production

The overall impact of the gut microbiome on the production of microbial metabolites and gut barrier function is summarized in Figure 2.

Figure 2. Gut barrier integrity changes and differences in signaling molecules in a healthy vs. unhealthy gut.
In a healthy gut, dietary fiber is digested by beneficial gut microbiota, which produce secondary metabolites such as SCFAs (butyrate), indoles, and secondary bile acids. Butyrate uses Gpr109a, a receptor expressed on enterocytes, to induce IL-18 production, or it may directly affect T regulatory (Treg) cells; IL-18 and Treg cells can both regulate gut immunity. SCFAs also stimulate mucus production by goblet cells, supporting a healthy mucosal barrier. Indoles are ligands for the pregnane X receptor (PXR), which acts as a transcription factor in sustaining mucosal homeostasis and regulating tight junction complexes. Secondary bile acids are ligands for the farnesoid X receptor (FXR) and can be found in both healthy and unhealthy guts; their physiological roles are unclear and may have a possible relationship with cognition. In an impaired gut barrier, the gut microbiota is dysbiotic, and byproducts such as peptidoglycan and LPS are released from opportunistic pathogens. The mucosal barrier is attenuated, bringing pathogens into closer contact with enterocytes and altering tight junction proteins. Peptidoglycan and LPS may pass through the compromised tight junctions, increasing pro-inflammatory cytokines and possibly contributing to depressive-like behavior.

The fermentation of dietary fiber or prebiotics by gut microbiota and the major metabolites of that process have been elucidated in many studies [112-115]. In particular, butyrate is the preferred energy source of apical colonocytes [116]. Furthermore, SCFAs lower the pH of the gut, suppressing the growth of pathogens [117], mediate gut immune regulation [118], and influence gut motility [119]. Thus, SCFAs act as signaling molecules that induce downstream pathways modulating the physiology, immunity, and metabolism of enterocytes. Gpr109a is a G protein-coupled receptor specifically activated by butyrate and is expressed in enterocytes, immune cells, and even microglia [120-122]. Butyrate binding to the Gpr109a receptor triggers several cellular signaling pathways (Figure 2), including those involving the colonic epithelium, macrophages, and dendritic cells. For example, Gpr109a signaling is known to promote anti-inflammatory properties by inducing IL-18 and IL-10 production, which induces differentiation of naïve T cells into T regulatory cells, thus supporting overall gut immunity by preventing colonic inflammation [123].

Neurotransmitters are another class of signaling molecules that play an important role in the gut-brain axis. Serotonin, for example, is known to be mostly released from epithelial enterochromaffin cells [124,125], and the gut microbiota play a key role in promoting serotonin synthesis by host enterochromaffin cells. SCFAs or secondary bile acids produced by gut microbes mediate serotonin production by enterochromaffin cells, which can further affect gut motility via the enteric nerve and brain serotonergic systems [126,127]. These findings suggest that certain prebiotic supplements, which stimulate the production of SCFAs and secondary bile acids by specific microbes, can improve neurological function and behavior via upregulation of serotonin [128]. Another interesting neurotransmitter that connects gut and brain function is gamma-aminobutyric acid (GABA). GABA is a crucial inhibitory neurotransmitter in the central nervous system, and alterations in GABAergic mechanisms are related to central nervous system disorders [129].
A recent study demonstrated a link between the gut microbiome (Bacteroides spp.) and GABA production, a response negatively correlated with depression [130]. Fecal microbiota from healthy controls and schizophrenia patients were compared, and each was transplanted into germ-free mice; the gut microbial dysbiosis seen in schizophrenia was related to changes in the GABA cycle, which in turn may affect neurobehavioral status, such as schizophrenia-relevant behaviors [131]. The production of neurotransmitters, particularly serotonin and GABA, has been distinctly linked with the Bifidobacterium and Lactobacillus genera [132]. These findings highlight the potential role of prebiotics that promote these specific microbes, because their presence has been linked with decreased gut dysbiosis and the production of functional neurotransmitters, which may contribute to enhancing enteric health and attenuating AD-related neurobehavioral disorders.

In addition to neurotransmitters, prebiotics may also play an important role in regulating cytokine expression. Soluble fiber (pectin) treatment in mice resulted in faster recovery from endotoxin-induced sickness behaviors, along with changes in the concentrations of cytokines including IL-1RA, IL-4, IL-1β and TNF-α in the brain [133]. The pectin-supplemented mice also had increased concentrations of cecal acetate, propionate, and butyrate as byproducts of pectin fermentation, which were associated with increased gastrointestinal IL-4 [133]. These findings suggest that soluble fiber affects not only the gastrointestinal tract and peripheral immune system but also neuroimmune function. In another study in adult and aged mice, a high-fiber diet with inulin led to increased cecal SCFA production, including butyrate and acetate [134]. A reduction in inflammatory infiltrate was observed in the aged mice on the high-fiber diet, and the researchers specifically showed that sodium butyrate had anti-inflammatory effects on the microglial profile, lowering inflammatory gene expression [134]. These data suggest that butyrate produced from prebiotic fermentation may be a potent modulator of gut immune function and may be directly linked to microglial function in the brain.

Gut microbiota-derived metabolites such as SCFAs and indole are critical for sustaining intestinal barrier function (Figure 2). Acetate and butyrate, for example, improve goblet cell differentiation and stimulate mucus production by goblet cells to maintain a healthy mucosal barrier [135]. Mice fed a low-fiber Western-style diet were found to have a defect in mucin production, which was prevented by supplementation with a synbiotic of Bifidobacterium longum and inulin [136], suggesting that when SCFA-producing microbes are present in the gut along with a preferred substrate, the net effect is enhanced mucosal barrier function. In addition to decreasing fiber-fermenting microbes and thus SCFA production, a diet deficient in fiber can also promote the enrichment of mucus-degrading gut microbes such as Akkermansia muciniphila [137]. Bifidobacterium bifidum, which has the ability to degrade mucin [138], may protect against thinning of the mucus layer by inhibiting Akkermansia muciniphila, as was shown in mice with omeprazole-induced small intestine injury [139]. Paradoxically, the presence of Akkermansia muciniphila has been linked with beneficial health effects [140-143], as well as with negative health effects in individuals with certain health conditions [144,145].
The roles of specific microbes and their metabolites in the maintenance vs. degradation of the mucosal barrier are context-specific and require further study. Prebiotics may be a useful strategy to prevent mucus degradation by supporting the growth of SCFA-producing microbes, thereby increasing mucin production, and by sustaining the homeostasis of mucolytic vs. non-mucolytic bacteria in the gut.

Butyrate is known to regulate the expression of tight junction protein complexes [146]. Sodium butyrate was shown to increase Claudin-1 expression and to induce redistribution of ZO-1 and Occludin in vitro [147]. Butyrate treatment accelerated the assembly of tight junctions by reorganizing the tight junction proteins in a Caco-2 cell monolayer model [148]. No studies have demonstrated a direct effect of gut-derived butyrate on the tight junctions of the endothelial cells that form the blood-brain barrier; however, the beneficial effect of butyrate on barrier function in the gut epithelium raises the question of whether a similar benefit may also be found in endothelial cells. A link between butyrate and brain function has also been suggested: Bourassa et al. hypothesized that butyrate could be used as an important alternative energy substrate in the Alzheimer's brain, where glucose utilization has been found to be reduced [149-151].

Indoles are a class of molecules produced by gut microbes that have the potential to affect gut and brain function. In a germ-free mouse model, oral administration of indole led to up-regulation of tight and adherens junction-associated molecules in the epithelial cells of the colon [152]. Indole-3-propionic acid acts as a ligand for the pregnane X receptor and increased the expression of junctional protein-coding mRNAs while decreasing TNF-α in a mouse model [153]. The effect of indole-3-propionic acid was also tested in a Caco-2/HT29 coculture model and showed an increase in tight junction proteins, mucins, and goblet cell secretion products [154]. However, the role of indole and its derivatives in the gut-brain axis is controversial [155,156]. Some studies have demonstrated potent neuroprotective properties of indoles, which cross the blood-brain barrier and protect the brain from oxidative stress [157] as well as prevent electron leakage from neuronal mitochondria [158,159]. Other studies report that excessive production of indole by gut microbes may negatively affect emotional behavior in rats due to the neurodepressive properties of the oxidized indole derivatives oxindole and isatin [160]. Indoxyl sulphate, an oxidized and sulphated form of indole produced in the liver, may reduce the efflux of neurotransmitters through the organic anion transporter 3, causing accumulation of metabolites [161,162]. Thus, the effects of indoles on gut barrier and brain function require further study, as the variety of indole metabolites produced by gut microbes and their co-metabolism by the host generate a complex suite of molecules with differential effects.

Bile acids are a category of metabolite modulated by gut microbial metabolism that may also affect the gut-brain axis. Bile acids are produced in hepatocytes and play a critical role in fat digestion and absorption. Most (95%) bile acids are recycled back to the liver via enterohepatic recirculation after reaching the terminal ileum.
However, bile acids that are not recycled are excreted in feces or may be metabolized by the colonic microbiota, forming secondary bile acids via a series of microbial enzyme activities including deconjugation and 7α-dehydroxylation [163]. Thus, secondary bile acids are gut microbe-derived metabolites that may further regulate bile acid signaling in the host, affecting the activation of the enteroendocrine bile acid receptor, farnesoid X receptor (Figure 2) [164]. Several papers have shown a connection between bile acid metabolism and AD. In AD patients, significantly lower serum concentrations of a primary bile acid (cholic acid) and increased concentrations of a secondary bile acid (deoxycholic acid) were observed compared to cognitively normal older adults [165]. An increased deoxycholic acid-to-cholic acid ratio is known to be strongly associated with cognitive decline [166]. The ratio of primary to secondary bile acids was positively correlated with the abundance of Bifidobacterium in a human clinical trial [167]. Recently, alterations in bile acid profiles were shown to be associated with cognitive decline and AD-related genetic variants [165].

There are likely hundreds, if not thousands, of microbially produced molecules that play important roles in host health. Among these, butyrate, indoles, and bile acids are, to date, the most well studied, and their roles in gut health and brain function, and their specific roles in the pathophysiology of AD, are starting to emerge. As we gain knowledge of both the short-term and long-term effects of diet on the brain mediated by the gut microbiome, it will be important to establish a dossier of evidence of the benefit of specific prebiotics for the pathophysiology of AD. In the following section, we discuss potential prebiotic approaches for supplementing AD patients.

Current Evidence for Effectiveness of Prebiotics in AD Animal Models and Human Trials

The effectiveness of prebiotics for the treatment of AD will ultimately need to be evaluated on the basis of their ability to improve or prevent cognitive decline. However, other symptoms of AD related to behavioral and emotional changes are also viable targets of prebiotic intervention studies in AD patients. The current literature showing the potential effects of prebiotics on cognitive function in both animal models and human studies mainly focuses on the effects of fructans (both as oligosaccharides and as inulin), β-glucan from yeast or the bran of cereals, plant polysaccharides, and polysaccharides synthesized from sugars. This evidence is summarized below.

Animal Models

Animal models, particularly mice, have been used in several studies to evaluate the effect of prebiotics on AD, owing to their suitability for controlled interventions and ease of sampling. In this section, animal studies on the administration of prebiotics that led to improvement in AD-associated brain disorders are summarized. Bimuno-GOS intake in pregnant mice affected the offspring's exploratory behavior and brain gene expression and reduced anxiety [168]. Additionally, fecal butyrate and propionate levels were increased after Bimuno-GOS supplementation in postnatal mice [168]. In another study, behavioral testing was performed on mice, ordered from the least stressful (three-chamber test) to the most stressful (forced swim test), for 5 weeks during a 10-week prebiotic administration period including lead-in and lead-out periods [169].
The prebiotic treatment with a FOS+GOS combination resulted in a reduction of stress-related (depression- and anxiety-like) behaviors and reversed the effects of chronic stress (elevations in corticosterone and proinflammatory cytokine levels) in the supplemented mice compared to control mice with no prebiotic treatment [169]. In a rat model in which high-fat-diet-induced obesity caused oxidative stress, mitochondrial dysfunction, and cognitive decline in the brain, these outcomes were improved and cognitive function was restored by 12-week supplementation with either a prebiotic (xylo-oligosaccharide), a probiotic (Lactobacillus paracasei HII01), or the combined treatment, with similar efficacy [170]. The effectiveness of mannan-oligosaccharide was tested in a 5x familial AD (5xFAD) transgenic mouse model [171]. The treatment with mannan-oligosaccharide reduced Aβ accumulation in the brain and suppressed neuroinflammatory responses [171]. Mannan-oligosaccharide not only improved cognitive and behavioral disorders but also improved gut barrier integrity by reshaping the composition of the gut microbiota, specifically increasing the relative abundance of Lactobacillus and decreasing that of Helicobacter [171]. Importantly, the observed changes in gut microbiota composition and butyrate production were negatively correlated with oxidative stress in the brain and behavioral deficits [171].

Human Trials

Studies on the effects of prebiotic supplementation directly on cognitive and behavioral outcomes in Alzheimer's patients are currently lacking. However, a few human intervention studies have been conducted to test the effectiveness of certain prebiotics, alone or with probiotics, in improving symptoms associated with AD such as behavioral, mood, memory, anxiety, and cognitive disorders. Fructan- and GOS-based prebiotics show promising and consistent results in clinical trials in decreasing anxiety and improving cognitive and behavioral outcomes. The prebiotic Bimuno-GOS improved antisocial behaviors in autistic children [172]. Trans-GOS stimulated bifidobacteria in the gut of irritable bowel syndrome patients and lowered anxiety [173]. Short-chain FOS enhanced fecal bifidobacteria and reduced anxiety scores [174]. Inulin in healthy participants resulted in better recognition and improved recall [175]. In obese patients adhering to calorie restriction for 3 months, supplementation with 16 g/d of inulin had a moderate impact on mood and cognition, with responders who experienced an increase in Coprococcus and Bifidobacterium having stronger benefits than non-responders [176]. Importantly, in most of these intervention studies, subjects supplemented with fructan or GOS prebiotics showed increases in bifidobacteria along with improvement in their symptoms. Many studies have already reported the connection between increases in bifidobacteria and beneficial health outcomes (Table 1). Indeed, the growth of bifidobacteria is selectively stimulated by fructans [177]. Administration of the Bifidobacterium longum 1714 strain to healthy mice conferred stress resistance and pro-cognitive effects [178,179]. The same Bifidobacterium strain from this preclinical study was associated with reduced stress and improved memory in healthy volunteers [180]. The results from these studies suggest a strong connection between prebiotics, the gut microbiome, particularly bifidobacteria, and brain function.
Other studies provide supporting evidence that prebiotics modulate brain function in a manner that would be consistent with desired improvements in symptoms of AD but were not necessarily linked with, or did not examine, gut microbiome composition. Beta-glucans from yeasts, plants, or cereals have been shown to have beneficial effects on the profile of mood state in healthy individuals [181,182]. Plant polysaccharides, which mainly consist of non-starch polysaccharides found in foods, were shown to improve recognition and memory performance in healthy adults [183,184]. Polydextrose, a synthesized prebiotic, was supplemented in healthy females and showed moderate improvement in cognition as well as a significant change in the abundance of Ruminiclostridium 5 compared to the placebo group [185]. Other studies have found that 30-60 mL of lactulose for 3 months improved cognitive function and health-related quality of life in patients with minimal hepatic encephalopathy [186].

Concluding Remarks

Although human clinical studies examining the effects of specific prebiotics on gut microbiome-mediated cognitive health outcomes in AD patients are lacking, there is mounting evidence that prebiotics have the potential to be a viable approach for ameliorating symptoms associated with AD. Promoting the growth and activity of beneficial, SCFA-producing microbes such as bifidobacteria is emerging as a clear therapeutic target for improving gut barrier function, decreasing inflammation, and improving cognitive and behavioral outcomes. A variety of prebiotic types, particularly fructans, have been found to be effective in modulating gut microbiome composition and microbial metabolite production, and in modifying health outcomes relevant for individuals with AD. More research is needed to determine which prebiotics, at what dosages, and in which context (e.g., on what dietary background, in combination with specific probiotics, at what frequency, etc.) are the most effective for not only decreasing AD-associated symptoms such as anxiety and depression, but also potentially improving cognition or preventing the loss of cognitive function in individuals at risk for AD. Further mechanistic research to determine how changes in the gut microbiome related to prebiotic supplementation alter neuroinflammatory signaling is also needed so that targeted, effective, potentially personalized therapies can be developed to treat and prevent the progression of neurodegenerative processes in AD.
Speed and energy optimized quasi-delay-insensitive block carry lookahead adder

We present a new asynchronous quasi-delay-insensitive (QDI) block carry lookahead adder with redundant carry (BCLARC) realized using delay-insensitive dual-rail data encoding and 4-phase return-to-zero (RTZ) and 4-phase return-to-one (RTO) handshaking. The proposed QDI BCLARC is found to be faster and more energy-efficient than the existing asynchronous adders, which are QDI and non-QDI (i.e., relative-timed). Compared to existing asynchronous adders corresponding to various architectures such as the ripple carry adder (RCA), the conventional carry lookahead adder (CCLA), the carry select adder (CSLA), the BCLARC, and the hybrid BCLARC-RCA, the proposed BCLARC is found to be faster and more energy-optimized. The cycle time (CT), which is expressed as the sum of the worst-case times taken for processing the data and the spacer, governs the speed. The product of average power dissipation and CT viz. the power-cycle time product (PCTP) defines the low power/energy efficiency. For a 32-bit addition, the proposed QDI BCLARC achieves the following reductions in design metrics on average over its counterparts when considering RTZ and RTO handshaking: i) 20.5% and 19.6% reductions in CT and PCTP respectively compared to an optimum QDI early output RCA, ii) 16.5% and 15.8% reductions in CT and PCTP respectively compared to an optimum relative-timed RCA, iii) 32.9% and 35.9% reductions in CT and PCTP respectively compared to an optimum uniform input-partitioned QDI early output CSLA, iv) 47.5% and 47.2% reductions in CT and PCTP respectively compared to an optimum QDI early output CCLA, v) 14.2% and 27.3% reductions in CT and PCTP respectively compared to an optimum QDI early output BCLARC, and vi) 12.2% and 11.6% reductions in CT and PCTP respectively compared to an optimum QDI early output hybrid BCLARC-RCA. The adders were implemented using a 32/28nm CMOS technology.

Introduction

The 2017 edition of the International Roadmap for Devices and Systems [1] suggests that asynchronous design could be a potential solution to address the increasing power/energy consumption of digital circuits/systems. Substantiating this, in [2], a 128-point, 16-bit, radix-8 fast Fourier transform (FFT) processor was implemented in the robust QDI asynchronous design style and compared with a conventional synchronous FFT processor implementation, both realized using a 65nm CMOS technology. It was noted that the QDI FFT processor is 34× more energy-efficient than its synchronous equivalent. The QDI design style is a promising alternative to the synchronous design style, and different types of QDI implementations exist. QDI circuits are known to be robust to process, voltage, timing and temperature variations [3,4], which is important to note since the issue of variability [5] is quite common in the nanoelectronics era. Moreover, QDI circuits are less affected by electromagnetic interference compared to synchronous circuits [6]. These properties make QDI circuits preferable for secure applications [7,8]. Further, QDI circuits and systems are modular [9], and hence they are convenient to reuse or replace, thus obviating the need for extensive timing re-runs and analysis. Furthermore, QDI circuits are naturally elastic [10], unlike synchronous circuits, and they are suitable for subthreshold operation [11].
A QDI circuit is the practically realizable delay-insensitive circuit, which incorporates the weakest compromise, namely the isochronic fork assumption [12]. The isochronic fork assumption implies that all the wires branching out from a node/junction would experience concurrent rising and falling signal transitions. Usually, the isochronic fork assumption is confined to a small circuit area, and hence its realization would not be difficult. It has been shown in [13] that QDI circuits are realizable in the nano-electronics regime. Addition is a fundamental operation in computer arithmetic, which is realized using the adder, and an effective adder design is of interest and importance. This article deals with the high-speed and energy-efficient QDI realization of the adder. In a recent work [14], several asynchronous implementations of a 32-bit adder were considered and analyzed. QDI full adders based on [15,16,17] are strongly indicating (acknowledging), implying that these full adders would wait for the arrival of all the primary inputs and then process them to produce the required primary outputs. When such strong-indication full adders are cascaded to form an N-bit RCA, the RCA would be weakly indicating [14]. The main drawback with this weakly indicating RCA is that a worst-case critical path delay involving N full adders would be encountered for processing the 'data' (called the 'forward latency'), and a similar critical path delay would be encountered for processing the 'spacer' (called the 'reverse latency'), which affects the speed (CT) and increases the energy (PCTP). The terminologies 'data' and 'spacer' in the context of the RTZ and RTO handshake protocols are explained in Section 3. Reference [18] yields a weak-indication QDI full adder based on the concept of the binary decision diagram, whose sum output would wait for the arrival of all the primary inputs while its carry output need not, thus potentially speeding up the carry propagation. When N instances of the weak-indication full adder of [18] are cascaded to form a QDI RCA, the RCA would be weakly indicating. Although the forward and reverse latencies of an N-bit QDI RCA based on [18] are data-dependent, they would still involve N full adders in the worst case, which is not optimum from the speed and energy perspectives. In [19], a biased weak-indication QDI full adder was proposed, where the sum output of the full adder is responsible for indicating the arrival of all the primary inputs while the carry output is not. When N weak-indication full adders corresponding to [19] are cascaded, the resulting QDI RCA would encounter a data-dependent forward latency and a constant reverse latency governed by the sum of the propagation delays of just two full adders. Although the forward latency may be dictated by the sum of the delays of N full adders, the reverse latency would be dictated by the sum of the delays of only two full adders, which is useful for optimizing the speed and energy parameters. It is to be noted that the forward and reverse latencies of N-bit QDI RCAs constructed using the full adders of [20] and [21] are theoretically the same as discussed for [19]. However, [20] presents an improved weak-indication full adder compared to [19], with the carry output logic of the former being better optimized than that of the latter. Reference [21] presents an early output QDI full adder whose sum output is responsible for indicating the arrival of all the primary inputs while the carry output is freed from the indication constraint.
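To make the early output behavior of such a carry output concrete, the following minimal Python sketch (a behavioral illustration under standard RTZ dual-rail encoding, not the gate-level circuit of [21]) shows how a dual-rail carry can resolve before the carry input arrives whenever the augend and addend bits are equal:

```python
# Behavioral sketch of an early output dual-rail carry (RTZ encoding):
# value 1 -> (rail1, rail0) = (1, 0), value 0 -> (0, 1), spacer -> (0, 0).
NULL, ZERO, ONE = (0, 0), (0, 1), (1, 0)

def carry_out(x, y, cin):
    """Cout = XY + (X XOR Y)Cin, written in disjoint (orthogonal) form."""
    x1, x0 = x
    y1, y0 = y
    c1, c0 = cin
    p = (x1 & y0) | (x0 & y1)          # this bit position propagates a carry
    return ((x1 & y1) | (p & c1),      # rail asserting carry = 1
            (x0 & y0) | (p & c0))      # rail asserting carry = 0

# Early output: equal inputs settle the carry while the carry input is NULL.
assert carry_out(ONE, ONE, NULL) == ONE
assert carry_out(ZERO, ZERO, NULL) == ZERO
# Unequal inputs force the carry output to wait for the carry input.
assert carry_out(ONE, ZERO, NULL) == NULL
assert carry_out(ONE, ZERO, ONE) == ONE
```

This data-dependent behavior is what shortens the average carry chain: in an N-bit RCA built from such full adders, a long ripple occurs only when many consecutive bit positions propagate.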
In general, an early output circuit is able to produce all the primary outputs after receiving a subset of the primary inputs, which may correspond to either data or spacer, but not both. An N-bit weak-indication QDI RCA incorporating the early output full adder of [21] would have a forward latency equal to the sum of the delays of N full adders, and a reverse latency equal to the sum of the delays of just two full adders. However, the forward latency of the RCA based on [21] is lower than the forward latencies of [19] and [20] since the carry output logic of the full adder of [21] is better optimized compared to the carry output logic of the full adders of [19] and [20]. Reference [22] presented early output full adders which, when cascaded, lead to relative-timed RCAs. Relative-timed RCAs [22] experience a forward latency equivalent to the sum of the delays of N full adders and an optimal constant reverse latency equivalent to the delay of just one full adder. Relative-timed circuits [23] are like early output circuits in that, after receiving a subset of the primary inputs (data or spacer), they are able to produce all the primary outputs (data or spacer respectively). However, relative-timed circuits usually incorporate additional timing assumptions with respect to sequencing the arrival of internal signals within the circuit, besides the assumption of isochronic forks, which may be rather sophisticated to realize. Relative-timed circuits are not QDI circuits; however, they are able to facilitate improvements in the design parameters such as smaller area, higher speed, and lower energy, but at the expense of a compromise in the robustness. In contrast, strong-indication, weak-indication and early output QDI circuits are robust. QDI CLAs have also been discussed in the literature [24,25,26,27], and these correspond to the weak-indication or early output type. Among these, [24] presents a full-custom design at the transistor level, while [25,26,27] present semi-custom designs which correspond to a gate-level synthesis. In general, QDI CLAs are classified into the QDI CCLA [25] and QDI BCLAs and BCLARCs [14,26,27]. The QDI CCLA, BCLAs and BCLARCs tend to have lower forward latencies compared to the forward latencies of some QDI and relative-timed (non-QDI) RCAs [14]. However, this advantage may be offset by their greater reverse latencies compared to the reverse latencies of QDI and relative-timed RCAs [14]. These observations are also applicable to a comparison made between QDI CSLAs [28] and QDI and relative-timed RCAs [15,16,17,18,19,20,21,22]. QDI CLAs and CSLAs consume more area compared to the area occupancies of QDI and relative-timed RCAs, as observed in [14]. A QDI BCLA does not incorporate redundant carry output logic [29] while a QDI BCLARC does, and the latter is able to facilitate considerable reductions in forward and reverse latencies and cycle time compared to the former. Hence, QDI BCLARCs are preferable among the category of QDI CLAs. A hybrid QDI BCLARC-RCA architecture, which incorporates an appropriately sized RCA in the least significant adder bit positions as a replacement for one or more instances of a sub-BCLARC, may enable a further optimization of the design metrics compared to the basic QDI BCLARC architecture. However, this is not guaranteed and should be ascertained case-by-case based on timing analysis. In [14], a hybrid QDI BCLARC-RCA outperformed the QDI RCAs, CSLAs, and the other BCLAs and BCLARCs mentioned above in terms of speed and energy.
This article presents a new QDI BCLARC that outperforms all the QDI and non-QDI RCAs, CSLAs, CCLA, BCLAs, BCLARCs and hybrid BCLARC-RCAs described in [14] and [22] in terms of speed (CT) and energy (PCTP). The rest of the article is organized as follows. Section 2 mentions the frequently used acronyms and their expansions for a quick reference. Section 3 discusses the design preliminaries of QDI and non-QDI (relative-timed) asynchronous circuits. Section 4 describes the proposed QDI sub-BCLA block without and with the redundant carry output and the resulting QDI BCLAs, BCLARCs and BCLARC-RCAs by considering an example 32-bit addition. Section 5 presents the design metrics of several 32-bit QDI and non-QDI asynchronous adders corresponding to 4-phase RTZ and 4-phase RTO handshaking, and compares them. Finally, Section 6 draws the conclusions.

Acronyms and Expansions

Widely used acronyms and their expansions are given below for a ready reference: QDI, quasi-delay-insensitive; RTZ, return-to-zero; RTO, return-to-one; CT, cycle time; PCTP, power-cycle time product; RCA, ripple carry adder; CLA, carry lookahead adder; CCLA, conventional carry lookahead adder; CSLA, carry select adder; BCLA, block carry lookahead adder; BCLG, block carry lookahead generator; BCLGRC, block carry lookahead generator with redundant carry; BCLARC, block carry lookahead adder with redundant carry.

QDI and Non-QDI Circuits - A Background

The design fundamentals of QDI and non-QDI (i.e., relative-timed) asynchronous circuits are discussed in this section to provide a background.

Data encoding, handshaking, and timing parameters

The general schematic of a QDI or a relative-timed circuit stage employing delay-insensitive data encoding and 4-phase handshaking is shown in Fig 1A, which corresponds to the transmitter-receiver analogy. The technical schematic is shown in Fig 1B. In Fig 1B, the current stage and next stage registers are analogous to the transmitter and the receiver shown in Fig 1A, and a QDI or a relative-timed circuit is sandwiched between the current stage and the next stage register banks. The register bank comprises a series of registers, with one register allotted for each of the rails of a dual-rail encoded data input. A register implies a 2-input Muller C-element [30]. The C-element will output 1 or 0 if all its inputs are 1 or 0 respectively. If the inputs to a C-element are not identical, then the C-element would retain its existing steady-state. The circles with the marking 'C' represent the C-elements in the figures. According to dual-rail data encoding and 4-phase RTZ handshaking, an input V is encoded as (V1, V0), where V = 1 is represented by V1 = 1 and V0 = 0, and V = 0 is represented by V1 = 0 and V0 = 1; both these assignments are called data. The assignment V1 = V0 = 0 is called the spacer, and the assignment V1 = V0 = 1 is deemed illegal to maintain the delay-insensitivity. The application of input data to a QDI or relative-timed circuit which adheres to 4-phase RTZ handshaking follows the sequence: data-spacer-data-spacer, and so forth. It may be noted that the application of data is followed by the application of the spacer, which implies that there is an interim RTZ phase between the successive applications of input data. The interim RTZ phase ensures a proper and robust data communication, i.e., handshaking, between the transmitter and the receiver. The RTZ handshake protocol is specified by the following four steps:

• First, the dual-rail data bus specified by (X1, X0), (Y1, Y0) and (Z1, Z0) assumes the spacer, and therefore the acknowledgment input (ACKIN) is equal to binary 1. After the transmitter transmits a data, this would cause rising signal transitions, i.e., binary 0 to 1, to occur on one of the dual rails of the entire dual-rail data bus
• Second, the receiver would receive the data sent and drive the acknowledgment output (ACKOUT) to 1. ACKIN is the Boolean complement of ACKOUT and vice-versa
• Third, the transmitter waits for ACKIN to become 0 and would subsequently reset the entire dual-rail data bus, i.e., the dual-rail data bus assumes the spacer again
• Fourth, after an unbounded (but a finite and positive) time duration, the receiver would drive ACKOUT to 0 and then ACKIN would assume 1.
With this, a single data transaction is said to be completed, and the asynchronous circuit is permitted to start the next data transaction. According to dual-rail data encoding and 4-phase RTO handshaking [33], an input V is encoded as (V1, V0), where V = 1 is represented by V1 = 0 and V0 = 1, and V = 0 is represented by V0 = 0 and V1 = 1. Both these assignments are called data. The assignment V1 = V0 = 1 is called the spacer, and the assignment V1 = V0 = 0 is deemed illegal to maintain the delay-insensitivity. The application of input data to a QDI or relative-timed circuit conforming to 4-phase RTO handshaking follows the sequence: spacer-data-spacer-data, and so forth. It may be noted that there is an interim RTO phase between the successive applications of input data. The interim RTO phase ensures a proper and robust data communication between the transmitter and the receiver. The RTO handshaking process is specified by the following four steps:

• First, ACKIN is equal to binary 1. After the transmitter transmits the spacer, this would cause rising signal transitions, i.e., binary 0 to 1, to occur on all the rails of the dual-rail data bus
• Second, the receiver would receive the spacer sent and drive ACKOUT to 1
• Third, the transmitter waits for ACKIN to become 0 and would then transmit the data through the dual-rail data bus
• Fourth, after an unbounded (but a finite and positive) time duration, the receiver would drive ACKOUT to 0 and subsequently ACKIN would assume 1. With this, a single data transaction is said to be completed, and the asynchronous circuit is permitted to start the next data transaction

In a QDI or relative-timed circuit, the time taken to process the data in the datapath, highlighted by the red dashed line in Fig 1B, is called the forward latency, and the time taken to process the spacer is called the reverse latency. Since there is an intermediate RTZ or RTO phase between the application of two input data sequences, the cycle time is expressed as the sum of the forward and reverse latencies. The cycle time of a QDI or a relative-timed asynchronous circuit is the equivalent of the clock period of a synchronous circuit. The cycle time governs the speed at which new data can be input to an asynchronous circuit. The gate-level details of example completion detectors corresponding to RTZ and RTO handshaking are shown at the bottom of Fig 1B, within the dotted green boxes. The completion detector indicates, i.e., acknowledges, the receipt of all the primary inputs given to an asynchronous circuit stage. In the case of 4-phase RTZ handshaking, ACKOUT is produced by using a 2-input OR gate to combine the respective dual rails of each encoded primary input and synchronizing the outputs of all the 2-input OR gates using a C-element or a tree of C-elements. In the case of 4-phase RTO handshaking, ACKOUT is produced by using a 2-input AND gate to combine the respective dual rails of each encoded primary input and then synchronizing the outputs of all the 2-input AND gates using a C-element or a tree of C-elements.
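To summarize the components just described, the sketch below gives a minimal behavioral model of the 2-input Muller C-element and of the RTZ and RTO completion detectors of Fig 1B. It is an illustration only; in particular, the linear chain of C-elements is a simplification of the C-element tree mentioned above:

```python
class CElement:
    """2-input Muller C-element: the output follows the inputs when they
    agree and holds its previous steady-state otherwise."""
    def __init__(self, state=0):
        self.state = state
    def __call__(self, a, b):
        if a == b:
            self.state = a
        return self.state

def ackout_rtz(inputs, ces):
    """RTZ completion detector: OR the two rails of each dual-rail input,
    then synchronize the OR outputs with C-elements."""
    stage = [r1 | r0 for (r1, r0) in inputs]
    out = stage[0]
    for s, ce in zip(stage[1:], ces):
        out = ce(out, s)
    return out

def ackout_rto(inputs, ces):
    """RTO completion detector: AND the two rails of each input (both rails
    are 1 only for the RTO spacer), then synchronize with C-elements."""
    stage = [r1 & r0 for (r1, r0) in inputs]
    out = stage[0]
    for s, ce in zip(stage[1:], ces):
        out = ce(out, s)
    return out

ces = [CElement(), CElement()]
print(ackout_rtz([(1, 0), (0, 1), (1, 0)], ces))  # 1: all inputs carry data
print(ackout_rtz([(1, 0), (0, 0), (1, 0)], ces))  # 1: held; one input is spacer
print(ackout_rtz([(0, 0), (0, 0), (0, 0)], ces))  # 0: all inputs are the spacer
```

The hold behavior of the C-elements is what makes ACKOUT change only after every primary input has completed its transition, for both the data and the spacer phases.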
QDI circuits

QDI circuits are classified into three types: strong-indication [34], weak-indication [34], and early output [35] circuits. The input-output timing relations of QDI circuits are illustrated by the representative timing diagrams shown in Fig 2A and 2B with respect to RTZ and RTO handshaking. Strong-indication circuits would wait to receive all the primary inputs (data and spacer), and after receiving them would process them to produce the required primary outputs (data and spacer respectively). On the other hand, weak-indication circuits can produce all but one of the primary outputs after receiving a subset of the primary inputs. Nevertheless, only after receiving the last primary input would they produce the last primary output. A connection of strong-indication sub-circuits may not result in a strong-indication circuit; rather, a weak-indication circuit may result. For example, if two strong-indication full adders are connected, a weak-indication 2-bit RCA would result. This is because if all the inputs to one of the full adders are provided, the corresponding sum and carry output bits of that full adder could be produced regardless of the arrival/non-arrival of the inputs to the other full adder in the RCA. However, only after all the inputs to the other full adder are provided would its corresponding sum and carry output bits be produced. This scenario is characteristic of weak-indication. For implementing arithmetic functions, weak-indication is preferable to strong-indication, and this is due to the following reasons: i) strong-indication arithmetic circuits tend to encounter worst-case forward and reverse latencies for the application of data and spacer, and therefore the cycle time of strong-indication arithmetic circuits is always the maximum (worst-case timing); ii) weak-indication arithmetic circuits may encounter data-dependent forward and reverse latencies, or a data-dependent forward latency and a constant reverse latency, and so the cycle times of weak-indication arithmetic circuits are usually less compared to strong-indication arithmetic circuits. An early output circuit is, however, more relaxed compared to its strong- and weak-indication counterparts. After receiving a subset of the primary inputs (data or spacer), an early output circuit can produce all the primary outputs (data or spacer respectively). This implies that the late arriving primary inputs may not be acknowledged by the circuit. However, this is not a cause for concern because isochronic fork assumptions are imposed on all the primary inputs, and all the primary inputs are given to the completion detector that precedes the early output circuit, as seen in Fig 1B. Hence, the acknowledgment of the late arriving primary inputs by the completion detector also implies the receipt of those primary inputs by the asynchronous circuit. Thus, the problem of wire orphan(s), i.e., unacknowledged signal transitions on the wire(s) due to the late arrival of primary input(s), is overcome by the assumption of isochronic forks, which is imposed on all the primary inputs. Either the data may be produced early or the spacer may be produced early in an early output circuit, and not both. Accordingly, an early output circuit is categorized as being of the early set or early reset kind. The early set and reset behaviors of early output circuits are highlighted by the dotted green ovals in Fig 2A and 2B. An early output RCA is preferable to a strong-indication and a weak-indication RCA for achieving better optimizations in speed and power/energy [14]. In general, an early output circuit can achieve enhanced optimizations in the design metrics compared to strong- and weak-indication counterparts. In a QDI circuit, the logic decomposition should be performed safely [36].
Safe QDI logic decomposition [17] is essential to avoid the problem of gate orphans, which are unacknowledged signal transitions occurring on the intermediate gate output(s). For an illustration of gate and wire orphans, the interested reader is referred to [37]. However, we also discuss orphans in the following section. The signal transitions will have to occur monotonically in a QDI circuit from the first logic level, which receives the primary inputs, up to the last logic level, which produces the primary outputs [38]. The signal transitions should either be seen rising or falling throughout an entire QDI circuit. In general, the signal transitions will be rising (i.e., binary 0 to 1) for the application of data, and falling (i.e., binary 1 to 0) for the application of the spacer, in a QDI circuit that corresponds to RTZ handshaking. On the other hand, the signal transitions will be rising for the application of the spacer and falling for the application of data in a QDI circuit that corresponds to RTO handshaking. For monotonicity of signal transitions, the monotonic cover constraint [9] should be incorporated into a QDI logic description. For example, if a QDI logic function is expressed in the sum-of-products form, only one product term should be activated for the application of an input data, i.e., the product terms comprising the sum-of-products expression of a QDI logic function should be mutually orthogonal (also called disjoint); that is, the logical conjunction of any two product terms in a QDI logic function should yield zero. Thus, a QDI logic function is ideally expressed in the disjoint sum-of-products form [39], which would consist of mutually disjoint products to satisfy the monotonic cover constraint. An example illustration of the monotonic cover constraint is given in Section 2.2 of [14], and an interested reader may refer to the same for details. Embedding the monotonic cover constraint and performing safe QDI logic decomposition are central to the correct implementation of a QDI circuit. Incorporating the monotonic cover constraint in a QDI logic function would ensure the activation of just one signal path from a primary input to a primary output for the application of an input data. This is useful to facilitate the proper acknowledgment of signal transitions throughout an entire QDI circuit, thus avoiding the likelihood of any gate orphan occurrence(s). Gate orphans are troublesome, unlike wire orphans, as they may affect the robustness of a QDI circuit, and if they are imminent, restricting them from affecting the circuit robustness may require incorporating additional timing assumptions which are likely to be sophisticated and may be difficult to realize [22].
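The orthogonality requirement can be checked mechanically. The sketch below is a generic illustration (not tied to any particular function in this article): it verifies that no input assignment activates more than one product term of a sum-of-products cover, which is the essence of the monotonic cover constraint:

```python
from itertools import product

def is_disjoint_cover(terms, num_vars):
    """terms: list of product terms, each a dict {variable_index: literal
    value}. The cover is disjoint iff no input vector activates two terms."""
    for vec in product((0, 1), repeat=num_vars):
        active = sum(all(vec[i] == v for i, v in t.items()) for t in terms)
        if active > 1:
            return False
    return True

# f = ab + bc is not a disjoint cover: a = b = c = 1 activates both terms.
print(is_disjoint_cover([{0: 1, 1: 1}, {1: 1, 2: 1}], 3))        # False
# The logically equivalent cover f = ab + a'bc is disjoint (ab . a'bc = 0).
print(is_disjoint_cover([{0: 1, 1: 1}, {0: 0, 1: 1, 2: 1}], 3))  # True
```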
Relative-timed (Non-QDI) circuits

Relative-timed circuits [23] are not QDI circuits, although they may embed the monotonic cover constraint and adopt safe QDI logic decomposition for their physical realization. This is because relative-timed circuits tend to incorporate extra timing assumptions (in addition to the assumption of isochronic forks) to eliminate any potential problem due to gate orphan(s). Usually, the extra timing assumptions are related to the delayed arrival of some internal input signals, which are subject to a specific time bound. If the timing assumptions are upheld in a relative-timed circuit, the circuit would appear to be QDI, and supposing they are violated, the circuit would not be QDI. Relative-timed circuits are early output circuits; however, they are non-QDI unlike the latter. A couple of relative-timed RCAs were presented in [22], which were realized using early output full adders. Relative-timed circuits are competitive with early output QDI circuits as they could pave the way for enhanced optimizations of the design metrics compared to strong-indication, weak-indication and early output QDI circuits, but at the expense of a compromise in the robustness. Hence, only strong-indication, weak-indication and early output QDI circuits are robust and are guaranteed to be gate-orphan free.

Generic CCLA and BCLA architectures - A brief comparison

In general, an N-bit CCLA is constructed by cascading (N/M) M-bit CCLAs where N modulo M equals 0 [40]. The M carry outputs of an M-bit CCLA are produced by lookahead based on the corresponding generate and propagate functions and also the carry input. Of the M carry outputs, excepting the most significant lookahead carry output, the remaining (M-1) carry outputs are XOR-ed with the corresponding propagate functions to produce the respective sum output bits. The most significant lookahead carry output produced by an M-bit CCLA is propagated to the next M-bit CCLA to serve as its carry input, which is utilized to produce its corresponding sum and carry output bits. An N-bit BCLA [41], also called the section-carry based CLA [25], is also realized using (N/M) M-bit BCLAs where N modulo M equals 0. However, an M-bit BCLA comprises an M-bit BCLG, three full adders, and a final 3-input XOR function. An M-bit BCLG produces just one carry output by lookahead based on the propagate and generate functions and the carry input, which is then propagated to the successive M-bit BCLA to serve as its carry input. The carry input to an M-bit BCLA, along with its corresponding augend and addend inputs, is processed by a kind of sub-RCA, also of size M bits, which features a cascade of (M-1) full adders and a final 3-input XOR function to produce the respective sum output bits. Hence, the intermediate carries in an M-bit BCLA are not produced by lookahead; rather, they are produced in a ripple-carry fashion. The single-rail behavioral sketch below illustrates this organization.
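The following Python sketch is a plain behavioral illustration of the BCLA organization just described (single-rail, and not the dual-rail QDI circuit discussed next): the intermediate carries ripple within each 4-bit block, while the inter-block carry is produced by a lookahead generator per block:

```python
# Behavioral single-rail sketch of a 32-bit BCLA built from 4-bit blocks.

def block_gp(x_bits, y_bits):
    """Block generate/propagate of a slice (bit 0 is least significant)."""
    g = [xi & yi for xi, yi in zip(x_bits, y_bits)]
    p = [xi ^ yi for xi, yi in zip(x_bits, y_bits)]
    bg, bp = g[0], p[0]
    for gi, pi in zip(g[1:], p[1:]):
        bg = gi | (pi & bg)   # a carry is generated somewhere in the block
        bp = pi & bp          # a carry would propagate across the whole block
    return bg, bp

def bcla_add(x, y, cin=0, width=32, m=4):
    bits = lambda v: [(v >> i) & 1 for i in range(width)]
    xb, yb = bits(x), bits(y)
    s, c = 0, cin
    for blk in range(0, width, m):
        xs, ys = xb[blk:blk + m], yb[blk:blk + m]
        bg, bp = block_gp(xs, ys)
        ci = c                          # block carry input, from lookahead
        for i in range(m):              # sum bits ripple within the block
            s |= (xs[i] ^ ys[i] ^ ci) << (blk + i)
            ci = (xs[i] & ys[i]) | ((xs[i] ^ ys[i]) & ci)
        c = bg | (bp & c)               # next block carry by lookahead (BCLG)
    return s, c

assert bcla_add(0xFFFFFFFF, 1) == (0, 1)   # worst-case carry chain
assert bcla_add(123456789, 987654321) == (1111111110, 0)
```

Because the inter-block carry chain goes through one lookahead generator per block instead of M full adders, the block-level carry path is much shorter than in an RCA, which is the timing advantage the QDI BCLA and BCLARC exploit.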
QDI BCLA and BCLARC architectures

The architectures of the QDI BCLA and the QDI BCLARC for an example 32-bit addition are shown in Fig 3. We consider the 32-bit addition here so as to facilitate a straightforward comparison with the recent published literature [14,22]. In Fig 3A and 3B, (X01, X00) and (Y01, Y00) denote the least significant dual-rail encoded augend and addend inputs, and (X311, X310) and (Y311, Y310) represent the most significant dual-rail encoded augend and addend inputs. The dual-rail encoded carry input and output are denoted by (C01, C00) and (C321, C320) respectively, and the carry input can be set to 0 for RTZ handshaking and to 1 for RTO handshaking. The critical datapaths traversed for the application of data and spacer in the adders are highlighted by the green and red dashed lines in Fig 3A and 3B respectively. It can be noticed in Fig 3 that the 4-bit BCLG, the 4-bit BCLGRC, the full adder, and the XOR3 function form the basic building blocks of the QDI BCLA and the QDI BCLARC. This work presents the novel and efficient design of a 4-bit BCLG and BCLGRC, which are QDI. The 4-bit BCLG and BCLGRC form the heart of the 4-bit BCLA and the 4-bit BCLARC, which eventually form the building blocks for the QDI BCLA and the QDI BCLARC. QDI realizations of the full adder and the XOR3 function, which were discussed in our previous work [14], have been utilized here to realize the BCLA and the BCLARC. The XOR3 function is referred to as the sum logic in [14].

Fig 3. The architectures remain the same for RTZ and RTO handshaking. The critical paths traversed for the application of data and spacer also remain the same for RTZ and RTO handshaking. One non-redundant lookahead carry output is produced by each 4-bit QDI BCLG in (a), whereas a non-redundant lookahead carry output and a redundant lookahead carry output are produced by each 4-bit QDI BCLGRC in (b). FA refers to the full adder and XOR3 refers to the 3-input XOR function, and both belong to the (QDI) early output type. https://doi.org/10.1371/journal.pone.0218347.g003

Gate-level realizations of the 4-bit QDI BCLG/BCLGRC, the early output QDI full adder, and the early output QDI XOR3 function corresponding to RTZ handshaking are shown in Fig 4A, 4B and 4C respectively. The equivalent gate-level circuits corresponding to RTO handshaking are depicted in Fig 5A, 5B and 5C respectively. It is proved in [42] that any asynchronous circuit corresponding to RTZ handshaking can be transformed into one corresponding to RTO handshaking, and vice-versa, by replacing the logic gates with their respective duals while retaining the C-elements and their respective inputs as such. We shall describe the basic building blocks shown in Fig 4, which correspond to RTZ handshaking, and the discussion will be applicable to those in Fig 5, which correspond to RTO handshaking. Fig 4A shows the proposed 4-bit QDI BCLG/BCLGRC. (C01, C00) represents the dual-rail carry input, (C41, C40) represents the dual-rail lookahead carry output, and (RC41, RC40) is the redundant dual-rail lookahead carry output, which is logically equivalent to (C41, C40). The equations for (C41, C40) are given in (1) and (2), which are also applicable to (RC41, RC40). In (1) and (2), G3 to G0 represent the carry-generate functions, P3 to P0 represent the carry-propagate functions, and K3 to K0 represent the carry-kill functions. The logic expressions for these functions are given in Fig 4A. The carry-propagate, carry-generate, and carry-kill functions are mutually orthogonal, which implies that only one of these functions corresponding to a set of primary inputs will be activated for the application of an input data. For example, referring to Fig 4A, either G3 or P3 or K3 alone will assume 1 during a data phase, and the rest will continue to maintain 0 from the earlier RTZ phase. Eqs (1) and (2) are thus inherently in the disjoint sum-of-products form. Note that in Figs 4A and 5A, if the circuit portion shown in red is omitted, they represent the '4-bit QDI BCLG', and if the circuit portion shown in red is included, they represent the '4-bit QDI BCLGRC'. The circuit portion shown in green lines in Figs 4A and 5A signifies the internal completion detection, which is crucial to ensure freedom from gate orphan(s). The QDI BCLG features only the lookahead carry output (C41, C40), and the QDI BCLGRC features the extra redundant lookahead carry output (RC41, RC40). The proposed 4-bit BCLG and 4-bit BCLGRC belong to the early output type; the BCLG and the BCLGRC will wait for the arrival of the required data on the primary inputs to produce the corresponding primary outputs. However, after the assumption of the spacer by a subset of the primary inputs, all the primary outputs could assume the spacer. In Fig 4A, R1, R2, R3, R4, C1, C2, ICD, NC41 and NC40 represent the intermediate outputs.
These internal outputs manifest in Fig 5A as well. Each set of the respective carry-generate, carry-propagate and carry-kill functions (for example, G3, P3 and K3) is OR-ed in Fig 4A (AND-ed in Fig 5A), and their outputs viz. R1 to R4 are given to a C-element tree. The output of the C-element tree is denoted as ICD, which is the output of the internal completion detector. NC41 and NC40 are equivalent to C41 and C40, but NC41 and NC40 are synchronized with ICD to produce C41 and C40. This is to ensure that, when C41 and C40 are produced, all the internal data processing within the 4-bit BCLG/BCLGRC is completed and all the internal outputs have settled to the correct steady-state. Ensuring internal completion detection is necessary for the proposed BCLG/BCLGRC to guarantee that they are QDI. To illustrate the importance of and the need for internal completion detection in Fig 4A (and Fig 5A), let us assume that P3 = P2 = P1 = G0 = 1 after an RTZ phase. As a result, NC41 would assume 1. Also, R1 = R2 = R3 = R4 = 1. Therefore, C1 = C2 = 1 and ICD = 1. Since NC41 = ICD = 1, C41 = 1 and C40 = 0. Subsequently, in the next RTZ phase, let us assume that only P3, P2 and P1 have become 0 and G0 is still 1. Given this, NC41 will assume 0. Supposing NC41 was used to represent C41, this would incorrectly convey that the BCLG/BCLGRC has assumed the spacer although the internal data processing has not been completed, because G0 has not yet become 0. This violates the QDI principle because, in a QDI circuit, the production of the primary outputs should unambiguously confirm the receipt of the primary inputs and the completion of the internal computation within the circuit for the processing of data and spacer. This avoids the likelihood of any gate orphan(s), which would occur if the output(s) of intermediate gate(s) remain unacknowledged.
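This mechanism can be made concrete with a small behavioral model. The sketch below assumes the standard block-lookahead form for Eqs (1) and (2), i.e., C41 = G3 + P3G2 + P3P2G1 + P3P2P1G0 + P3P2P1P0·C01 and dually for C40 with the carry-kill functions, and it models the C-element tree by its steady-state envelope; it is an illustration, not the netlist of Fig 4A:

```python
class BCLG:
    """Behavioral 4-bit BCLG/BCLGRC (cf. Fig 4A), RTZ signalling. Assumes
    the standard block-lookahead form for Eqs (1) and (2)."""
    def __init__(self):
        self.icd = 0        # state of the internal completion C-element tree
        self.c41 = 0
        self.c40 = 0

    @staticmethod
    def _ce(a, b, state):
        """2-input Muller C-element."""
        return a if a == b else state

    def step(self, gpk, c01, c00):
        """gpk: four (G, P, K) triples, least significant bit first."""
        G, P, K = zip(*gpk)
        nc41 = G[3] | P[3] & (G[2] | P[2] & (G[1] | P[1] & (G[0] | P[0] & c01)))
        nc40 = K[3] | P[3] & (K[2] | P[2] & (K[1] | P[1] & (K[0] | P[0] & c00)))
        r = [g | p | k for g, p, k in gpk]   # R1..R4
        # C-element tree modelled by its steady-state envelope: it rises when
        # all of R1..R4 are 1, falls when all are 0, and holds otherwise.
        if all(r):
            self.icd = 1
        elif not any(r):
            self.icd = 0
        # NC41/NC40 are synchronized with ICD to produce (C41, C40).
        self.c41 = self._ce(nc41, self.icd, self.c41)
        self.c40 = self._ce(nc40, self.icd, self.c40)
        return self.c41, self.c40

b = BCLG()
# Data phase from the text: P3 = P2 = P1 = 1 and G0 = 1 -> carry output is 1.
print(b.step([(1, 0, 0), (0, 1, 0), (0, 1, 0), (0, 1, 0)], 0, 0))  # (1, 0)
# Partial RTZ phase: P3, P2, P1 reset but G0 still 1. NC41 has fallen, yet
# ICD holds 1, so C41 keeps its data value: no premature spacer, no orphan.
print(b.step([(1, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0)], 0, 0))  # (1, 0)
# Full RTZ phase: every input has reset -> both outputs assume the spacer.
print(b.step([(0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0)], 0, 0))  # (0, 0)
```

Running the three steps reproduces the scenario described in the text: the lookahead carry output only signals the spacer after every bit position has reset, which is exactly what the internal completion detection guarantees.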
Cycle time calculation of proposed QDI BCLA and BCLARC

It would be useful to analyze the (worst-case) CTs of the proposed QDI BCLA and BCLARC to gain an insight into which of these architectures would be beneficial in terms of speed prior to physical realization. To estimate the CT, the estimation of the forward and reverse latencies is essential, since the CT is the summation of the forward and reverse latencies.

Cycle time of QDI BCLA. Let the forward latency of the QDI BCLA shown in Fig 3A that corresponds to RTZ handshaking be denoted as FL_BCLA_RTZ, which is expressed by (3). In (3), the last term on the right-side represents the propagation delay of the input register (T_Register), which is the propagation delay of the 2-input C-element since the C-element represents the register. Referring to Fig 4A, the longest (critical) datapath is traversed in the least significant BCLG, which involves an AO22 complex gate, a 3-input OR gate, and three 2-input C-elements. As in the previous works, the 2-input C-element was custom-realized based on a 32/28nm CMOS technology [43] by modifying the AO222 complex gate realization by introducing feedback, which required 12 transistors. Besides the C-element, all the other gates in the cell library [43] were directly utilized. In the subsequent intermediate BCLGs, the datapath traversal would encounter relatively fewer gates, involving a 2-input C-element, a 2-input OR gate, and a final 2-input C-element. The datapath traversal in the full adder would involve an AO22 gate, and the datapath traversal via the XOR3 function would involve a 2-input C-element and a 2-input OR gate. With T_AO22, T_OR3, T_CE2 and T_OR2 representing the propagation delays of an AO22 complex gate, a 3-input OR gate, a 2-input C-element, and a 2-input OR gate respectively, (3) is expanded and given by (4). Note that there is a one-to-one correspondence between the terms present on the right-side of (3) and (4).

Fig 4. (a) 4-bit BCLG/BCLGRC, (b) early output QDI full adder, and (c) early output QDI XOR3 function. All the circuits correspond to 4-phase RTZ handshaking. Note that if the circuit portion shown in red is omitted in (a), it is called the 4-bit BCLG; if the circuit portion shown in red is included in (a), it is called the 4-bit BCLGRC - this interpretation of the 4-bit BCLG and 4-bit BCLGRC is also applicable to Fig 5(a). The circuit portion shown in green lines signifies the internal completion detection. https://doi.org/10.1371/journal.pone.0218347.g004

Let the reverse latency of the QDI BCLA shown in Fig 3A that corresponds to RTZ handshaking be denoted as RL_BCLA_RTZ, which is expressed by (5). Compared to (3), the processing of the spacer in the QDI BCLA involves fewer gates, i.e., two full adders and one XOR3 less. This is because the least significant full adder present in the most significant 4-bit BCLA of Fig 3A would wait for the arrival of the carry input (C281, C280) to process it to produce the sum output bit (SUM281, SUM280). Referring to Fig 4B, the carry outputs of all the full adders can be produced early, and when they are given as the carry inputs for the successive full adders in the cascade, the sum outputs of those full adders could be produced simultaneously. This time delay is less compared to the reverse latency of the QDI BCLA shown in Fig 3A. Thus, (5) is expanded and given as (6), and there is a one-to-one correspondence between the terms present on the right-side of (5) and (6). The CT of the QDI BCLA (Fig 3A) can be calculated by substituting the propagation delays of the minimum-size gates present in the cell library into (4) and (6), and then adding up the forward and reverse latencies. Based on the theoretical calculations, the forward and reverse latencies of the 32-bit QDI BCLA are found to be 2.583ns and 2.367ns, resulting in a CT of 4.95ns for RTZ handshaking. The detailed expressions for the forward and reverse latencies corresponding to RTO handshaking are given by (7) and (8). Eqs (7) and (8) are deduced by replacing the propagation delays of the gates mentioned in (4) and (6) with the propagation delays of their dual gates, with the exception of T_CE2, which is retained as such. This is because the 2-input C-elements and their respective inputs are retained as such while transforming a circuit corresponding to RTZ handshaking into one that corresponds to RTO handshaking [42]. Based on (7) and (8), the forward and reverse latencies of the 32-bit QDI BCLA, shown in Fig 3A, are calculated to be 2.842ns and 2.632ns, resulting in a CT of 5.474ns for RTO handshaking.

Cycle time of QDI BCLARC. To theoretically estimate the (worst-case) CT of the proposed QDI BCLARC that corresponds to RTZ handshaking, let us consider Fig 3B, where, in the most significant block, only one (non-redundant) lookahead carry output has to be produced, which represents the carry overflow. Starting from the least significant 4-bit BCLARC, each 4-bit BCLARC produces a non-redundant lookahead carry output and a redundant lookahead carry output.
The redundant lookahead carry output of a 4-bit BCLGRC is propagated to the successive 4-bit BCLGRC (or 4-bit BCLG) as its carry input, whereas the non-redundant lookahead carry output is propagated to a cascade of three full adders and an XOR3 present in the successive 4-bit BCLARC (or 4-bit BCLA). Referring to Fig 4A, the critical datapath would be traversed in the least significant 4-bit BCLGRC, involving an AO22 complex gate, a 4-input AND gate, a 4-input OR gate, and an AO21 complex gate. In the subsequent intermediate 4-bit BCLGRCs, the datapath traversal would involve just one AO21 complex gate. The forward latency of the BCLARC corresponding to RTZ handshaking (FL_BCLARC_RTZ), shown in Fig 3B, is expressed by (9), where T_BCLGRC denotes the propagation delay of the 4-bit BCLGRC shown in Fig 4A. In (9), T_BCLGRC_INT specifies the propagation delay of a 4-bit BCLGRC present in an intermediate nibble position of the adder, and T_BCLGRC_LS specifies the propagation delay of the least significant 4-bit BCLGRC. Eq (9) is expanded and given by (10), where T_AO21, T_AND4 and T_OR4 denote the propagation delays of the AO21 complex gate, the 4-input AND gate, and the 4-input OR gate respectively. There is a one-to-one correspondence between the terms present on the right-side of (9) and (10). The critical datapath traversed for the application of the spacer in the case of the 32-bit QDI BCLARC is highlighted by the red dashed line in Fig 3B. Since the 4-bit QDI BCLGRC shown in Fig 4A is of the early output type, and because this is used to construct the QDI BCLARC of Fig 3B, the redundant lookahead carry outputs of all the 4-bit BCLGRCs could assume the spacer simultaneously. But the redundant lookahead carry output produced by a 4-bit BCLGRC is given as the carry input for the successive 4-bit BCLGRC (or 4-bit BCLG) to produce the corresponding non-redundant lookahead carry output. This carry output then serves as the carry input for the least significant full adder present in the following 4-bit BCLARC (or 4-bit BCLA) to produce the corresponding sum output bit. With RL_BCLARC_RTZ representing the reverse latency of the QDI BCLARC that corresponds to RTZ handshaking, as shown in Fig 3B, and referring to Fig 5, it is expressed by (11). In (11), T_BCLG_LS may be replaced by T_BCLG_INT without any loss of generality, since the reverse latency would be the same. The expanded version of (11) is given by (12), and there exists a one-to-one correspondence between the terms present on the right-side of (11) and (12). Based on (10) and (12), the forward and reverse latencies of the QDI BCLARC shown in Fig 3B, which corresponds to RTZ handshaking, are calculated to be 1.171ns and 0.849ns, which results in a CT of 2.02ns. The detailed expressions for the forward and reverse latencies corresponding to RTO handshaking are given by (13) and (14). Eqs (13) and (14) are deduced by replacing the propagation delays of the gates mentioned in (10) and (12) with the propagation delays of their dual gates, however excluding T_CE2, which is retained as such. Based on (13) and (14), the forward and reverse latencies of the 32-bit QDI BCLARC (Fig 3B) corresponding to RTO handshaking are calculated to be 1.245ns and 0.933ns, which results in a CT of 2.178ns. Based on the theoretical calculations of CT, it is noted that the QDI BCLARC architecture achieves 59.1% and 60.2% reductions in CT compared to the QDI BCLA architecture for a 32-bit addition with respect to RTZ and RTO handshaking respectively.
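The arithmetic behind these percentages can be checked directly from the latencies quoted above; the small sketch below tabulates CT = FL + RL (values in ns; minor differences from the quoted percentages can arise from rounding of the intermediate latencies):

```python
# Theoretical latencies quoted above, in ns: (forward, reverse).
latencies = {
    ("BCLA",   "RTZ"): (2.583, 2.367),
    ("BCLA",   "RTO"): (2.842, 2.632),
    ("BCLARC", "RTZ"): (1.171, 0.849),
    ("BCLARC", "RTO"): (1.245, 0.933),
}
ct = {k: fl + rl for k, (fl, rl) in latencies.items()}
for hs in ("RTZ", "RTO"):
    reduction = 100 * (1 - ct[("BCLARC", hs)] / ct[("BCLA", hs)])
    print(f"{hs}: CT(BCLA) = {ct[('BCLA', hs)]:.3f} ns, "
          f"CT(BCLARC) = {ct[('BCLARC', hs)]:.3f} ns, "
          f"reduction = {reduction:.1f}%")
# RTZ: 4.950 vs 2.020 ns (~59% reduction); RTO: 5.474 vs 2.178 ns (~60%).
```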
These reductions imply that the former (the BCLARC) is more beneficial than the latter for performing addition at an enhanced speed. Based on the simulation results obtained, which will be discussed in the next section, it is found that the QDI BCLARC architecture achieves 57% and 55.7% reductions in CT over the QDI BCLA architecture for a 32-bit addition with respect to RTZ and RTO handshaking respectively. Hence, a good correlation is evident between the theoretical calculations and the practical estimates of CT. Although the theoretical calculations of CT may be approximate, they are nevertheless useful as they give a valuable design insight, namely that the QDI BCLARC architecture is preferable to the QDI BCLA architecture. Nevertheless, differences between the theoretical calculations and the practical estimates are expected because the interconnect delays and the parasitics are not accounted for in the theoretical calculations of CT.

Results and discussion

Fifty-six 32-bit QDI and non-QDI (relative-timed) asynchronous adders, which correspond to various architectures such as the RCA, CSLA, CCLA, BCLA, BCLARC, and hybrid BCLARC-RCA, were physically realized using a 32/28nm CMOS technology [43], including the input registers and the completion detector as shown in Fig 1B. Of the fifty-six asynchronous adders, twenty-eight correspond to RTZ handshaking and a similar number corresponds to RTO handshaking. As mentioned earlier, the 2-input C-element was custom-realized by modifying the AO222 gate to implement the asynchronous adders. A typical-case PVT specification of a high-Vt standard digital cell library with a recommended supply voltage of 1.05V and an operating junction temperature of 25°C was considered for the implementations and simulations. The registers and completion detectors associated with the asynchronous adders are maintained the same with respect to RTZ and RTO handshaking. This implies that the differences between the simulation results of the adders are attributable to the differences between their logic compositions. The default wire load model was considered in the simulations. A virtual clock source was used to constrain the input and output ports of the adders; it did not feature in the adder designs or simulations and hence does not contribute to the design metrics. Test benches comprising about two thousand (random) input vectors including data and spacer, which separately correspond to RTZ and RTO handshaking, as used in our prior work [14], were used to verify the functionalities of the adders. The input vectors corresponding to RTZ and RTO handshaking bear a logical equivalence. Functional simulations of all the adders were performed, and their respective switching activities were captured, which were subsequently used to estimate the average power dissipation. Synopsys EDA tools were used to estimate the design metrics of the adders. The design metrics estimated include the forward and reverse latencies, CT, area, and average power dissipation. The forward latency of an asynchronous circuit is similar to the critical path delay of a synchronous circuit, and it is directly estimated. The reverse latency may differ from the forward latency, which is evident from the critical datapaths highlighted in Fig 3A and 3B. The reverse latencies of the asynchronous adders were ascertained from gate-level timing analysis, and this method was followed for RTZ and RTO handshaking, as done in our previous work [14].
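The data-spacer sequencing of such test benches can be illustrated with a small stimulus generator. The sketch below is an illustration of the input format described above (random data vectors interleaved with spacers for RTZ or RTO), not the actual test benches that were used:

```python
import random

def encode(value, width, protocol="RTZ"):
    """Dual-rail encode an integer, bit 0 first. RTZ: bit b -> (b, 1-b) with
    spacer (0, 0); RTO: bit b -> (1-b, b) with spacer (1, 1)."""
    bits = [(value >> i) & 1 for i in range(width)]
    return [(b, 1 - b) if protocol == "RTZ" else (1 - b, b) for b in bits]

def spacer(width, protocol="RTZ"):
    return [(0, 0) if protocol == "RTZ" else (1, 1)] * width

def testbench(n_pairs, width=32, protocol="RTZ", seed=1):
    """Alternate data and spacer vectors, as 4-phase handshaking requires."""
    rng = random.Random(seed)
    seq = []
    for _ in range(n_pairs):
        x, y = rng.getrandbits(width), rng.getrandbits(width)
        seq.append(("data", encode(x, width, protocol),
                            encode(y, width, protocol)))
        seq.append(("spacer", spacer(width, protocol),
                              spacer(width, protocol)))
    return seq

tb = testbench(1000)            # ~2000 alternating input vectors, as above
print(tb[0][0], tb[1][0])       # data spacer
```

Using the same seed for the RTZ and RTO sequences keeps the two stimulus sets logically equivalent, which mirrors the statement above that the RTZ and RTO input vectors bear a logical equivalence.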
The design metrics of the adders corresponding to RTZ handshaking are given in Table 1, and the design metrics corresponding to RTO handshaking are given in Table 2. Adder legends are provided in the second columns of Tables 1 and 2 to conveniently refer to the individual adders during the discussion. The related literature references pertaining to those adders are also given in Tables 1 and 2. The adders have been grouped according to their architectural type and not according to the chronological order of appearance in the literature. Adders Z12 and Z13 (O12 and O13) are BCLAs, constructed using the full adder of [19], the XOR3 function derived from the full adder functionality, and the early output 4-bit BCLG and BCLGRC of [25], which correspond to RTZ (RTO) handshaking. Adders Z14 and Z15 (O14 and O15) are a BCLA and a BCLARC respectively, constructed using the full adder of [20], the XOR3 function derived from the full adder functionality, and the early output 4-bit BCLG and BCLGRC of [25], which correspond to RTZ (RTO) handshaking. Adders Z24 and Z25 (O24 and O25) represent the proposed BCLA and BCLARC respectively, which were realized using the novel 4-bit BCLG and BCLGRC blocks described in Section 4.2, the full adder of [21], and the XOR3 function derived from the full adder functionality, which correspond to RTZ (RTO) handshaking. Adders Z26 to Z28 (O26 to O28) are hybrid BCLARC-RCAs, which are derived from Z25 (O25). It may be seen from Table 3 that some building blocks require the same area for both RTZ and RTO handshaking. For example, the full adder of [15], used to construct Z1 and O1, requires the same area for physical implementation based on RTZ and RTO handshaking. Likewise, the XOR3 function used to construct Z12 to Z15 and O12 to O15 requires the same area for physical realization based on RTZ and RTO handshaking. This is because some of the dual gate equivalents in the digital cell library [43] feature the same area, as remarked in [14] and [20]. For example, the minimum-size 2-input AND and OR gates of [43] require the same area, which is found to be the case with gate duals such as the AO22 and OA22 gates, and the AO222 and OA222 gates. Such similar area occupancies of the dual gate equivalents may not be common in all standard digital cell libraries. However, the propagation delay and the leakage and dynamic power components of the gates used for RTO handshaking (such as the 2-input AND, OA22 and OA222 gates) are generally less than the corresponding metrics of the dual gate equivalents used for RTZ handshaking (such as the 2-input OR, AO22 and AO222 gates), as noted from [43]. This explains why the adders realized for RTO handshaking generally report better delay and power metrics than their RTZ counterparts in Tables 1 and 2, an observation also validated in [33]. Unfortunately, the physical details of the library components [43] cannot be discussed here due to the proprietary nature of the information. Referring to the diverse asynchronous adders given in Tables 1 and 2, in terms of area, the RCA architecture is preferable to the CSLA and CLA architectures. This is true even in the case of a synchronous digital design [45,46]. Hence, from the area perspective, Z8 and O8 are preferable with respect to RTZ and RTO handshaking. Z9 and O9 are discounted as they are non-robust relative-timed RCAs. As mentioned earlier, CT governs the speed of a QDI or a relative-timed asynchronous circuit that employs delay-insensitive data encoding and 4-phase handshaking. Among the RCAs, Z9 and O9 report the least CT with respect to RTZ and RTO handshaking.
However, referring to (10), to process the data, the critical path traversed in the proposed BCLARC (Z25 of Table 1) would involve two 2-input C-elements including a register, seven AO21 gates, four AO22 gates, a 4-input AND gate, a 4-input OR gate and a 2-input OR gate, resulting in a theoretical forward latency of 1.171ns and a practical forward latency of 1.76ns. On the other hand, the critical path traversed in an RCA (say, Z8 of Table 1) to process the data would encounter a register, thirty-two AO22 gates, a 2-input C-element and a 2-input OR gate, resulting in a theoretical forward latency of 2.576ns and a practical forward latency of 3.10ns. For a similar discussion regarding the reverse latency, referring to (12), to process the spacer, the critical path traversed in the proposed BCLARC (Z25) would involve four 2-input C-elements including a register, one AO21 gate, one AO22 gate, a 4-input AND gate, a 4-input OR gate and two 2-input OR gates, resulting in a theoretical reverse latency of 0.849ns and a practical reverse latency of 1.11ns. On the other hand, the critical path traversed in the RCA (Z8 of Table 1) to process the spacer would encounter a register, two AO22 gates, a 2-input C-element and a 2-input OR gate, resulting in a theoretical reverse latency of 0.416ns and a practical reverse latency of 0.61ns. Although the reverse latency of Z8 is less than that of Z25, the significantly reduced forward latency of Z25 vis-à-vis Z8 compensates to achieve a considerable net reduction in CT for Z25 compared to Z8. According to the theoretical calculations, the CT of the proposed BCLARC (Z25) is 2.02ns and the CT of Z8 is 2.992ns, implying a theoretical reduction in CT of 32.5% for Z25 compared to Z8. According to the practical estimates given in Table 1, Z25 reports a 22.6% reduction in CT compared to Z8. Similarly, based on the practical estimates, O25, which is the RTO counterpart of Z25, achieves an 18.4% reduction in CT compared to O8, which is the RTO counterpart of Z8. Overall, the proposed BCLARCs Z25 and O25 feature reduced CTs compared to the CTs of all the other adders in Tables 1 and 2 respectively. Usually, BCLA architectures incorporating redundant carries tend to have reduced forward and reverse latencies and CT compared to those of plain BCLA architectures which do not have redundant carries; i.e., the QDI BCLARC architecture is preferable to the QDI BCLA architecture in terms of timing. This observation is already substantiated by the deliberations in Section 4 and would be further evident upon comparing Z12 and Z13, Z14 and Z15, Z17 and Z18, Z19 and Z20, and Z24 and Z25 in Table 1, and by comparing O12 and O13, O14 and O15, O17 and O18, O19 and O20, and O24 and O25 in Table 2. Further, this agrees with the observation made in [29] that introducing redundant logic, which can be interpreted as the redundant carry output logic introduced in the BCLARC architecture and which is not available in the BCLA architecture, facilitates overall reductions in the timing. In the case of the CCLAs [26], i.e., Z16 of Table 1 and O16 of Table 2, which are QDI and of the early output type, their forward and reverse latencies are equal. This is because the same critical path would be traversed for processing the data and the spacer, and the critical path is data-dependent.
Moreover, there is no opportunity for introducing redundant carries in the CCLA architecture to speed up the carry propagation, since the lookahead carry output of, say, a 4-bit CCLA is provided as the carry input of the successive 4-bit CCLA in the cascade. As a result, the CTs of Z16 and O16 are considerably greater than the CTs of all the BCLARCs. The proposed BCLARC, i.e., Z25, achieves a 47.8% reduction in CT compared to Z16. Based on RTO handshaking, O25 achieves a 47.1% reduction in CT compared to O16. In Tables 1 and 2, hybrid BCLARC-RCAs are also considered. They are denoted by Z21 to Z23 and Z26 to Z28 in Table 1, and O21 to O23 and O26 to O28 in Table 2. A hybrid BCLARC-RCA architecture replaces one or more less significant sub-BCLARC(s) with a similarly sized RCA, which consists of full adders. For example, Z21 and Z26, Z22 and Z27, and Z23 and Z28 in Table 1 incorporate a 4-bit RCA, an 8-bit RCA and a 12-bit RCA in the least significant adder bit positions as a corresponding replacement for one, two and three instances of a 4-bit BCLARC, respectively. While the replacement of one or more 4-bit BCLARCs by a corresponding size RCA could help to reduce the area, it is not guaranteed that such a replacement will always have a beneficial impact on the CT; rather, the contrary might result. The CTs of Z21, Z22 and Z23, and Z26, Z27 and Z28, given in Table 1, reveal that increasing the size of the sub-RCA in the least significant adder bit positions increases the forward latencies of hybrid BCLARC-RCAs, although their reverse latencies remain constant. The constant reverse latency is because of the traversal of the same critical datapath, shown using the red dashed line in Fig 3B. The forward latencies of Z26, Z27 and Z28, belonging to Table 1, are expressed by (15) to (17). These are obtained by modifying (10) while considering the replacement of sub-BCLARC(s) with a similarly sized sub-RCA. To construct the sub-RCA, the QDI early output full adder of [21] was used, which was also used to construct the hybrid BCLARC-RCAs in [14,27]. By substituting the propagation delays of the gates from [43] into (15), (16) and (17), the theoretical forward latencies of Z26, Z27 and Z28 in Table 1 were calculated to be 1.226 ns, 1.451 ns and 1.676 ns, respectively. In Section 4.3.2, the theoretical forward latency of Z25 was calculated to be 1.171 ns. Hence, theoretically, Z25 has a lower forward latency than Z26, Z27 and Z28, which is supported by the practical estimates given in Table 1. The reverse latency of Z25 was theoretically calculated to be 0.849 ns in Section 4.3.2, and the same reverse latency applies to Z26, Z27 and Z28 in Table 1. Hence, theoretically, the CTs of Z25, Z26, Z27 and Z28 equate to 2.02 ns, 2.075 ns, 2.3 ns and 2.525 ns, respectively. This shows that Z25, the proposed BCLARC, has a lower CT than Z26, Z27 and Z28, the hybrid BCLARC-RCAs. Theoretically, the CT of Z25 is 2.7% less than the CT of Z26, and practically (based on the results given in Table 1), the CT of Z25 is found to be 3.4% less than the CT of Z26. Thus, there is a correlation between the theoretical and practical estimates of CT, and the theoretical calculations tend to provide valuable design insight. Based on (15), (16) and (17), and considering the duals of the respective gates with the exception of the 2-input C-elements, the forward latencies of O26, O27 and O28, which are the RTO counterparts of Z26, Z27 and Z28, as mentioned in Table 2, can be theoretically modeled.
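Before turning to the RTO counterparts, the theoretical Z-side numbers just derived already illustrate the trend: the forward latency of the hybrid BCLARC-RCAs grows roughly linearly with the width of the least significant sub-RCA while the reverse latency stays fixed, so the CT grows as well. A minimal sketch using only the latencies quoted above:

```python
# Trend check using the theoretical latencies quoted in the text: as
# the least-significant sub-RCA widens (0, 4, 8, 12 bits for Z25-Z28),
# forward latency grows while reverse latency stays fixed, so CT grows.

REVERSE_NS = 0.849  # constant reverse latency shared by Z25-Z28
forward_ns = {0: 1.171, 4: 1.226, 8: 1.451, 12: 1.676}  # sub-RCA width -> FL

for width, fl in forward_ns.items():
    ct = fl + REVERSE_NS
    print(f"sub-RCA width {width:2d} bits: FL = {fl:.3f} ns, CT = {ct:.3f} ns")

# Each extra ripple bit adds roughly (1.676 - 1.226) / 8 = 0.056 ns of
# forward latency, consistent with one extra gate level per bit.
```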
The RTO modeling can be done by modifying (15), (16) and (17), replacing the propagation delays of the specified gates with the propagation delays of their dual gate equivalents, while the delay of the 2-input C-element is retained as such. Theoretically, the forward latencies of O26, O27 and O28 are calculated to be 1.286 ns, 1.497 ns and 1.708 ns, respectively. Given that (14) is applicable to O25, O26, O27 and O28, their CTs are theoretically calculated to be 2.178 ns, 2.219 ns, 2.43 ns and 2.641 ns. This shows that O25, which represents the RTO equivalent of the proposed BCLARC, has a lower CT than the hybrid BCLARC-RCAs, viz. O26, O27 and O28. Hence, based on the proposed 4-bit BCLGRCs, portrayed in Figs 4A and 5A, it is inferred that the proposed BCLARC is preferable to the hybrid BCLARC-RCAs on the basis of CT with respect to both RTZ and RTO handshaking. The proposed BCLARC achieves a substantial reduction in CT compared to the CTs of the other BCLARCs and also in comparison with the optimum CT of a hybrid BCLARC-RCA reported in the latest work [14]. Hence, the hybrid BCLARC-RCAs corresponding to [25,27] were not considered, as they would be sub-optimum. With respect to power dissipation, almost all the asynchronous adders, whether QDI or non-QDI, dissipate very nearly the same power, with the standard deviation from the mean of the power dissipation estimated to be 33.5 μW for RTZ handshaking (Table 1) and 33.1 μW for RTO handshaking (Table 2). The small standard deviations arise because all the asynchronous adders mentioned in Tables 1 and 2 embed the monotonic cover constraint, discussed in Section 3.2. Hence, the power dissipation of the QDI and non-QDI (relative-timed) adders does not vary considerably and is confined to the small ranges of 2161 μW to 2312 μW in the case of Table 1, and 2157 μW to 2303 μW in the case of Table 2. The PCTP governs the low power/energy aspect. The PCTPs of the asynchronous adders were calculated and normalized. The normalization was performed such that the highest PCTP among the set of asynchronous adders corresponding to a particular handshake protocol was normalized to 1, and the actual PCTPs of the remaining adders were divided by the highest PCTP. Thus, after normalization, the lowest value of PCTP reflects the optimum low power/energy design. The plots of the normalized CT and PCTP values corresponding to RTZ handshaking are shown side-by-side in Fig 6A and 6B, and the similar plots for RTO handshaking are portrayed in Fig 7A and 7B. Given that the average power dissipation of all the asynchronous adders is very nearly the same, it may be observed that the differences in their PCTPs are mainly due to the differences in their CTs. In other words, the CT mainly influences the PCTP of the asynchronous adders. This is evident upon perusing Fig 6A and 6B, and also Fig 7A and 7B.
Conclusions
This article presented a new QDI early output sub-BCLG/BCLGRC that forms the basis for constructing a QDI early output BCLA/BCLARC. In particular, we discussed the design of a 4-bit QDI BCLA and a 4-bit QDI BCLARC, which serve as the building blocks for constructing the QDI early output BCLARC. As an example, we considered a 32-bit addition.
Pore Morphology of Heavily Doped P-Type Porous Silicon
Tuning the pore diameter of porous silicon (PS) is essential for some applications such as biosensing, where the pore size can filter the entrance of some analytes or increase the sensitivity. However, macropore (>50 nm) formation on p-type silicon is still poorly understood due to its strong dependence on resistivity. Electrochemically etching heavily doped p-type silicon usually forms micropores (<5 nm), but it has been found that bigger sizes can be achieved by adding an organic solvent to the electrolyte. In this work, we present the results of using dimethylformamide (DMF), dimethylsulfoxide (DMSO), potassium hydroxide (KOH) and sodium hydroxide (NaOH) for macropore formation in p-type silicon with a resistivity between 0.001 and 0.02 Ω∙cm, achieving pore sizes from 5 to 100 nm.
Introduction
Porous silicon (PS) is a nanostructured material generated by electrochemically etching silicon (Si) in electrolytes containing hydrofluoric acid (HF) [1]. The growth of the pores is a combination of two chemical reactions: a direct dissolution of Si in fluoride, and the oxidation of Si in the presence of oxygen followed by dissolution of the oxide [2]. Both reactions are strongly dependent on the etching conditions, both chemical and electrical [3]. PS has many potential application areas such as optoelectronics and biosensing [4-6], mainly because it retains the advantages of silicon technology while adding the ability to control optical properties. Porosity, thickness, pore diameter, pore morphology and distance between pores are some of the properties tunable during fabrication [7]. High porosities combined with small pore diameters commonly lead to highly sensitive sensors. In contrast, large pore diameters allow a better adsorption ability. On the other hand, when using PS membranes for sensing, longer distances between pores are preferred for better endurance. Unfortunately, not all combinations of parameters are possible, or at least different approaches must be taken to overcome the initial limitations.
The macropore (pores with average diameter greater than 50 nm) formation mechanism on n-type silicon is well known [3,8]. Low-doped p-type silicon wafers have also been used for macropore formation [9]. However, achieving mesopores (5-50 nm) and macropores in moderately doped and heavily doped silicon is more challenging. Aqueous HF-based electrolytes (HF diluted in water, optionally with surfactants such as alcohols) will yield only micropores (<5 nm) and mesopores if no additional fabrication step is introduced [10,11]. In order to overcome this limitation, several approaches have been developed:
• PS oxidation and HF dissolution. When PS is thermally oxidized, a SiO2 layer is formed on its surface. Dissolution of this oxide in HF can increase the pore diameter by up to half the SiO2 layer thickness [12]. This process can be repeated, but doing so will thin the pore walls and jeopardize the structural stability (a rough numerical sketch of this widening step is given below).
• Post-treatment with alkaline mixtures. KOH and NaOH solutions anisotropically etch Si and can be used to expand the pores after fabrication [13], although this method has the same limitations as the previous one. They have also been used for partially dissolving the PS film and obtaining a pattern that, if used afterwards in a new electrochemical etching, can yield macropores [6].
• Organic electrolytes. The combination of HF-based solutions with non-aqueous electrolytes, e.g., dimethylformamide and dimethylsulfoxide, facilitates the Si dissolution during anodization [14,15]. This method offers more control over the PS properties.
Even though numerous studies have reported the formation of macropores on heavily doped p-type silicon, there is still a lack of analyses for certain ranges of resistivity.
Materials and Methods
For cleaning purposes, all samples were pretreated for 30 min in a 3:1 volumetric mixture of sulfuric acid (H2SO4) and hydrogen peroxide (H2O2), both purchased from BASF (Germany), to remove organic residues from the substrate. Afterwards, they were dipped into a solution of <5% HF for 30 s in order to eliminate the native oxide layer. Electrochemical etching of Si was performed under galvanostatic conditions in a vertical cell in which a Pt electrode worked as the cathode and the Si itself as the anode. Different aqueous and organic solutions were used as electrolytes. Field emission scanning electron microscopy (FESEM) was performed using both a Hitachi S-4500 SEM and a Zeiss Ultra 55 microscope.
Results and Discussion
The results of the pore morphology dependence on different etching parameters for a resistivity of 0.01-0.02 Ω∙cm are summarized in Table 1. The lower the HF concentration within the electrolyte, the bigger the pores. However, etching heavily doped silicon wafers with very low HF concentrations (Figure 1a) leads to pore interconnection, which implies thin walls and slow vertical growth. PS films created this way are structurally weak, and only a small range of porosities can be achieved. Raising the HF concentration reduced the pore diameter but strengthened the structure. PS films formed this way were not appropriate for use as membranes, but they may be suitable for sensing gas or humidity due to their high sensitivity. Comparing Figure 1b,c, we can see that both samples have similar pore diameters. The difference between those two images is the pore spacing, which is larger in the case of the KOH etching of the PS film. This effect was caused by the deposition of Si on the surface and could be reduced by adding agitation. Adding DMF to the electrolyte yielded larger pore diameters. It can be seen in Figure 1d,e that increasing the DMF proportion directly increased the pore size. These PS samples are more convenient than the ones etched with aqueous electrolytes for applications such as biodetection, since the pore size allows the entrance of molecules and proteins. In contrast, anodization with DMSO resulted in pores extremely close to adjacent pores, creating a hive-like structure (see Figure 1f) adopting a hexagonal form instead of the typical circular one. PS films formed with 1:1:9 DMF and DMSO were structurally more fragile than those formed with the other electrolytes, and the organic electrolytes also had a negative impact on vertical uniformity, but pore diameters up to 50 nm could be achieved. In Table 2, the results for a resistivity of 0.001-0.005 Ω∙cm are shown. Oxidation of PS films with a removal dip in HF, as well as alkaline etching, were omitted as they yielded intermediate pore sizes. In this case, for some HF concentrations, a small layer of micropores appeared on top of the desired layer. It could be easily removed by dipping the sample in a 1 M NaOH solution in order to achieve PS films with pore diameters up to 100 nm (see Figure 2). These sensors display lower sensitivities but are preferred in some applications for their adsorption ability.
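As a rough numerical illustration of the oxidation-and-HF-dip widening route listed in the Introduction, the sketch below iterates the rule that each cycle enlarges the pore diameter by up to half the grown SiO2 thickness while consuming wall material. All starting dimensions and the oxide thickness are illustrative assumptions, not values from the experiments above.

```python
# Rough model of the oxidation + HF-dip pore-widening step: each cycle
# grows a SiO2 layer of thickness t_ox on the pore wall, and dissolving
# it enlarges the pore diameter by up to t_ox / 2 (per the text).
# Wall thinning limits how many cycles are feasible. All numbers below
# are illustrative assumptions, not measured values.

def widen_pores(d0_nm: float, wall0_nm: float, t_ox_nm: float, cycles: int):
    d, wall = d0_nm, wall0_nm
    for n in range(1, cycles + 1):
        gain = t_ox_nm / 2           # upper bound quoted in the text
        d += gain
        wall -= gain                  # material is removed from the walls
        if wall <= 0:
            raise ValueError(f"walls consumed after {n} cycles")
        print(f"cycle {n}: diameter ~ {d:.1f} nm, wall ~ {wall:.1f} nm")
    return d

widen_pores(d0_nm=10, wall0_nm=15, t_ox_nm=8, cycles=3)
```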
Conclusions
Pore morphology and fabrication properties of PS formed in heavily doped p-type Si (100) have been presented. The effects of organic electrolytes such as DMF and DMSO and alkaline etchants, e.g., KOH and NaOH, have been reported. The use of organic electrolytes on Si wafers with a resistivity of 0.01-0.02 Ω∙cm showed an increase in the pore diameter of up to 50 nm. On the other hand, pore sizes of around 100 nm could be achieved on wafers with a lower resistivity. Macropores could thus be formed in heavily doped p-type Si, which could allow for a wider range in the design of PS-based sensors for different applications.
Table 1. Influence of fabrication parameters on pore morphology for a resistivity of 0.01-0.02 Ω∙cm.
Table 2. Influence of fabrication parameters on pore morphology for a resistivity of 0.001-0.005 Ω∙cm.
Does the Sound of a Singing Bowl Synchronize Meditational Brainwaves in the Listeners?
This study aims to verify whether the beating sound of a singing bowl synchronizes and activates brain waves during listening. The singing bowl used in this experiment produces beats at a frequency of 6.68 Hz, while its sound decays exponentially and lasts for about 50 s. Brain waves were measured for 5 min in the F3 and F4 regions of seventeen participants (eight males and nine females, average age 25.2) who heard the beating singing bowl sounds. The experimental results showed that the increases (up to ~251%) in the spectral magnitudes of the brain waves were dominant at the beat frequency compared to those of any other clinical brain wave frequency band. The observed synchronized activation of the brain waves at the beating sound frequency supports the idea that the singing bowl sound may effectively facilitate meditation and relaxation, considering that the beat frequency belongs to the theta wave region, which increases in the relaxed meditation state.
Introduction
A singing bowl is a bowl-shaped percussion instrument [1]. The singing bowl has the peculiar feature that it produces both a tone and a beat that last for a long time after it has been played [2]. The singing bowl sound has often been used to reduce tension, anxiety and depression [3]. The singing bowl sound is known to facilitate physiological and psychological responses, such as stabilizing blood pressure and heart rate [2,4]. Although the singing bowl sound is reported to have positive effects in meditation or alternative medicine, the mechanism of its psychoacoustic effects remains unclear [3,4]. It is presumed that the singing bowl sound may play a critical role in the beneficial responses of the brain through its strong beat. If the brain waves are activated and synchronized at beat frequencies located in the theta band, the brain is likely shifted to a relaxed meditation state [1]. Meditation effects that evoke psychophysiological changes may result in increases in the theta waves [5-9]. However, no systematic study on such synchronized activation has been reported. The present study aims to examine whether the singing bowl beating sound gives rise to a significant increase in the brain waves (electroencephalogram, EEG) that is dominant at the beat frequency.
Materials and Methods
A total of seventeen participants (male: 8, female: 9, average age: 25.2 ± 3.5) participated in this study. They were healthy adults without hearing disabilities, cognitive difficulties or neurological damage. Participants were voluntarily recruited from the University in Jeju, Korea. EEG was performed on participants who voluntarily consented after hearing an explanation of the purpose of the study, the experimental method, the right to voluntarily participate in the study as a research subject, and the right to withdraw at any time.
Singing Bowl Sound
The singing bowl used in this study is 260 mm in diameter and 115 mm in depth, a product of Best Himalaya, Nepal (Figure 1a,b). It was played with a cylindrical mallet of 192 mm in height and 48 mm in diameter (Figure 1c). Each strike produces a sound modulated with a strong beat that lasts for about 50 s. Figure 1a illustrates a schematic overview of the experimental tools and space, including the relative location between the singing bowl and the subject.
Acoustic Apparatus for Recording and Acoustic Analysis
The singing bowl sound was recorded using a mobile sound analysis system (Noise-Book, 4820MHS II, Head Acoustics). The frequency characteristics of the recorded sound were analyzed using the FFT in MATLAB. In order to determine the spectral properties of the low-frequency beating phenomenon of the singing bowl sound, we first reconstructed the envelope of the recorded sound signal using a Hilbert transform. The frequency spectrum of the envelope was then plotted in the frequency range of 0~50 Hz employed in clinical EEG.
Brain Wave Measurements
Brain waves were recorded at the F3 and F4 positions of the international standard 10-20 system, on the left and right sides of the dorsolateral prefrontal cortex (DLPFC), known to be sensitive to brain activity during meditation [5,8,10-14]. The EEG signals were acquired using an EEG measurement instrument (LXE1104, Laxtha, Republic of Korea) via wet electrodes (Figure 1d). The measured EEG signals were stored on a PC in digital form with a sampling rate of 256 Hz.
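A minimal sketch of the beat-spectrum analysis described in the acoustic analysis subsection above: take the analytic envelope via a Hilbert transform, then FFT the envelope and inspect the 0~50 Hz range. It is run here on a synthetic two-tone signal whose components differ by 6.68 Hz, standing in for the recorded bowl sound; the sampling rate and amplitudes are assumptions of the sketch.

```python
# Envelope (beat) spectrum via Hilbert transform + FFT, shown on a
# synthetic two-tone stand-in for the recorded singing bowl sound.

import numpy as np
from scipy.signal import hilbert

fs = 8000                      # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)   # 10 s of signal
x = np.sin(2 * np.pi * 482.61 * t) + 0.4 * np.sin(2 * np.pi * 489.29 * t)

envelope = np.abs(hilbert(x))              # beat rhythm of the sound
env = envelope - envelope.mean()           # drop the DC component
spec = np.abs(np.fft.rfft(env))
freqs = np.fft.rfftfreq(env.size, 1 / fs)

band = freqs <= 50                          # clinical EEG range
beat_freq = freqs[band][np.argmax(spec[band])]
print(f"strongest beat at ~{beat_freq:.2f} Hz")   # ~6.68 Hz
```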
The participants in this study comprised 17 healthy adults with normal hearing, as confirmed by a hearing test conducted using an audiometer (120 Audiometer, Beltone, Chicago, IL, USA). In addition, verbal confirmation was received to ensure that the participants had no history of auditory disorders or diseases.
Figure 1e presents a flow chart of the entire experiment, which took about 700 s. The participants lay down on a comfortable bed-chair. After the electrodes were attached, they closed their eyes for approximately 5 min in a relaxed position. When a stable EEG was observed, the EEG was recorded for 50 s. After that, the singing bowl was played six times over 5 min at intervals of 50 s, and the brain waves were recorded at the same time. After the sixth round of playing the instrument, an additional EEG was measured for 50 s without the singing bowl sound. All experiments were conducted with the participants' eyes closed.
EEG Analysis
The measured time history of the brain waves was converted into the spectral magnitude or power of each clinical frequency band of the EEG via the FFT. The clinical frequency bands are divided into five spectral regions: delta (0~4 Hz), theta (4~8 Hz), alpha (8~13 Hz), beta (13~30 Hz) and gamma (30~50 Hz). The spectral powers of the brain waves were compared before and after listening to the singing bowl sound to examine the changes in the brain waves of the participants. In order to test the temporal response of the brain waves to the singing bowl sound, the temporal variations in the magnitude of each spectral band of the EEG were monitored at a time interval of 50 s. The spectral band powers of each subject were normalized to the total spectral power (0~50 Hz) to eliminate the variability in the degree of subject-to-subject EEG activity.
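A minimal sketch of the band-power computation described in the EEG Analysis subsection above, assuming a 50 s EEG segment sampled at 256 Hz; the synthetic test signal at the end is only for illustration.

```python
# Clinical-band spectral powers from an EEG segment, normalized to the
# total 0-50 Hz power to suppress subject-to-subject variability.

import numpy as np

BANDS = {"delta": (0, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 50)}

def band_powers(eeg: np.ndarray, fs: float = 256.0) -> dict:
    spec = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(eeg.size, 1 / fs)
    total = spec[(freqs >= 0) & (freqs < 50)].sum()
    return {name: spec[(freqs >= lo) & (freqs < hi)].sum() / total
            for name, (lo, hi) in BANDS.items()}

# Example: a dominant 6.68 Hz component puts most of the normalized
# power into the theta band, as in the experiment described above.
fs, dur = 256.0, 50.0
t = np.arange(0, dur, 1 / fs)
eeg = np.sin(2 * np.pi * 6.68 * t) + 0.2 * np.random.randn(t.size)
print(band_powers(eeg, fs))
```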
Results
The measured time histories of the singing bowl sounds (top) and brain waves (middle) are presented in Figure 2. The three bottom panels magnify the brain waves recorded at characteristic temporal locations (beginning, halfway and end) of the experiment, illustrating that the magnitudes of the brain waves increase with time and are significantly larger at the end of the experiment than at the beginning. The increase was apparent in the low-frequency components, as seen in the magnified panels. These types of changes in EEG are known to be common in psychological relaxation or meditation [5].
Figure 2. The temporal history of the repeating singing bowl sound (top) and the brain wave (middle) of a subject listening to the singing bowl sound. The bottom three panels magnify the brain waves in the time axis, recorded at the beginning, in the middle and at the end of the experiment.
Figure 3 shows a typical measured waveform of the singing bowl sound. It gradually diminishes in amplitude for more than 40 s after the instrument is struck and persists for approximately 50 s (Figure 3a). A part of the waveform (marked 'A') was expanded in the time axis to reveal a low-frequency variation of the sound, which is called a beat (Figure 3b). It was observed that the beat repeated at an interval of approximately 0.15 s. Figure 3c is the frequency spectrum of the singing bowl sound. The fundamental frequency (marked 'B') that determines the pitch of the singing bowl sound was found to be 482.61 Hz, corresponding to a B4 note in the musical scale. As seen in Figure 3c, the singing bowl sound contains not only the fundamental frequency but also additional spectral components, observed at 773.15 Hz, 1102.56 Hz, 1464.81 Hz and 1870.86 Hz, corresponding to the musical notes near G5, C#6, F#6 and A#6, respectively. The number and magnitude of these spectral components determine the tonal property of the singing bowl sound. In addition, as seen in box 'B' in Figure 3c, an additional frequency component (relatively small but significant) appears near the fundamental frequency (482.61 Hz).
The minute frequency difference of 6.68 Hz between them causes the beating phenomenon.
Temporal and Spectral Characteristics of the Singing Bowl Sound
In order to calculate the frequency spectrum of the beat, we reconstructed its time-domain signal using a Hilbert transform, plotted in Figure 3b as the envelope of the singing bowl sound. The envelope, in other words the beat signal, represents the rhythm at which the pitched singing bowl sound changes slowly with time. Figure 3d is the frequency spectrum of the beat rhythm plotted in the frequency range of 0~50 Hz, used in clinical brain waves. As shown in Figure 3d, the strongest beat was observed at 6.68 Hz, while a pair of minor beats appeared on either side, from about 1 Hz to 15 Hz. Note that the frequency of the strongest beat is located in the theta wave band (4~8 Hz), well observed in meditation. Figure 3e is the time-frequency representation of the beat signal, showing the temporal variations in the multiple beat frequencies. The spectrogram was calculated using a short-time FFT with a window length of 4 s and a time resolution of 0.5 s. As expected, the strongest beat is clearly seen at 6.68 Hz. Its loudness was at a maximum at the beginning of playing the singing bowl (t = 0) and started to decrease rapidly from 10 s to 30 s. Minor multiple beats are seen at frequencies near 10 Hz, 13.3 Hz, 16.2 Hz and 36 Hz, disappearing within 10~20 s.
Synchronized Activation of Brain Waves at the Beat Frequency
Seven spectral bands were considered in this study: the five clinical frequency bands (delta: 0~4 Hz, theta: 4~8 Hz, alpha: 8~13 Hz, beta: 13~30 Hz and gamma: 30~50 Hz), the entire frequency range (0~50 Hz) and the beat frequency (6.68 Hz). The mean and standard error of the spectral magnitude of the brain waves recorded for the 17 individuals were analyzed at the temporal middle of each 50 s singing bowl sound (t = 25, 75, 125, 175, 225, 275, 325 and 375 s). The initial monitoring time, ti = 25 s, represents the temporal middle of the 50 s with no sound, and the final time, tf = 375 s, is that after the last (sixth) singing bowl sound. The spectral magnitude of the brain waves measured at F4 was observed to be similar to or slightly larger than that measured at F3. However, no statistically significant difference was observed between the measurement locations (F3 and F4) in any frequency band; the range of p values, from the minimum (p = 0.075) to the maximum (p = 0.973), is large enough to state that location effects are not significant. Data collected at each monitoring time were checked for statistical normality using the Shapiro-Wilk test. The spectral magnitudes of the frequency bands of the brain waves differ from one another in their initial values, which makes it difficult to compare their temporal changes. To remove the effect of this difference, the magnitude of each frequency band needs to be normalized to its initial value. In addition, the magnitude of the measured brain waves varies from subject to subject. The spectral power of a particular clinical frequency band is therefore often expressed as a ratio (in %) to the total power of the overall frequency range (0~50 Hz) to compensate for differences between participants. Figure 4 shows the temporal changes in the spectral power of the measured brain waves, plotted every 50 s while the singing bowl was repeatedly played.
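The time-frequency analysis of Figure 3e described above can be sketched with a short-time FFT using the stated 4 s window and 0.5 s time resolution. The decaying single-component envelope below is a synthetic stand-in for the measured beat signal; the sampling rate and decay constant are assumptions.

```python
# Short-time FFT spectrogram of a beat envelope: 4 s window advanced
# in 0.5 s steps, restricted to the clinical 0-50 Hz range.

import numpy as np
from scipy.signal import spectrogram

fs = 8000                                     # assumed sampling rate
t = np.arange(0, 50, 1 / fs)
env = np.exp(-t / 15) * np.sin(2 * np.pi * 6.68 * t)  # toy decaying beat

win = int(4 * fs)                             # 4 s window length
step = int(0.5 * fs)                          # 0.5 s time resolution
f, tt, Sxx = spectrogram(env, fs=fs, nperseg=win,
                         noverlap=win - step, scaling="spectrum")
keep = f <= 50                                # clinical EEG range
print(Sxx[keep].shape)                        # (freq bins <= 50 Hz, time frames)
```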
In order to more effectively compare the temporal changes in the magnitude of each frequency band of the brain wave, we averaged the values measured at the two locations, F3 and F4. This unification was justified by the statistical finding that the spectral magnitudes of every frequency band did not differ between the two locations over the entire experimental duration, as the maximum and minimum p values in Appendix A Figure A1 show.
Figure 4. Temporal variations (in %) of each spectral band brain wave magnitude relative to its initial value and normalized to that of the overall frequency band, averaged over data measured at the two positions (F3 and F4) from the participants (n = 17) who heard the strongly beating singing bowl sounds repeated six times every 50 s for t = 50~350 s, plotted at every 50 s: (a) delta (0~4 Hz), (b) theta (4~8 Hz), (c) alpha (8~13 Hz), (d) beta (13~30 Hz), (e) gamma (30~50 Hz) and (f) beat (6.68 Hz). Note that the ranges of p values are presented for the statistical test on the changes from the initial states, and the error bars represent the standard errors.
A new parameter of the spectral magnitude of brain waves was introduced to effectively remove the effects of both the initial value difference and the subject dependence. Let M(fb, t) be the spectral magnitude of a frequency band of the brain wave at time t. The new parameter A(fb, t), introduced in the present study and defined in Equation (1), is the magnitude of a frequency band of the brain wave normalized to its initial value and to the magnitude of the overall frequency range:

A(fb, t) (in %) = [<M(fb, t)/M(fb, ti)> / <M(overall, t)/M(overall, ti)>] × 100    (1)

where fb represents the frequency band, t is the time variable and ti stands for the initial time, which is 25 s in the present study, as illustrated in Figure 4.
The numerator of the right-hand side of Equation (1) represents the temporal history of the magnitude of each frequency band relative to its initial value, while the denominator is the temporal magnitude of the overall frequency band relative to its initial value. A(fb, t) thus stands for the rate of change in the spectral magnitude of each frequency band normalized to that of the whole frequency range (0~50 Hz). Figure 4 shows A(fb, t) in %, i.e., the rate of change in the spectral magnitude of each frequency band ((a) delta: 0~4 Hz, (b) theta: 4~8 Hz, (c) alpha: 8~13 Hz, (d) beta: 13~30 Hz, (e) gamma: 30~50 Hz, (f) beat: 6.68 Hz), normalized to that of the whole frequency range (0~50 Hz) and averaged over the data measured at the two locations F3 and F4 for the 17 participants. The temporal changes were plotted every 50 s for the time from ti = 25 s to tf = 375 s, and the error bars represent the standard errors. The data are provided in Table 1, together with the p values resulting from the statistical test on each temporal change from the initial value at t = ti. The p value (at t = tf) after the experiment is presented in Figure 4 and, if it is not the minimum value, the minimum is also provided at its time location.
Table 1. Temporal variations (in %) in the spectral band brain wave magnitudes relative to their initial values (0~50 s), normalized to those of the overall frequency band and averaged over the data measured at the two positions (F3, F4) of the participants (n = 17) who heard the strongly beating singing bowl sounds repeated six times every 50 s for t = 50~350 s (†: maximum change).
As expected, the rate of change increased the most at the beat frequency with time (Figure 4f). Among the clinical frequency bands, the increase rate was the largest in the delta wave (135.18%, p = 0.001), followed by the theta wave (117.07%, p = 0.002). For these two low-frequency bands, the rate of change in the spectral magnitude increased with time, whereas it decreased with time for the high-frequency bands, including the alpha, beta and gamma waves. The tendency of the changes was maintained during the silent period after the last singing bowl sound, except for the gamma wave and the beat frequency. This trend implies that the largest changes were observed after the last singing bowl sound rather than while the participants heard it, which is why the p value was at a minimum at t = 375 s rather than at t = 325 s (Figure 4a-d). At the beat frequency, however, the largest increase in the spectral magnitude was observed when the participants heard the fifth singing bowl sound, just before the final one. This can be understood as an extension of the preceding repeated pattern of (large and rapid) jumps and (small and slow) falls, and a spectral magnitude larger than the previous maximum would be expected if the participants heard an additional (seventh) singing bowl sound after the last one. Figure 5 compares the maximum rates of the relative changes in the spectral magnitude of each spectral band (A(fb,t) in %), together with the frequency spectrum of the beat of the singing bowl sound. The rate of increase is predominant at the beat frequency, which reaches 251.98% (p = 0.021) of its initial value at a time (t = 275 s) approaching the end of the experiment. This implies that the brain waves are most effectively synchronized at the beat frequency and activated by the singing bowl sound.
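A minimal implementation of the normalization in Equation (1), assuming the band magnitudes are stored as (subjects × time points) arrays with the initial time ti in the first column:

```python
# Rate of change of each band's magnitude, relative to its initial
# value and to that of the overall 0-50 Hz band, averaged over subjects.

import numpy as np

def A(M_band: np.ndarray, M_overall: np.ndarray) -> np.ndarray:
    """M_band, M_overall: arrays of shape (subjects, time points)
    holding spectral magnitudes; column 0 is the initial time ti."""
    rel_band = (M_band / M_band[:, :1]).mean(axis=0)       # <M(fb,t)/M(fb,ti)>
    rel_all = (M_overall / M_overall[:, :1]).mean(axis=0)  # <M(all,t)/M(all,ti)>
    return 100.0 * rel_band / rel_all                      # in %, A(fb,ti)=100
```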
Figure 5. Comparison of the maximum rates of the relative changes in the spectral magnitude of each spectral band (A(fb,t) in %), together with the frequency spectrum of the beat of the singing bowl sound.
Among the five clinical EEG frequency bands, the delta wave increased the most, to 135.18% (p = 0.001) of its initial state, followed by the theta wave, with a rise to 117.07% (p = 0.002). In contrast, the other three spectral bands decreased after the experiment.
The gamma wave was down to 81.86% (p = 0.000), the alpha wave was down to 85.28% (p = 0.005) and the beta wave was down to 93.75% (p = 0.012) of their initial states.
Discussion
The singing bowl used in this study produces a sound that lasts for more than 50 s after it is played once and has a strong beat at a frequency of 6.68 Hz. When the participants were listening to the singing bowl sound, the spectral magnitudes of their brain waves were shown to increase with time at low frequencies (≤8 Hz; delta and theta waves), whereas they decreased with time at high frequencies (>8 Hz; alpha, beta and gamma waves) (Figure 4). Among the five clinical spectral bands, the rate of increase was the highest for the delta wave (135.18%, p = 0.001), followed by the theta wave (117.07%, p = 0.002). Under the present experimental conditions, where the participants heard six repeating singing bowl sounds over 300 s, the largest rate of increase (251.98%, p = 0.021) was observed at the beat frequency of the singing bowl sound (Table 1). This result suggests that, when the participants were listening to the singing bowl sound, their brain waves were activated and effectively synchronized at the beat frequency. The beat frequency of the singing bowl sound used in this study belongs to the theta wave spectral band. Numerous studies have observed psychophysiological changes due to the effects of meditation as an increase in theta waves [5-9,15-22]. The present finding that the brain waves are synchronized and activated at a beat frequency located in the theta band may serve as a scientific basis for using singing bowl sounds in meditation. In future studies, it would be of interest to consider beat frequencies located in the other clinical spectral bands. In the present study, in order to observe the response of the brain waves to the singing bowl sound, the temporal changes in the EEG signals relative to those of the initial resting state were observed. The normalized parameter A(fb,t) defined by Equation (1) is expected to effectively remove subject-dependent effects. A conventional approach of comparing an experimental group to a control may be unnecessary or inappropriate for studying the individual response to singing bowl sounds. This study monitored brain waves for a limited time of 400 s, from 50 s before the first singing bowl sound to 50 s after the last (sixth) one. As shown in Figure 4a-d, the changes in the brain waves extended even into the silence after the last singing bowl sound. As discussed in Section 3.2 regarding the brain wave activity at the beat frequency (Figure 4f), it would be interesting to test what would happen if the participants heard an additional (seventh) singing bowl sound, i.e., whether the pattern of (large and rapid) jumps and (small and slow) falls would repeat and whether the spectral magnitude would increase beyond the present maximum observed at the fifth singing bowl sound. A future study should include temporal information on when the maximum rate of increase in the brain waves is achieved. In previous studies on meditation and brain waves, delta waves were observed to increase in the prefrontal cortex [23], as measured at the same location used in the present study. Tei et al. (2009) compared the activity of delta waves using low-resolution electromagnetic tomography (LORETA) in people who either meditated (Qigong) or just rested with their eyes closed (control group) [24].
In the frontal lobe of the subjects who meditated, the delta waves were significantly different and stronger than those of the control group. In the present study, the delta waves were shown to decrease slightly immediately after the first singing bowl sound, followed by a continuous increase to the highest increase rate (135.18%) among the five clinical brain waves. It is of interest to note that, even after the experiment ended, the delta waves continued to increase at an enhanced rate. The participants lay down on a bed-chair with their eyes closed, listening to the singing bowl sound. Such relaxed conditions may easily make the participants feel sleepy, and five minutes would be sufficient for some of them to fall asleep. In fact, a few of them were found to snore in their sleep. Once they went into stage 1 sleep, the delta waves were expected to keep increasing with time. Even after listening to the last singing bowl sound for 50 s, the participants kept lying on the bed-chair and, with no singing bowl sounds, additional EEG signals were measured for 50 s, which were expected to contain more delta waves. A reduction in alpha waves is known to be a common phenomenon across the entire range of relaxation therapies [25]. Numerous prior studies have reported a decrease in alpha waves in yoga or transcendental meditation [7,8,18,21,26,27]. Various studies have also reported a decrease in alpha waves by approximately 50%, due to an increase in theta waves, in the first stage of sleep [25,28]. In the present study, the spectral magnitudes of the alpha waves were smaller than those at rest before the experiment, and they decreased steadily as the participants started to hear the singing bowl sound, reaching 85.28% of the initial state at the end of the experiment. The observed continuous decrease in the alpha waves with time is attributed to the effect of the singing bowl sound, which may induce the participants to relax or meditate. The beta waves did not appear to change significantly over time (Figures 4 and 5), but they were found to have decreased by 6.25% (p = 0.012) at the end of the experiment. A number of studies have shown a decrease in beta waves during meditation [29-31]. In particular, beta wave decreases were reported to be associated with a relaxation response or Zen meditation [29-31]. A meditation process was not considered in the present experiment, but the participants lay down on a bed-chair and listened to the singing bowl sound. The observed decrease in the beta waves is speculated to result from the relaxed resting state into which the participants gradually slipped during the experimental period. The gamma wave activity during meditation is controversial. Some studies have shown a decrease in gamma waves during meditation [23], while other studies have reported an increase [32-34]. The present study shows that the gamma waves continuously decreased by up to about 12~18% while the participants were listening to the singing bowl sound. However, the gamma waves were observed to rise again, approaching the initial state, once the participants stopped hearing the sound. It should be noted that the present study employed a singing bowl sound whose beat frequency is located in the theta wave region; in future studies, it would be of interest to look at the gamma wave response to a singing bowl sound whose beat frequency is located in the gamma wave region. The present study was based on brain waves measured at limited locations (F3 and F4).
The measurement locations F3 and F4 are known to be sensitive to brain activity during meditation [5,8,10,12-14,35-37], and they are reasonable locations for the singing bowl meditation of the present study. Further studies with measurements at various positions are required to expand and generalize the observed synchronized activation of the brain waves. In addition, the present results were obtained from a relatively small number of participants (n = 17). Fortunately, Shapiro-Wilk tests confirmed the normal distribution of the measured data, which supported the reliability of the statistical tests performed in the present study. The beat of a singing bowl sound is determined by the size, material and structure of the instrument. The various singing bowls used in meditation are classified according to the fundamental frequencies of their sounds as musical key tones. The fundamental frequency of the singing bowl used in the present experiment was approximately 480 Hz, which musically corresponds to B4. As shown in Figure 3c, the singing bowl sound used in this study was composed of multiple harmonic components at 773.15 (G5), 1102.56 (C#6), 1464.81 (F#6) and 1870.86 Hz (A#6). The tonal property of the singing bowl is not affected by the manner of playing or the sound volume. Nevertheless, the sound intensity and tonal properties are important psycho-acoustical parameters [38] that are expected to affect the brain waves independently. In the present experiment, a single, arbitrarily chosen singing bowl was used, and the playing method and the sound intensity were not precisely controlled. A follow-up study is suggested to explore how the synchronized activation of the brain waves is related to playing techniques and the intensity of the beating sound for singing bowls with various key tones.
Conclusions
The beat frequency of the singing bowl sound used in this study was determined to belong to the theta wave region, which is known to increase during meditation. In this experiment, the brain waves of the participants who heard the singing bowl sound were observed to be activated within a few minutes by its strong beat rhythm. This study presents experimental evidence that the singing bowl sound likely activates brain waves that are effectively synchronized with the beating rhythm. The present findings support the view that a strongly beating singing bowl sound facilitates meditation, relaxation and psychological stability.
Spontaneous Pneumomediastinum in a Healthy Pediatric Patient
Spontaneous pneumomediastinum (SPM) is a rare condition, especially in children with no predisposing factors. In the vast majority of patients, this condition is benign and self-limiting; however, there is always the possibility that serious and potentially life-threatening complications such as mediastinitis or cardiac tamponade could arise. Early recognition, prompt diagnosis, and appropriate management allow for ideal care and prevent unnecessary and excessive investigations in these patients. An eight-year-old female was admitted to the emergency department with SPM after swimming and no known predisposing lung conditions. The probable cause was pressure changes in the alveoli during swimming. This is notable because the patient's SPM occurred in the absence of an underlying cause such as asthma. The patient was admitted overnight for monitoring and pain control. The symptoms resolved the following day, along with a decrease in the size of the SPM on the chest X-ray. Physicians should be aware of the signs of SPM in young patients who present with chest pain in the absence of trauma or pulmonic disease. A review of the literature highlights the pathophysiology and recommended treatment course for similar cases.
Introduction
A pneumomediastinum is a potentially life-threatening condition in which air is abnormally present in the mediastinum, between the lungs. The typical patient presents with nonspecific, pleuritic chest pain accompanied by dyspnea [1]. In most cases, it is a benign, self-limiting disease that infrequently occurs in the absence of trauma; the incidence of spontaneous pneumomediastinum (SPM) ranges from 1 in 800 to 1 in 42,000 pediatric cases [2]. Primary SPM, the sub-classification in which the patient has no pre-existing lung condition such as asthma, is rarer still in pediatric patients [3]. Since pneumomediastinums rarely occur, most of the literature describing this disease is found in individual case reports. Although the course of this disease is typically benign, hospitalization and observation are common due to possible significant or life-threatening complications [4]. This paper discusses the clinical characteristics of SPM and recommends appropriate management of SPM in pediatrics.
Case Presentation
An eight-year-old Caucasian female presented to the emergency department with chest pain and throat pain, which had persisted for 20 minutes and had woken her from sleep crying. Her mother reported no previous history of these symptoms. The patient's symptoms started as she was swimming and diving in a public pool. She denied choking on water, swallowing pool water, or trauma. The patient appeared well and complained only of losing her voice. The patient's mother stated that the patient was not given any medication prior to arrival and that the patient is a strong swimmer. The patient noted that her chest hurt only when she moved into a certain position and that nothing had made the pain better. The patient has no significant medical history or family history, is up to date on vaccinations, and was not prescribed any regular medications. Her only known allergy is to cefadroxil. On presentation, the patient was afebrile with stable vitals, though she spoke in a hoarse voice and had slight tonsillar swelling. Her trachea was found to be tender and midline.
In addition, the patient reported mild pain on hyperextension of her neck, and mild palpable subcutaneous emphysema was noted in the neck.
Treatment and management
During her ER course, a rapid strep test was negative. A chest X-ray and a chest CT were performed. The chest X-ray (PA + lateral) demonstrated a pneumomediastinum in the superior mediastinum (Figure 1). The heart was of normal size, and no significant pericardial or pleural effusions were seen. No blebs were noted on the chest CT. Furthermore, the patient's O2 saturation remained 97-100%.
FIGURE 1: Initial Anterior/Posterior and Lateral Chest X-rays
Air tracking into the soft tissues of the neck and continuous air in the superior mediastinum are seen in the anterior/posterior (left) and lateral (right) chest X-rays. No significant pericardial or pleural effusions are noted.
Acetaminophen suspension was administered, and the patient's pain was well controlled with acetaminophen. She was admitted for observation overnight, and thoracic surgery was consulted; a follow-up chest X-ray was recommended. Repeat imaging performed the next day showed a persistent pneumomediastinum that was slightly decreased compared to the prior study (Figure 2).
FIGURE 2: Follow-Up Anterior/Posterior and Lateral Chest X-ray
There is a mild decrease in the size of the SPM noted in the anterior/posterior (left) and lateral (right) chest X-rays in comparison to that seen in Figure 1.
By the next day, the patient no longer endorsed chest pain or breathing difficulties. As a result, the patient and her mother were counselled on emergent symptoms that would warrant a return to the emergency department after discharge. The patient was also instructed to follow up with her primary care provider the following day.
Discussion
Pathophysiology
Although the exact pathophysiology behind SPM is unknown, the development of a pneumomediastinum has been suggested to follow elevated pulmonary pressures that lead to alveolar rupture [5]. For this patient, it is suspected that the alveolar rupture was related to an abrupt change in pressure while swimming. Free air may dissect along the bronchovascular sheath and enter the mediastinum. This can result in Hamman's sign, a crunching sound heard with systole, and subcutaneous emphysema, findings commonly seen in SPM patients. According to previous studies, subcutaneous emphysema is the most relevant sign aiding diagnosis [6]. This clinical sign was also present in our patient. Non-specific pleuritic chest pain and dyspnea are also common in the clinical presentation of SPM; however, these signs are less specific [7]. In more complicated cases, the free air may penetrate the neck through communications of the mediastinum with the retropharyngeal and submandibular spaces [8]. SPM may be further complicated by decreasing cardiac output and causing cardiac tamponade, by compressing the larynx and causing stridor, or by mediastinitis due to a tear in the patient's esophagus or alveolar rupture. Mediastinitis is a serious condition that carries high mortality if the patient is not treated properly or the condition is recognized too late [9]. Therefore, although most case reports solely observe SPM and provide supportive care, some recommend the administration of prophylactic antibiotics.
Treatment options
Not many protocols are available for the treatment of SPM.
The treatment regimen is at the discretion of the physician, with conservative therapy generally being utilized for SPM patients; however, empiric antibiotics are occasionally also included in management according to some case reports [9]. According to a meta-analysis, the survival rate of SPM is 92.5%, with no recurrence or complications; 25.8% of patients required transfer to the intensive care unit [6]. This demonstrates that although supportive therapy is sufficient in most cases, it is crucial to observe the patient in case their status declines. Clinically stable patients are typically kept under observation and should receive supplemental oxygen if indicated; ambulatory treatment may be appropriate for patients who do not require supplemental oxygen, provided close follow-up can be arranged. Another topic of debate is whether patients should receive prophylactic antibiotics as a precaution against the development of mediastinitis [9]. For most cases, conservative management with bed rest, oxygen inhalation if needed, analgesics, and supportive care proves adequate. It is suggested that empiric antibiotics be limited to preventing infection when the patient presents with leukocytosis and fever [10]. If this patient's alveolar rupture had been a result of swallowing water, prophylactic antibiotics could have been included in the treatment regimen because of the plethora of bacteria in a public pool to which the patient would have been exposed. Follow-up chest X-rays are debatable since most patients recover without complications. However, a patient with worsening or persistent symptoms should have follow-up chest X-rays and may require further radiologic studies.

Conclusions

Early diagnosis of SPM is paramount in its management. Not all patients need hospitalization, as very stable patients can be discharged with close follow-up. Others may need hospitalization for observation and possible further studies. To limit radiation in children, repeat X-rays may not always be needed but should be done if necessary. Most recommendations advise that empiric antibiotics should be added only in the case of leukocytosis, fever, or significant risk of infection. There is a lack of evidence to support the routine use of empiric antibiotics in SPM. Complications from SPM are rare, rendering prophylactic antibiotics unnecessary in the majority of cases. Although no harm has been reported in previous cases from adding antibiotics to the treatment course, there is simply no significant benefit. Given these findings, adding empiric antibiotics to conservative management only when leukocytosis and fever are present is the ideal approach until there is more concrete evidence supporting a particular treatment course. Our patient had partial spontaneous resolution of SPM with supportive care, as evidenced by the second chest X-ray performed. Had our patient presented with leukocytosis and/or fever, the addition of prophylactic antibiotics would have been considered. In uncomplicated cases, observation and conservative management of SPM without prophylactic antibiotics are just as safe and effective as management that includes antibiotics.

Additional Information

Disclosures

Human subjects: Consent was obtained or waived by all participants in this study.
Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Lie algebras of conservation laws of variational ordinary differential equations

We establish a new version of the first Noether Theorem, according to which the (equivalence classes of) first integrals of given Euler-Lagrange equations in one independent variable are in exact one-to-one correspondence with the (equivalence classes of) vector fields satisfying two simple geometric conditions, namely they simultaneously preserve the holonomy distribution of the jet space and the action from which the Euler-Lagrange equations are derived.

Introduction

The first Noether Theorem is surely one of the most celebrated and widely studied results on conservation laws: see, for instance, [9,10,5,12] and references therein. As far as we know, the strongest and most general version of this theorem has been given by Olver in [10,11]. There, in a very clear and precise way, Olver shows that there exists an exact one-to-one correspondence between the family of (equivalence classes of) conservation laws for given Euler-Lagrange equations on sections of a bundle $\pi: E \to M$, and the collection of (equivalence classes of) some special vector fields, called generalized infinitesimal symmetries, defined on the bundle $\pi_\infty: J^\infty(E) \to M$ of the infinite jets of sections of $E$.

We now recall that any jet bundle of finite order $\pi_k: J^k(E) \to M$ is completely determined, up to local equivalences, by the pair $(N, D)$, formed by:
- the total manifold $N := J^k(E)$ of the jet bundle;
- a special distribution $D \subset TN$, called the canonical differential system or holonomy distribution ([14,15,13]).

Indeed, by a result of Yamaguchi, the pair $(N, D)$ characterizes the bundle $\pi_k: J^k(E) \to M$ in the following sense: if $(N', D')$ is another pair, formed by a manifold $N'$ with $\dim N' = \dim N$ and a non-integrable distribution $D' \subset TN'$ on $N'$, satisfying an appropriate set of conditions, then there exists a local diffeomorphism between $N'$ and $N = J^k(E)$, which maps $D'$ into $D$ and allows one to consider $N'$ locally as a jet bundle of order $k$ ([15], Thm. 2.4').

It is therefore natural to expect that Olver's correspondence between conservation laws and generalized infinitesimal symmetries might admit an equivalent formulation in terms of vector fields on the jet bundle satisfying the following simple conditions: their local flows preserve (a) the holonomy distribution $D$ and (b) the action $I$, from which the Euler-Lagrange equations are derived. Such an alternative formulation of Noether-Olver's correspondence is actually possible. In this paper, we prove it for Euler-Lagrange equations in one independent variable. The proof for the general case of equations in several independent variables will appear in a forthcoming paper ([3]; see also [2]).

Let us call infinitesimal symmetries for the action $I$, or shortly I-symmetries, the vector fields of a jet bundle $J^k(E)$ satisfying conditions (a) and (b). Our result indicates that the correspondence between I-symmetries and conserved quantities (better to say, constants of motion, depending on derivatives up to a fixed finite order, possibly higher than the order of the system) is an almost perfect analogue of the well-known bijection between first integrals of a time-independent Hamiltonian system and the Hamiltonian vector fields that preserve the Hamiltonian function $H$ (see e.g. [6], §5.5). However, this analogy breaks down in the following crucial aspect.
First of all, we stress the fact that the above correspondence is established for any system of Euler-Lagrange equations derived from some variational principle. In particular, it applies equally to both Lagrangian and Hamiltonian settings. Hence, one can explicitly apply our construction to determine the I-symmetries associated with the first integrals of a time-independent Hamiltonian system that depend just on phase space coordinates (to distinguish them from all other constants of motion, we call them first integrals of elementary type). Comparing them with the Hamiltonian vector fields associated with such first integrals, one realizes the following somewhat unexpected fact: the I-symmetries and the Hamiltonian vector fields are different objects, even though there exists a very natural bijection between them. There is however a very simple reason behind such a difference: a Hamiltonian vector field corresponds to a first integral of elementary type (determined up to a constant) by means of a contraction with the canonical symplectic 2-form of the phase space; an I-symmetry corresponds to a first integral of the same kind by means of a contraction with the Poincaré-Cartan 1-form of the Hamiltonian system (see §4.3 for details). On the basis of this fact, our alternative presentation of the correspondence between conservation laws and I-symmetries can be considered as the natural generalization of the correspondence between first integrals of elementary type and infinitesimal symmetries of a Poincaré-Cartan 1-form, and not of the canonical symplectic form.

In addition, the explicit details of our proof show the following facts:

1) For any $k \geq 0$ and for any action $I$ on curves $\gamma: I \subset \mathbb{R} \to E$, determined by a Lagrangian which depends on the $k'$-th order jets of such curves with $k' \leq \frac{k}{2} - 1$, there exists at least one 1-form which is a natural analogue of the Poincaré-Cartan 1-forms of Hamiltonian systems (we call it a 1-form of Poincaré-Cartan type).

2) For a generic action $I$, there exist several (not just one!) associated 1-forms of Poincaré-Cartan type, and the explicit correspondence between I-symmetries and constants of motion does depend on the choice of one such 1-form. It is only the associated map between equivalence classes of I-symmetries and of conservation laws which is independent of this choice.

3) For any fixed $u \in J^k(E)$, the collection $g_I$ of germs at $u$ of I-symmetries has a natural structure of an infinite-dimensional Lie algebra, determined by the usual Lie brackets between vector fields. However, in general, the Lie algebra structure of $g_I$ does not induce a natural Lie algebra structure on the space ConstMot of germs of (locally defined) constants of motion. One can impose a corresponding natural Lie algebra structure on certain subspaces of ConstMot only if special restrictions are considered, as for instance if one considers only Hamiltonian systems and first integrals of elementary type. Nonetheless, there always exists a natural linear representation of $g_I$ on ConstMot, which makes ConstMot a $g_I$-module (see §3.3 below for details).

We observe that our construction of the correspondence between I-symmetries and conservation laws makes use only of classical operators of Differential Geometry, like e.g. exterior differentials, Lie derivatives etc., and it has been designed to admit simple and direct generalizations to Euler-Lagrange equations on supermanifolds. We plan to undertake this task in a future paper.
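To illustrate the contraction with a Poincaré-Cartan 1-form in the simplest possible case, here is a standard one-dimensional example, written with the first-order conventions introduced in §2 below (the contact form $\omega_{(0)} = dy - y_{(1)}\,dt$ and the Poincaré-Cartan form used here are the classical ones; signs depend on orientation conventions, so this should be read as a sketch, not as the authors' normalization). For the Lagrangian of a particle in a potential,

\[
L = \tfrac{1}{2}\, m\, (y_{(1)})^2 - V(y), \qquad
\alpha_o = L\, dt + \frac{\partial L}{\partial y_{(1)}}\,\omega_{(0)}
         = L\, dt + m\, y_{(1)}\,\big(dy - y_{(1)}\, dt\big),
\]

the time translation $X = \partial/\partial t$ preserves both the holonomy distribution and $\alpha_o$, and its contraction with $\alpha_o$ gives

\[
\imath_X \alpha_o = L - m\,(y_{(1)})^2
   = -\Big( \tfrac{1}{2}\, m\,(y_{(1)})^2 + V(y) \Big) = -E,
\]

i.e. the associated constant of motion is, up to sign, the total energy, as expected.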
As a conclusive remark, we recall that Noether theorems have a long history, clearly exposed in Kosmann-Schwarzbach's book [5] and summarized also in Olver's review [12]. In [5], pp. 143-144, the author stresses the clarity and completeness of Olver's presentation in [10] and suggests further investigations towards other kinds of geometrical approaches to Noether theorems (see, for instance, [4]). In our opinion, the results of this paper may be considered as a contribution in this direction.

The paper is structured as follows. In §2, we introduce the definition of the holonomic distribution of a jet bundle $J^k(E)$, associated with a bundle $\pi: E \to \mathbb{R}$ with 1-dimensional basis, and of variational equivalences between p-forms on $J^k(E)$. The interest in such equivalence relations is motivated by the following facts: (i) a Hamiltonian or Lagrangian action $I$ on curves $\gamma: I \subset \mathbb{R} \to E$, depending on the k-th order jets of these curves, can always be defined as the integral of a 1-form of $J^k(E)$ along the traces in $J^k(E)$ of the curves of jets $t \mapsto j^k_t(\gamma)$; (ii) two 1-forms on $J^k(E)$ determine the same action $I$ if and only if they are variationally equivalent; (iii) the Euler-Lagrange equations, which characterize the stationary curves for $I$, are given by the components of a special 2-form, which is variationally equivalent to the exterior differentials of the (variationally equivalent) 1-forms that determine $I$. In §3, we introduce the notion of infinitesimal symmetries of an action $I$ and prove the advertised correspondence between (equivalence classes of) such infinitesimal symmetries and (equivalence classes of) constants of motion for the Euler-Lagrange equations of $I$. In §4, we determine the infinitesimal symmetries of the action associated with a (time-independent) Hamiltonian system, and compare them with the Hamiltonian vector fields associated with first integrals of elementary type. Finally, using the Darboux Theorem, we get our final result, Theorem 4.4, which generalizes a previous theorem by Mukunda ([8]).

Acknowledgements. We are grateful to Franco Cardin and Wlodzimierz Tulczyjew for very useful discussions on many aspects of this paper.

2. Geometrization of Euler-Lagrange equations of one independent variable

2.1. Notational remarks. In this paper we are concerned with the systems of ordinary differential equations for curves $\gamma: I \subset \mathbb{R} \to M$ on an n-dimensional manifold $M$, which are Euler-Lagrange equations determined by some variational principle. The main examples of such equations are given by the differential systems occurring in Lagrangian and Hamiltonian mechanics. In these cases, the manifold $M$ plays the role of the configuration space or phase space of the considered physical system. The parameter $t \in I \subset \mathbb{R}$ of the curve has to be considered as the time coordinate. In our discussion, the 1-dimensional manifold $\mathbb{R}$ is constantly considered with a fixed orientation, namely the one determined by the trivial coordinate system $\mathrm{Id}_{\mathbb{R}} = (t): \mathbb{R} \to \mathbb{R}$. The globally defined 1-form $dt$ is referred to as the standard volume form of $\mathbb{R}$. It is immediate to realise that any (smooth) parameterized curve $\gamma: I \subset \mathbb{R} \to M$ is uniquely associated with the corresponding (local) section of the trivial bundle $\pi: E = M \times \mathbb{R} \to \mathbb{R}$. So, with no loss of generality, in place of parameterized curves in $M$, all results of this paper are expressed in terms of local (smooth) sections of the trivial bundle $(E = M \times \mathbb{R}, \mathbb{R}, \pi)$. Consider an integer $k \geq 1$.
Given a local section $\gamma: I \to E = \mathbb{R} \times M$, we use the notation $j^k_t(\gamma)$ for the k-th order jet of $\gamma$ at $t \in I$. The space of k-jets of local sections of the bundle $(E, \mathbb{R}, \pi)$ is denoted by $J^k(E)$. For any $1 \leq \ell \leq k$, we indicate by $\pi^k_\ell$ the natural projection $\pi^k_\ell: J^k(E) \to J^\ell(E)$. We also consider the natural projections $\pi^k_0: J^k(E) \to E$ and $\pi^k_{-1}: J^k(E) \to \mathbb{R}$. Given a section $\gamma: I \subset \mathbb{R} \to E$, we call the lift of $\gamma$ to the k-th order the associated curve of jets $\gamma^{(k)}: t \mapsto j^k_t(\gamma)$. Coordinates on $E$ of the form $\xi(t, x) = (t, y^1(x), \ldots, y^n(x))$ are called associated with a chart $\xi = (y^i)$ of $M$. In general, any set of coordinates on $E$ of this form is called a set of adapted coordinates. Given a set of adapted coordinates $\xi = (t, y^i)$ on $I \times U \subset E$, we may consider the naturally associated coordinates $\xi^{(k)}$, which send a given k-th order jet $u = j^k_t(\gamma)$ into the N-tuple, with $N = n(k+1)+1$, formed by $t$ and by the values at $t$ of the components of $\gamma$ and of their derivatives up to order $k$. We call such coordinates a set of adapted coordinates of $J^k(E)$ and denote them by $\xi^{(k)} = (t, y^i, y^i_{(1)}, \ldots, y^i_{(k)})$.

On $J^k(E)$ one has the holonomy distribution $D$, which in adapted coordinates is generated by the total derivative $\frac{d}{dt} = \frac{\partial}{\partial t} + \sum_{a=0}^{k-1} y^i_{(a+1)} \frac{\partial}{\partial y^i_{(a)}}$ and the vector fields $\frac{\partial}{\partial y^j_{(k)}}$. The vectors in $D$ and the vector fields with values in $D$ are called holonomic. Consider a system of adapted coordinates $(t, y^i, y^i_{(1)}, \ldots, y^i_{(k)})$. If $\gamma$ is a section such that $u = j^k_t(\gamma)$, the components of the velocity $v = \frac{d\gamma^{(k)}}{dt}\big|_t$ can be read off directly in such coordinates, and $v$ is holonomic. Two p-forms defined on the same open subset are called variationally equivalent if their difference is of the form $\lambda + d\mu$ for some holonomic p-form $\lambda$ and some holonomic $(p-1)$-form $\mu$. By previous remarks, given a set of adapted coordinates $\xi^{(k)} = (t, y^i_{(a)})$, the holonomic 1-forms are exactly those that are linear combinations of the 1-forms $\omega^i_{(a)} = dy^i_{(a)} - y^i_{(a+1)}\, dt$ (here, $y^i_{(k+1)} := 0$) at all points. Note also that if $\mu$ is holonomic, its differential $d\mu$ might be non-holonomic. For instance, the 1-forms $\omega^i_{(a)}$, $a \leq k-2$, are holonomic, but their differentials are of the form $d\omega^i_{(a)} = dt \wedge dy^i_{(a+1)} = dt \wedge \omega^i_{(a+1)}$ and are not holonomic.

The relation of variational equivalence is an equivalence relation between p-forms defined on the same open subset $U \subset J^k(E)$. If $\alpha$ is a p-form on $U$, we call the variational class of $\alpha$ the collection $[\alpha]$ of all p-forms that are variationally equivalent to $\alpha$. The main motivation for considering the notion of variational classes is discussed in the next section. A basic example of an action is the functional $I_L$, given by integrating the 1-form $L\, dt$ along k-th order lifts of sections; here $L: J^k(E) \to \mathbb{R}$ denotes a k-th order Lagrangian, that is, a $C^\infty$ real function on $J^k(E)$. For such purposes, it is very convenient to consider the following notion. Here is a sequence of remarks that motivate this definition.

(3) Let $\alpha$ be a 1-form on $J^k(E)$ and denote by $\widetilde\alpha = (\pi^{k+1}_k)^*(\alpha)$ the pull-back of $\alpha$ on the jet space $J^{k+1}(E)$. Let also $W \subset J^{k+1}(E)$ be an open subset admitting a set of adapted coordinates $\xi^{(k)} = (t, y^i, y^i_{(a)})$. The collection of 1-forms $(dt, \omega^i_{(a)}, dy^j_{(k+1)})$ is a coframe field on $W$, and any 1-form is a linear combination of such 1-forms at any point. Since $\widetilde\alpha$ is the pull-back of a form on $J^k(E)$, it follows that $\widetilde\alpha|_W$ has trivial components along the 1-forms $dy^j_{(k+1)}$. It is therefore of the form $\widetilde\alpha|_W = L\, dt + \sum_{a=0}^{k} \alpha_{i(a)}\, \omega^i_{(a)}$ for some smooth real functions $L$, $\alpha_{i(a)}$ on $W$. Since $\sum_{a=0}^{k} \alpha_{i(a)}\, \omega^i_{(a)}$ is holonomic and $dt$ coincides with the pull-back of the standard volume form $dt$ of $\mathbb{R}$, we conclude that $[\widetilde\alpha|_W] = [\alpha_L]$, and the values of the functional $I_{[\alpha]}$ on sections of $W$ are given by the integrals of $L$ along the lifted sections. This means that, locally, $I_{[\alpha]}$ can always be identified with a functional of the form $I_L$, given by an appropriate (k+1)-th order Lagrangian $L$.

(4) Let $M = T^*\mathbb{R}^N$ be the phase space of a classical mechanical system and $H: T^*\mathbb{R}^N \to \mathbb{R}$ the Hamiltonian, which determines the dynamics of the system.
As it is well known, the Hamilton equations $\dot q^i = \frac{\partial H}{\partial p_i}$, $\dot p_j = -\frac{\partial H}{\partial q^j}$ are the Euler-Lagrange equations that arise from a variational principle on the action $\int (p_i\, dq^i - H\, dt)$. By these observations, it is clear that the actions determined by variational classes constitute a set that naturally includes and extends the class of all actions in Lagrangian and Hamiltonian mechanics. With the purpose of dealing with both kinds of such actions on the same footing, from now on our discussion is carried out in the general terms of variational classes and associated actions. We conclude with a very convenient definition. Using this definition, by a pull-back, a p-form $\alpha$ on $J^r(E)$ can be considered as a p-form of order $r$ on a jet space $J^k(E)$ for any $k \geq r$.

Variational Principles and Euler-Lagrange equations. We now want to introduce a definition of variational principles for actions given by variational classes, which directly implies the usual Euler-Lagrange equations in Lagrangian or Hamiltonian settings. For this, we first need to consider the following generalized definition of a variation with fixed boundary. Let $\gamma: I \to E = M \times \mathbb{R}$ be a local section and $[a, b] \subset I$ a closed subinterval of its domain $I$. We call a smooth variation of $\gamma$ with fixed boundary up to order $k$ any smooth map $F: [a,b] \times ]-\varepsilon, \varepsilon[ \to E$ with $F(\cdot, 0) = \gamma|_{[a,b]}$, whose k-th order jets at the endpoints do not depend on the parameter $s$. Let $I_{[\alpha]}$ be the action determined by a 1-form $\alpha$ of order $r$ in $J^k(E)$. We say that $\gamma$ satisfies the variational principle determined by $[\alpha]$ if the first variation of $I_{[\alpha]}$ vanishes along every such variation (condition (2.6)). Condition (2.6) clearly depends only on the first-order jet in the variable $s$ of the variation $F$. Indeed, it is equivalent to a condition which involves some special vector fields, which we now introduce. Let $\gamma: I \to E = M \times \mathbb{R}$ be a section; a variational field along $\gamma$ is a vector field, defined only at the points of the lifted curve, determined by a variation $F$ by means of formula (2.7), where $F^{(k)}$ is the map of lifted variations. Let $[a, b] \subset I$ and denote by $F$ a smooth variation with fixed boundaries up to order $r$ of $\gamma|_J$. We also indicate by $W$ the variational field along $\gamma|_{[a,b]}$ which is determined by $F$ by means of (2.7). By the Stokes Theorem and the conditions satisfied by $F$ at the points $(a, s)$ and $(b, s)$, the boundary terms vanish. From this, the claim follows.

At first glance, condition (2.8) looks difficult to handle, because it involves the notion of variational vector fields, which are objects that might be hard to characterize in terms of explicit differential equations. On the other hand, we observe that (2.8) is satisfied if and only if the analogous condition holds for every 2-form of the variational class $[d\alpha]$. Indeed, if $\beta = d\alpha + \lambda + d\mu$ for some holonomic $\lambda$ and $\mu$, then (2.9) holds. By this fact, it turns out that it is very convenient to consider the following kind of 2-forms, which, as we will shortly see, lead naturally to the Euler-Lagrange equations of the considered variational principle. The defining conditions (2.10) and (2.11) are required to be satisfied at all points of $U$. It follows that $\sigma$ satisfies (2.10) if and only if it is of the appropriate local form for some smooth functions $\sigma_j$ and $\sigma_{k\ell}$. Coming back to (2.8) and (2.4), by [13] Prop. A.2, if $\alpha$ is a 1-form which is locally variationally equivalent to a 1-form $L\,dt$ of order $r$, and it is considered (through a pull-back) as a 1-form on $J^k(E)$, with $k \geq 2r$, the class $[d\alpha]$ on $J^k(E)$ contains exactly one source form $\sigma \in [d\alpha]$, which has the expression $\sigma = \sigma_i\, \omega^i_{(0)} \wedge dt$ (2.12) in any set of adapted coordinates. For the reader's convenience, we show the existence of a source form as above in the simple case in which $\alpha$ is defined on an open set $U \subset J^k(E)$, $k \geq 2$, endowed with adapted coordinates $\xi^{(k)} = (t, y^i_{(a)})$, and it is already of the form $\alpha = L\, dt$ for some Lagrangian $L$ of order 1. In this case, since the 1-forms $\omega^i_{(a)}$, $a = 0, 1$, are holonomic, we see that the variational class $[d\alpha]$ contains the source form

\[
\sigma = \Big( \frac{\partial L}{\partial y^i} - \frac{d}{dt}\, \frac{\partial L}{\partial y^i_{(1)}} \Big)\, \omega^i_{(0)} \wedge dt .
\]
(2.13) We are now able to prove that that the sections which satisfy a variational principle, are exactly the solutions of an appropriate system of Euler-Lagrange equations, as expected. Theorem 2.8. Assume that α is a 1-form on a jet space J k (E), which is (locally) variationally equivalent to some form of order r of the kind Ldt for some r ≤ k 2 . Let also σ be a source form in [dα]. A section γ : for any t ∈ I . (2.14) Proof. First of all, we observe that if σ and σ ′ are source forms in the same variational class [dα], i.e., such that σ − σ ′ = λ + dµ for some holonomic λ and µ, then dµ is holonomic and the whole difference σ − σ ′ is holonomic. In fact, if dµ = 0 and not holonomic, in some set of adapted coordinates dµ is necessarily of the form for some non-trivial functions µ a i . But this would contradict the fact that σ and σ ′ are both source forms, hence both satisfying (2.11). Due to this and the fact that, for any section γ, the tangent vectorsγ By this remark, with no loss of generality, from now on we may assume that σ is the unique source form of [dα] described in (2.12). By (2.4) and Proposition 2.6, γ satisfies the variational principle if and only if for any closed subinterval [a, b] ⊂ I and any k-th order variational field W . b] is included in the domain of a system of adapted coordinates ξ (k) = (t, y i (a) ), we have that W and ı W σ are of the form We now observe that for any choice of functions f i : γ (k) ([a, b]) → R that vanish identically on neighborhoods of a and b, one can construct a smooth variation F with fixed boundary up to order k, whose associated variational field W satisfies the claim follows. By previous remarks and the proof of Theorem 2.8, using a set of adapted coordinates, the equation (2.14) is equivalent to the system The reader can directly check that (2.17) coincide with the Eulero-Lagrange equations of a Lagrangian L also in the cases in which L is of order higher than one. As we will shortly see, the (first) Noether Theorem establishes a natural correspondence between symmetries of I [α] and conservation laws. Indeed, such correspondence appears to be a bijection, provided that the objects that are called symmetries are specified in an appropriate way. To this purpose, the following definition is crucial. Definition 3.2. Let X be a vector field and α a 1-form on J k (E). a) X is called infinitesimal symmetry of D (shortly, D-symmetry) if, for any holonomic vector field Y , the Lie derivative L X Y is also a holonomic vector field. b) X is called infinitesimal symmetry for I [α] if it is a D-symmetry and L X α is holonomic for some (and hence for all) α ∈ [α]. b) If X is a D-symmetry and λ is a holonomic p-form, also the p-forms Φ X t * λ, t ∈] − ε, ε[⊂ R, and the Lie derivative L X λ, are holonomic. From this, it follows that if α and α ′ are variationally equivalent (i.e. α − α ′ = λ + dµ, with λ, µ holonomic), then L X α is holonomic if and only if L X α ′ is holonomic. This explains why the definition of infinitesimal symmetry for I In the next proposition, we show that the D-symmetries and the infinitesimal symmetries for an action I [α] coincide with the vector fields that satisfy an appropriate system of partial differential equations. Proposition 3.4. Let X and α be a vector field and a 1-form on J k (E), respectively, and ξ (k) = (t, y i (a) ) a system of adapted coordinates on U ⊂ J k (E). Then: 1) X| U is a D-symmetry if and only if it satisfies the following system of p.d.e.'s Proof. 
We recall that D| U is generated by the vector fields d dt and ∂ ∂y j We conclude with an explicit description of D-symmetries in adapted coordinates. In the next statement, ξ (k) = (t, y i , y i (a) ) is a fixed system of adapted coordinates on an open subset U ⊂ J k (E). Moreover, for any smooth map v = (v 0 , v 1 , . . . , v n ) : U ⊂ J k (E) −→ R n+1 we adopt the notation X v to indicate the vector field on U defined by (in this formula, we assume y i (k+1) = 0). Notice that, by (3.3) and (3.4), we may also write X v as = 0 for all 1 ≤ i ≤ n. Proof. By Proposition 3.4, a vector field on U is a D-symmetry if and only if it satisfies the equations for any 0 ≤ a ≤ k − 1. We recall that Hence, the first set of equations in (3.6) means that, for any 0 ≤ a ≤ k − 1, This shows that all components X i (a) , a ≥ 1, are uniquely determined by the components X i and, by induction, one can check that X is as in (3.3). In order to conclude, it suffices to show that the other equations in (3.6) are equivalent to = 0. Indeed, denoting by z A an arbitrary coordinate amongst (t, y i , y i (a) ), one has that L X ∂ ∂y j This means that the second set of equations in (3.6) is equivalent to Now, setting a = k − 1 and taking the derivative of (3.9) w.r.t. y i (k) for some i = j, we get . On the other hand, considering equation (3.9) with j = i and taking the derivative w.r.t. y j (k) we have . We have now all the ingredients for the two parts of the Noether Theorem, which are stated and proved in the next section. Noether Theorem. Definition 3.6. Let [α] be a variational class of 1-forms on J k (E), determined by a 1-form α, which is locally variationally equivalent to 1-forms Ldt of order r for some r ≤ k 2 . A 1-form α o ∈ [α] is called of Poincaré-Cartan type if dα o is a source form modulo a holonomic 2-form. The main example of such kind of 1-forms is given by the Poincaré-Cartan which is a source form on any jet space J k (E), k ≥ 1, of the trivial bundle π : Note that if α is a 1-form on J k (E), satisfying the assumptions of (3.6), then for any u ∈ J k (E) there exists a neighborhood U of u such that the variational class [α| U ] contains a 1-form of Poincaré-Cartan type. This can be directly seen as follows: consider a neighbourhood U admitting a system of adapted coordinates, and let σ ∈ [dα| U ] be the source form described in (2.12). Then σ = dα| U + dµ + λ = d(α| U + µ) + λ, for some holonomic µ and λ, and α o = α| U + µ is a 1-form of Poincaré-Cartan type in the variational class [α| U ]. We also remark that, replacing J k (E) by a jet space of higher order, one may safely assume that the variational class [α| U ] contains at least one 1form of Poincaré-Cartan type of order r ≤ k − 1. We will shortly see that such harmless assumption is often quite convenient. The notion of 1-forms of Poincaré-Cartan type leads to the following useful characterisation of infinitesimal symmetries of a given action. As in Proposition 3.5, we consider as fixed a system of adapted coordinates ξ (k) = (t, y i , y i (a) ) on an open subset U ⊂ J k (E) and for any R n+1 -valued smooth map v = (v i ) on U , we denote by X v the associated vector field defined in (3.3). Proposition 3.7. Assume that dim M ≥ 2 and let α o be a 1-form of Poincaré-Cartan type in [α] of order r ≤ k − 1 and X v a D-symmetry on U associated with v = (v i ). Then X v is an infinitesimal symmetry for I [α] if and only if it satisfies the linear differential equation 10) where σ is any source form of the variational class [α| U ]. Proof. 
Let λ be the holonomic 2-form defined by λ = dα o − σ. By Proposition 3.4 (2) and the fact that = 0 for all components X A v of X v (Proposition 3.5) and α o is of order r ≤ k − 1, the second equality is trivially satisfied for any 1 ≤ i ≤ k. By the first equation in (3.11), the claim follows. We can now state and prove the Noether Theorem in its two parts, direct and inverse. Proof. By definition of 1-forms of Poincaré-Cartan type, dα o = σ + λ, where σ is a source form in [dα o ] and λ is a holonomic 2-form. It follows that, for any section γ : Since σ is a source form of [dα o ], by Theorem 2.8, if γ is a solution of the variational principle of Now, in order to state and prove the inverse of this result, we need to consider a new notion. Let [α] be a variational class of 1-forms of J k (E) and assume that σ = σ i ω i (0) ∧ dt is a source form of the kind (2.12) on some open subset W ⊂ J k (E). Assume also that σ is of order r o ≤ k−1 and consider the differentials dσ i of the components σ i of σ. By the assumption on the order of σ, such differentials are equal to Due to this, for any k-th order lift γ (k) : I −→ W of a section γ of E, we have Hence a lifted section γ (k) corresponds to a solution of the Euler-Lagrange equations (3.12) if and only if it is a solution of the system of partial differential equations (3.13) The system (3.13) is usually called first prolongation of (3.12). We stress the fact if the functions (i.e., 0-forms) σ i which defined the Euler-Lagrange equations are of order r o , the functions that define the first prolongation (3.13) are 0-forms of order r o + 1 ≤ k. Consider now the integer p o := k − r o . Iterating the above argument, we can directly prove that the system (3.12) is equivalent to We call it full prolongation of (3.12) on the k-order jet space J k (E). Note that the order of the collection of functions appearing in a full prolongation is generically not less than k. 3) a p o -tuple of constants of motion (g (1) , . . . , g (po) ), on U , vanishing at all points γ (k) (t) of all lifts of the solutions of the variational principle such that Proof. Consider a system of adapted coordinates ξ (k) = (t, y i , y i (a) ) and let σ = σ i ω i (0) ∧ dt on W be a source form satisfying the non-degeneracy condition (a). By Propositions 3.5 and 3.7, we need to show that there exists a neighbourhood U of Z σ , a smooth R n+1 -valued map v = (v 0 , v i ) : U → R n+1 and p o constants of motion g (i) on U , vanishing on lifts γ (k) (I) of solutions, such that the vector field X v satisfies the system of linear equations If we express α and σ as sums of the form (3.18) We claim that the function df dt : W −→ R vanishes identically on Z σ . Indeed, since Z σ = {F σ = 0} is equal to the collection of the jets of the (k-th order lifts of) solutions to the variational principle, for any u ∈ Z σ , where we denoted by γ (k) the k-th order lift of a solution with u = γ (k) (t o ). Since f is a constant of motion and it is of order k − 1, we get which proves the claim. From this, the fact that F σ : W −→ R n(po+1) is a submersion at any u ∈ Z σ and standard properties of submanifolds (see e.g., [7], Lemma 2.1 and [10], Prop. 
2.10), there exists a neighborhood U ⊂ W and n ·(p o + 1) smooth functions v j (ℓ) , 1 ≤ j ≤ n, 0 ≤ ℓ ≤ p o , on U (not uniquely determined!), such that Let g (1) : U → R be the smooth function defined by This function vanishes identically on the jets of the solutions (it is pointwise equal to a linear combination components of the map F σ ) and it is therefore a constant of motion. Furthermore, so that (3.19) can be re-written in the form . Iterating this line of arguments, we conclude that (3.19) is equivalent to an equality of the form for some appropriate smooth functions v i , g (ℓ) : U −→ R, where the g (ℓ) are constants of motion that vanish identically on the jets of the solutions of the variational principle. Since α 0 = α d dt is nowhere vanishing on W, we may consider the function and the corresponding (n + 1)-tuple of functions on U ⊂ W By construction, v satisfies (3.18) and X (f ) := X v is an infinitesimal symmetry satisfying (3.16). 3.3. Correspondence between infinitesimal symmetries and constants of motion. Let [α] be a variational class on J k (E), which is locally determined by a 1-form Ldt of order r with 2r ≤ k, and assume that X is an infinitesimal symmetry X for the action I is a constant of motion with the property that, for any k-lift γ (κ) of a section of E (here, σ is a source form of It is therefore convenient to consider the following definition. All such classes of germs have natural structures of vector spaces. The space Σ is also endowed with a natural Lie algebra structure, given by the usual Lie brackets between vector fields. Using the above notation, when dim M ≥ 2 and the non-degeneracy conditions (a) and (b) of Theorem 3.10 are satisfied, the two parts of Noether Theorem can be restated saying that for any given choice of a 1-form α o ∈ [α] of Poincaré-Cartan type of order r ′ ≤ k − 1, there exists a natural surjective linear map From the definition of the map ϕ (αo) , one has that ker ϕ (αo) = Triv (αo) and the above homomorphism induces an isomorphism of vector spaces This isomorphism does depend on the choice of α o . However, if one considers the quotients of the vector spaces Σ and ConstMot by the subspaces Triv and Null + Const, respectively, the surjective map (3.23) establishes a vector space isomorphism which is now independent on the choice of α o . A priori, there is no reason for Triv (αo) or Triv to be ideals of the Lie algebra Σ. Due to this, the quotients Σ/Triv (αo) and Σ/Triv cannot be expected to have a natural Lie algebra structure. However, something can be said on this regard, provided that we consider the following restricted class of infinitesimal symmetries. (3.24) is well-defined and is a linear representation. Composing with the isomorphism ı (αo) , we get the following linear map for any X ∈ Σ (αo) : where Z (f ) is any germ in Σ (αo) that is mapped onto f by ϕ (αo) . By construction, the map ρ determines a linear representation of Σ (αo) and we have the following: is a linear representation of the space of (germs of ) α o -symmetries Σ (αo) on the quotient space of (germs of ) constants of motion ConstMot/Null. Remark 3.14. A similar argument can be used to show the existence of a natural linear representation of Σ (αo) also on the quotient space ConstMot/(Null + Const). Infinitesimal symmetries and Hamiltonian vector fields in Hamiltonian mechanics 4.1. Notational issues. From now on, we assume that the configuration space M is a cotangent bundle M = T * N of an n-dimensional manifold N . 
We denote by π : T * N → N the canonical projection of T * N and for any system of coordinates η = (q 1 , . . . , q n ) : U ⊂ N −→ R n of N , we call associated coordinates on T * N the map . . , q n , p 1 , . . . , p n ) . In the following, we consider only this kind of coordinates on T * N and the systems of adapted coordinates on J k (E), E = T * N × R, are assumed associated with such coordinates and of the form The components of a vector field X on U ⊂ J k (E) along the coordinate vector fields ∂ ∂q i (a) (resp. ∂ ∂p j(a) ) are denoted by X i (a) (resp. X j(a) ), that is The holonomic 1-forms (2.2) are now denoted by We finally denote by ϑ and Ω the tautological 1-form and canonical symplectic 2-form, respectively, of T * N . We recall that they are defined by ϑ| β := β( π * (·)) and Ω = dϑ and that, in coordinates ξ η = (q i , p j ), they are given by the well-known expressions ϑ = p i dq i , Ω = dϑ = dp i ∧ dq i . If α H is considered as a 1-form of J 1 (E), we may see that it is (locally) variationally equivalent to the 1-form This means that the action I [α H ] is (locally) determined by the Lagrangian , which is clearly of order 1. Furthermore, showing that dα H is a source form, hence that α H is of Poincaré-Cartan type. These observations show that: 1) Theorems 3.8 and 3.10 can be used for I [α H ] whenever α H is considered on a jet space J k (E) with k ≥ 2. 2) If α H is taken as a 1-form on J 2 (E) and we consider adapted coordinates (t, q i (a) , p j (a) ) a=0,1,2 , the source form σ in the variational class [dα H ] of the kind (2.12) is 3) The system given by the full prolongation of the Euler-Lagrange equations, determined by (4.1), is Due to this and Proposition 3.7, given a (2n + 1)-tuple v = (v 0 , v i , v j ) of smooth functions on a subset W ⊂ J 2 (E), the D-symmetry is an infinitesimal symmetry for I [α H ] if and only if v satisfies the equation In addition, by Theorem 3.10, given a constant of motion f on W ⊂ J 2 (E), of order less than or equal to 1, we may locally determine a (2n + 1)tuple v, corresponding to an infinitesimal symmetry X v for I [α H ] and such that where g is a constant of motion that vanishes identically along the solutions of the variational principle. By the proof of Theorem 3.10, the constant of motion g (identically vanishing on solutions) and the infinitesimal symmetry X v are determined by the following steps: 2) . Note that such functions do exist, but are not uniquely determined by f . Step 2. Determine the constant of motion g by the formula . Then the infinitesimal symmetry X v is determined by the (2n and it is therefore equal to . Since the first integral f depends only on the coordinates of T * M , it follows that for any (q i o , p oi ) ∈ T * M . By arbitrariness of (q i o , p oj ), it follows that This shows that Step 1 of previous section can be easily solved by setting From this, following Step 2, we get that the (vanishing along solutions) constant of motion g is identically vanishing and that the infinitesimal symmetry X (f ) associated with f is . Consider now the natural immersion ii) Conversely, if Y (f ) is a Hamiltonian vector field on U ⊂ T * M , associated with a function f on U and satisfying (4.10), one can directly check that f : U −→ R is a first integral of elementary type. 
iii) In any open subset W ⊂ J 2 (E), where a system of adapted coordinates ξ (2) = (t, q i , p j , q i (1) , p j(1) , q i (2) , p (2)j) ) are defined, given a Hamiltonian vector field as in (ii), there is a unique infinitesimal symmetry X (f ) for the variational principle of I [α H ] of the form (4.3) and such that Y (f ) := π * X (f ) ı(E) . iv) Given u = ı(β o ) ∈ ı(E) ⊂ J 2 (E), the correspondence X −→ π * X| ı(E) determines an isomorphism between the Lie algebra of germs at β o of the infinitesimal symmetries as in (4.3) and the Lie algebra of germs at β o of the Hamiltonian vector fields on T * M , which satisfy (4.10). These facts can be nicely summarized using the following notion. By the above discussion, the correspondence between infinitesimal symmetries and constants of motion, given by Noether Theorems, determine the isomorphism of vector spaces ϕ : sp H −→ I elem /R , (4.12) where, for any Y ∈ sp H , the corresponding equivalence class ϕ(Y ) ∈ I elem /R is determined by the f (determined up to an additive constant) such that ı Y Ω| u = df | u . Since sp H has a natural structure of Lie algebra, the vector space isomorphism ϕ induces a natural Lie algebra structure on I elem /R. We remark that the Lie brackets of the induced Lie algebra structure are the usual Poisson brackets of the symplectic manifold (T * M, Ω). 4.4. The infinite-dimensional Lie algebra sp H . Let Ω o be the standard symplectic form of R 2n , i.e. and denote by sp (1) ∞ (2n, R) the Lie subalgebra of sp ∞ (2n, R), determined by the vector fields, commuting with ∂ ∂x 1 . We recall that sp ∞ (2n, R) is the infinite-dimensional Lie algebra of the germs at 0 of vector fields of R 2n , which preserve Ω o . Consequently, the (infinite-dimensional) Lie algebra sp Note that the second condition in (4.14) is equivalent to require that dx 2 (X) = L X dx 2 = L X (ı ∂ ∂x 1 Ω o ) = 0 . One can directly check that X ∈ sp (1) ∞ (2n, R) if and only if X is of the form where h and X i are functions that satisfy the equations where Ω ′ o denotes the standard symplectic form of R 2n−2 = { x ∈ R 2n : This means that Let H : U ⊂ T * M −→ R be a time-independent Hamiltonian. An element β ∈ U is called point of non-degeneracy for H if dH| β = 0. Proof. By the proof of Darboux Theorem (see e.g. [1]), since dH| u = 0, there exists a system of coordinates around u, in which Ω assumes the same expression of the standard symplectic form Ω o and the function H is equal to H = x 2 . From this, the conclusion follows. From the above proposition, around points of non-degeneracy, all Lie algebras of (germs of) first integrals of elementary type of all Hamiltonians are infinite-dimensional and mutually isomorphic. The same clearly occurs for any subalgebra g of such Lie algebras and gives rise to the following phenomenon (see also [8] for a constructive proof of this property for some special Lie algebras). Theorem 4.4. Assume that for a given Hamiltonian H there exists a collection of first integrals of elementary type, which (by means of Poisson brackets) constitutes a (finite or infinite) dimensional Lie algebra g ⊂ sp H at a point of non-degeneracy u. Then the same occurs for any other Hamiltonian H ′ in the following sense: around any point of non-degeneracy of H ′ , there exists a collection of (locally defined!) first integrals of elementary type for H ′ , which constitutes a Lie algebra g ′ ⊂ sp H ′ that is isomorphic to g.
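For reference, the symplectic conventions used in §4 can be collected as follows; this is classical symplectic geometry, with the signs fixed by the choice $\Omega = d\vartheta = dp_i \wedge dq^i$ made in the text (other references use $\Omega = dq^i \wedge dp_i$, which flips the signs below), so it should be read as a sketch of the standard formulas rather than as the authors' own display. The standard symplectic form of $\mathbb{R}^{2n}$ in the coordinates $(x^1, \ldots, x^{2n})$ is

\[
\Omega_o = \sum_{i=1}^{n} dx^{2i-1} \wedge dx^{2i}, \qquad
\imath_{\partial/\partial x^1}\, \Omega_o = dx^2 ,
\]

so that a vector field commuting with $\partial/\partial x^1$ and preserving $\Omega_o$ automatically preserves $dx^2$ (i.e. $L_X dx^2 = 0$), as used above. Moreover, the Hamiltonian vector field $Y_f$ of a function $f$, defined by $\imath_{Y_f}\Omega = df$, has the coordinate expression

\[
Y_f = -\frac{\partial f}{\partial p_i}\, \frac{\partial}{\partial q^i}
      + \frac{\partial f}{\partial q^i}\, \frac{\partial}{\partial p_i} .
\]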
On the Photonic Cellular Interaction and the Electric Activity of Neurons in the Human Brain

The subject of Ultraweak Photon Emission (UPE) by biological systems is very fascinating, and both the evidence of its effects and its applications are growing rapidly due to improvements in experimental techniques. Since the relevant equipment must be ultrasensitive, with high quantum efficiencies and very low noise levels, the subject of UPE is still hotly debated and some of the interpretations need stronger empirical evidence to be accepted at face value. In this paper we first review different types of interactions between light and living systems based on recent publications. We then discuss the feasibility of UPE production in the human brain. The subject of UPE in the brain is still in early stages of development and needs more accurate experimental methods for proper analysis. In this work we also discuss a possible role of mitochondria in the production of UPE in the neurons of the brain and the plausibility of their effects on microtubules (MTs). MTs have been implicated as playing an important role in the signal and information processing taking place in the mammalian (especially human) brain. Finally, we provide a short discussion about the feasible effects of MTs on electric neural activity in the human brain.

Electromagnetic Radiation and Living Systems

The relation of biological systems and electromagnetic radiation can be discussed from different points of view. Some of the interesting interactions are as follows:

• Efficient excitation energy transfer of light by the photosynthetic system

Recently published experimental data on photosynthesis have provided support for the hypothesis that the system uses a nearly 100% efficient excitation energy transfer of light (which means almost without dissipation), and it is suggested that quantum coherence plays an important role in this mechanism [65]. This subject has attracted the attention of researchers representing physics and chemistry, especially quantum information theorists who aim to find out how quantum coherence makes the system so efficient. These inquiries have resulted in quantum biology becoming a very popular topic in recent years.

• Response of mammalian cells to near-infrared light

In a series of studies spanning a period of some 25 years, G. Albrecht-Buehler (AB) demonstrated that living cells somehow have a molecular analogue of an eye which can process light information and react in an intelligent manner [66-68]. In his studies, microtubular structures, especially centrioles, have been identified as the main candidates for light information processors [68]. He further showed that electromagnetic signals are the triggers for cell repositioning in physical space. It is still largely a mystery how the reception of electromagnetic radiation is accomplished by the centriole. Another mystery related to these observations is the origin of the electromagnetic radiation emitted by a living cell [69]. Using pulsating infra-red signals scattered off plastic beads, AB mimicked the effects of the presence of another living cell in the neighbourhood. The question that still remains unanswered, and which we address here, is the source of the infra-red radiation, speculated by AB to originate in the mitochondria and later demonstrated to be correct using quantum mechanical arguments [69].

• Production of light by living systems

Photon emission by biological systems can be produced by different mechanisms.
In general, light emission can be classified into three groups: (1) induced light emission, (2) spontaneous light emission and (3) black-body radiation. Here, we discuss a subclass of the second group, which is called Ultraweak Photon Emission (UPE). All living cells of plants, animals and humans continuously and spontaneously emit ultraweak photons (ultraweak electromagnetic waves) in the optical range of the spectrum; this emission is associated with their physiological states and can be measured with specific experimental techniques [57]. In different literature sources, UPE is referred to by different names such as ultraweak emission, biophotons, ultraweak bioluminescence, self-bioluminescent emission, photoluminescence, delayed luminescence, ultraweak luminescence, spontaneous chemiluminescence, ultraweak glow, biochemiluminescence, metabolic chemiluminescence, dark photobiochemistry and bioluminescence.

• Transmission of light by living systems

Recently, Sun et al. [70] demonstrated that a single neuron can conduct photon signals. Moreover, Wang et al. [71] presented experimental proof of the existence of spontaneous and visible-light-induced UPE from freshly isolated rat whole eye, lens, vitreous humor and retina [71].

• Bio-communication

There is growing experimental evidence that cells and tissues may interact over distances even when chemically isolated, most likely via electromagnetic fields [51]. Stemming from the pioneering experiments of Gurwitsch in the 1920s [52], several researchers have confirmed that cellular interactions can be mediated by electromagnetic fields, e.g. see [53-56]. The overwhelming majority of the experiments on electromagnetic cellular interactions examined the optical region. For a review of the historical and recent theories and experiments on electromagnetic cellular interactions, see [51].

UPE emission inside neurons

There are experimental indications that reactive oxygen and nitrogen species (ROS and RNS) are responsible for UPE production in living systems [42,43] and are also necessary for synaptic processes and normal brain functions. Numerous findings have provided evidence of fundamental signaling roles of ROS and RNS in cellular processes under physiological conditions. Free radicals and their derivatives act as signaling molecules in the cerebral circulation and are necessary for molecular signaling processes in the brain such as synaptic plasticity, neurotransmitter release, hippocampal long-term potentiation, memory formation, etc. [57-63]. Recently, Bókkon et al. put forward a molecular hypothesis about biophysical picture representation (intrinsic biophysical virtual visual reality), which states that external photonic signals from an object are converted into electrical signals within the retina, conveyed to V1, and transformed into regulated UPE via redox processes inside V1 neurons [42,43]. Accordingly, spike-related retinotopic electrical signals, traveling along classical axonal-dendritic pathways, can produce synchronized biophotonic signals by redox processes within synchronized retinotopic V1 neurons. In this model, small groups of retinotopic visual neurons can function as visual pixels appropriate to the topological distribution of photonic signals on the retina. As a result, we can get an inherent biophysical picture of the object generated by UPE in early retinotopic V1 during visual perception and imagery [43,44]. This novel biophysical hypothesis may revive Kosslyn's depictive theory [45] and the homunculus (mind's eye) hypothesis [46].
Now the question arises: how can this hypothesis be supported experimentally? It should be noted that the visual circuits that are normally involved in the detection of visual perception features are also responsible for the generation of phosphene light perception [43,48]. Recently, Wang et al. [49] presented the first experimental evidence for the existence of spontaneous and visible-light-induced UPE from in vitro freshly isolated rat whole eye, lens, vitreous humor and retina. In addition, Dotta and Persinger [50] recently measured significant increases in biophoton emission from near the right hemisphere, but not the left, for most volunteers when they imagined a white light in a dark room compared with simply casual thinking. These results support the above notion of biophysical picture representation [42,43] and also indicate a more essential role of the right hemisphere in visual imagery.

Toward coherent states in biological systems?

Biological systems operate within the framework of irreversible thermodynamics and the nonlinear kinetic theory of open systems, both of which are based on the principles of non-equilibrium statistical mechanics. The search for physically-based fundamental models in biology that can provide a conceptual bridge between the chemical organization of living organisms and the phenomenal states of life and experience has generated a vigorous and so far unresolved debate [1,2]. Recently published experimental evidence has provided support for the hypothesis that biological systems use some type of quantum coherence in their functions. The nearly 100% efficient excitation energy transfer in photosynthesis is an excellent example [3]. Quantum coherence is a plausible mechanism responsible for the efficiency and co-ordination exhibited by biological systems. The hypothesis invoking long-range coherence in biological systems was proposed by H. Fröhlich [4-6] and followed by detailed investigations by Tuszynski et al. [7-21], Pokorný [22-24], Mesquita et al. [25-28] and others for over three decades. The possible role played by coherent states manifested outside low-temperature physics has attracted considerable interest in both the physics and biology communities. The original Fröhlich model was very general and did not limit the mechanism of biological coherence to any particular cellular structure. In his model, when the energy supply exceeds a critical level, the dipolar ensemble of biologically relevant molecules populates a steady state of non-linear vibrations characterized by a high degree of structural and functional order [51]. This (electrically polarized) ordered state expresses itself in terms of long-range phase correlations, which are physically similar to such phenomena as superconductivity and superfluidity, where the behaviour of particles is collective and inseparable.

The Wu-Austin Hamiltonian

There are different approaches that can be adopted in the analysis of coherent state generation in biological systems based on Fröhlich coherent states, as described in the works of Mesquita et al. [25-27]. The Wu-Austin Hamiltonian [29-31] is the basis of a quantum mechanical approach to Fröhlich coherent states. Bolterauer and Ludwig [72] investigated the thermodynamics of the Wu-Austin system quantum mechanically and have shown that, even without pumping, their Hamiltonian can give rise to Bose condensation.
However, the Wu-Austin Hamiltonian has the unphysical property of having no finite ground state. Turcu [32] obtained a master equation for the Fröhlich rate equations. The main aim of his work was to show that there is a rich family of Hamiltonians, modeling the pump and the thermal bath differently, from which the same Fröhlich-like rate equations can be obtained. We believe that the system of neuronal MTs is a good candidate for being described by one of these Hamiltonians. MTs are composed of tubulins, which can be considered as biological electric dipoles. Pokorný provided a detailed analysis of the coherent states in MTs and experimentally observed resonance effects in MTs in the 10 MHz range [22-24].

Criticism of coherent states in living systems

Recently, Reimers et al. [2] have shown that only a very fragile Fröhlich coherent state may occur at sufficiently high temperatures and concluded that there is no possibility for the existence of Fröhlich coherent states in biological systems. They also provided several diagrams in terms of an effective temperature, defined by the authors as $T_{\mathrm{eff}} = T_s/T$, where $T_s$ is the temperature of the system and $T$ is the temperature of the thermal bath. Physically, the parameter is wrong because a temperature ratio is a unitless quantity, not a quantity with the unit of temperature. They have used the effective temperature parameter for the Wu-Austin Hamiltonian [29-31] and considered it in the high-temperature limit. Their diagrams are mostly based on the effective temperature parameter and hence are, in our opinion, not acceptable due to the self-contradictory arguments used in their derivations. For more details see [73]. In fact, the criticism raised by Reimers et al. [2,37] is mainly directed against the so-called Orch OR model, which was proposed by Penrose and Hameroff to introduce a physical basis for consciousness. In some formulations of the Orch OR model, a manifestation of quantum coherence involved Fröhlich coherent states in MTs [33-35]. MTs are highly ordered in the neurons of the brain and can indeed be regarded as providing support for Fröhlich coherent states. In this context, the conclusions of our discussion above also apply to MTs. Therefore, we believe that it is still hypothetically possible to generate Fröhlich coherent states in MTs.

However, another issue that arises when considering quantum states for MTs is the rapid decoherence problem. The question is: how is it possible for MTs to be in a coherent state while the environment is relatively hot, wet and noisy? According to decoherence theories, sufficiently strong interactions with the environment cause decoherence, which destroys quantum effects [36]. For macroscopic particles there are two main natural ways of experiencing this decoherence: first, decoherence due to collisions with other particles, and second, the thermal emission of radiation due to the internal heat of an object [38,41]. Tegmark [39] has calculated decoherence times for MTs; the problem with his formula (equation (7) of the original derivation) is that it yields decoherence times that increase with temperature, contrary to well-established physical laws and the behavior of quantum coherent states. In view of these (and other) problems in Tegmark's estimates, Hagan et al. [40] assert that the values of the quantities in Tegmark's relation are incorrect and thus the decoherence time should be approximately $10^{10}$ times larger, leading to a millisecond range of values for typical decoherence times.
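To put rough numbers on this disagreement (assuming Tegmark's often-quoted microtubule estimate of about $10^{-13}$ s as the baseline, a value not stated explicitly in the text above), the correction claimed by Hagan et al. works out to

\[
\tau_{\mathrm{dec}} \approx 10^{10} \times 10^{-13}\,\mathrm{s} = 10^{-3}\,\mathrm{s},
\]

i.e. precisely the millisecond range quoted above.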
According to Hagan et al., MTs in neurons could possibly avoid decoherence via several mechanisms, allowing quantum processing to occur there. Tegmark introduced a function for the decoherence rate [47] which is composed of two parts: one for short wavelengths and the other for long wavelengths. Every scattering calculation based on the Coulomb interaction and Tegmark's decoherence rate function leads to decoherence times that grow with temperature according to relations such as $\tau_{\mathrm{dec}} \propto \sqrt{T}$, $\tau_{\mathrm{dec}} \propto \sqrt{T^2}$, $\tau_{\mathrm{dec}} \propto \sqrt{T^3}$, etc. Therefore, it can be expected that subsequent calculations based on these criteria are flawed in the high-temperature limit: as temperature approaches infinity, the decoherence time increases too, and as temperature approaches absolute zero, the decoherence time approaches zero, which is a very unphysical conclusion.

Microtubules and centrioles

MTs are biological hollow cylinders with a 17-nm inner diameter and a 25-nm outer diameter (see Figure 1), composed of units called tubulin dimers, each of which has the dimensions 4 nm × 8 nm × 6 nm [57]. MTs have been implicated as playing an important role in the signal and information processing taking place in the mammalian, and especially human, brain. Earlier, MTs have been considered as optical cavities [74] with quantum properties [75]. Centrioles are built from perpendicular sets of MT triplets (see Figure 2) [57]. Albrecht-Buehler has demonstrated that living cells possess a spatial orientation mechanism located in the centriole. This is based on an intricate arrangement of MT filaments in two sets of nine triplets, each of which is perpendicular to the other. This arrangement provides the cell with a primitive eye that allows it to locate the position of other cells within two to three degrees of angular accuracy in the azimuthal plane, and with the same accuracy with respect to the axis perpendicular to it [66].

Mitochondria and Microtubules

Both mitochondria and microtubules can form dynamic networks in neurons. Moreover, the refractive index of both mitochondria and microtubules is higher than that of the surrounding cytoplasm, with the consequence that mitochondria and microtubules can act as optical waveguides, i.e. electromagnetic radiation (UPE) can propagate within their networks [44,64] (a rough numerical illustration follows below). Regulated UPE (from mitochondrial radicals and excited molecules) can induce polymerization of microtubules. Then, according to the quality of the absorbed UPE from mitochondria, microtubules can transport mitochondria in accordance with information processes in cells and neurons. There can be a mutual cross-talk/regulation between mitochondria and microtubules via redox and free radical processes [44].

MT dynamics and the electric neural activity of neurons

Electrodynamic interactions between various cytoskeletal structures, with MTs playing a central role, and ion channels crucially regulate the neural information-processing mechanism. These interactions involve long-range ionic wave propagation along microtubule networks (MTNs) and actin filaments (AFs), and exhibit subcellular control of ionic channel activity. Hence, they have an impact on the computational capabilities of the entire neural function. Cytoskeletal biopolymers, most importantly AFs and MTs, constitute the basis for wave propagation and interact with membrane components, leading to a modulation of synaptic connections and membrane ion channels. Association of MTs with AFs in neuronal filopodia guides MT growth and affects neurite initiation [57].
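As a rough illustration of the waveguide claim above, total internal reflection keeps light guided inside a higher-index core whenever the internal angle of incidence exceeds the critical angle set by the two refractive indices. The numerical index values below are illustrative assumptions (order-of-magnitude literature figures for cytoplasm and dense organelles), not measurements reported in this paper:

```python
import math

def critical_angle_deg(n_core: float, n_clad: float) -> float:
    """Critical angle for total internal reflection at a core/cladding
    interface; rays striking the wall above this angle stay guided."""
    if n_core <= n_clad:
        raise ValueError("guiding requires n_core > n_clad")
    return math.degrees(math.asin(n_clad / n_core))

# Assumed indices: cytoplasm ~ 1.35 (cladding), mitochondrion/MT ~ 1.40 (core)
theta_c = critical_angle_deg(n_core=1.40, n_clad=1.35)
print(f"critical angle ~ {theta_c:.1f} degrees")
# ~ 74.6 degrees, so only near-axial rays are guided: a weak but
# non-zero waveguiding effect, consistent with the qualitative claim.
```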
Electric signaling by AFs and MTs may play active roles in coincidence detection and in the storage of spatiotemporal patterns of inputs, and signaling within the cytoskeleton may be particularly critical to information storage over time scales longer than LTP times. The initial route to the MT network could be through the AFs concentrated in the spines. Inputs to arbitrary sites in the neuron can be transmitted from the neuronal membrane to AFs in spines via scaffolding proteins and signal transduction molecules. Electric signals can then be transmitted via AF cross-linker proteins to MTs, and subsequently through microtubule-associated proteins (MAPs) and signal transduction molecules to other MTs in the network [57].

Conclusion

It has been shown that the intensity of UPE is in direct correlation with neural activity, cerebral energy metabolism, EEG (electroencephalography) activity, cerebral blood flow and oxidative processes [76,77]. From a theoretical point of view, the interaction of mitochondrial UPE and MTs can take the MTs into coherent states. The synchronous and coherent vibrations of billions of electric dipoles of biomolecules cannot be ignored in EEG diagrams. MTs are particularly numerous in the brain, where they form highly ordered bundles, and are the best candidate for long-range coherence and large-scale synchrony [57]. In addition to the electrical and chemical signals propagating in the neurons of the brain, signal propagation may take place in the form of UPE too. We believe that the role of UPE in the brain merits special attention (see [57]).
Risk factors for hospital mortality in valve replacement with porcine bioprosthesis at a university institution

Introduction

Valve replacement surgery is the accepted treatment for structural heart valve disease, representing approximately 20% of all cardiac surgeries performed and accounting for 30% of total surgical mortality [1]. The mortality rate reported in the literature for this type of surgery ranges from 1% to 15%, regardless of the type of prosthesis implanted [2][3][4][5][6][7][8][9]. This variation is explained by differences in the demographic and clinical characteristics of the patients considered for surgery, in the surgical techniques, in the position of the valve implantation, in the associated surgical procedures [9,10] and in the postoperative care. Retrospective studies with large numbers of patients have been performed to identify characteristics that may affect the surgical outcome and to create models of individual risk stratification for different institutions [2][3][4][5][11]. The importance of these studies lies in the prospect of identifying patients at increased surgical risk by assessing their demographic, clinical and operative characteristics, and then neutralizing or minimizing the risk factors in order to reduce surgical mortality and morbidity, as well as the cost of care [12]. About 500 valve surgeries are performed annually at the Institute of Cardiology of Rio Grande do Sul. Porcine bioprostheses are used in approximately 40% of the patients who undergo implantation of biological valve replacements; however, the results of these procedures had not been evaluated, unlike those of surgical valve replacement with bovine pericardial prostheses [13,14] and mechanical prostheses [15], whose analysis made it possible to stratify the surgical risk of the implant and decrease the operative mortality. The objective of this study is to characterize the population of patients undergoing implantation of a porcine biological valve prosthesis model at the Institute of Cardiology of Rio Grande do Sul, to evaluate deaths, and to identify risk factors for hospital mortality.

Valve surgery

Surgical procedures and postoperative care were performed according to previously described routines. All patients underwent surgery with cardiopulmonary bypass, membrane oxygenation, variable levels of hemodilution and hypothermia, and myocardial preservation by hypothermic crystalloid cardioplegia with St. Thomas II solution. After surgery, the patients were taken to the recovery room, where they received intensive care for at least 24 hours; the patients were typically discharged on the fifth postoperative day [16]. After discharge, patients were referred to their attending physician or were followed up at the institution's outpatient clinic.

Outcomes and definition of risk factors

Deaths during hospitalization for surgical valve replacement with porcine bioprosthesis were considered the primary outcome.
Deaths were classified according to the predominant factor into: surgical causes (such as bleeding), cardiac causes (such as acute myocardial infarction and heart failure) or non-cardiac causes (such as infection and nervous, renal and pulmonary complications). The demographic, clinical and operative characteristics analyzed were: gender, age, functional class (according to the model proposed by the NYHA), LVEF, CHF, atrial fibrillation, SAH, pulmonary arterial hypertension (pulmonary artery systolic pressure greater than 100 mmHg), DM, serum creatinine, previous cardiac surgery, valvular lesion (mitral, aortic or mitral-aortic), associated CABG, associated tricuspid valve procedure and reoperation during hospitalization. The characteristics associated with increased hospital mortality were considered predictors of risk.

Ethical considerations

This research project was submitted to the research unit of the Institute of Cardiology of Rio Grande do Sul and was approved by the Institute's Research Ethics Committee, registered under No. 3734/05. Norms related to patient privacy and confidentiality in the handling of medical information were respected. The data used in this study were obtained from records of the Department of Cardiovascular Surgery and from hospital records.

Collecting and analyzing data

This research comprised four phases: selection of patients, chart review with data logging, tabulation of data, and statistical analysis. The latter included the distribution of the demographic, clinical and operative characteristics in the study population, determination of the percentage of deaths, the association of mortality with the selected features, and the identification of risk factors for hospital mortality. We used univariate and multivariate statistical analysis in SPSS for Windows, version 14.0, to determine prevailing and independent predictors of hospital mortality risk. For this, the chi-square test, Student's t test and logistic regression were used. In the multivariate analysis, the variables were used in the form that had the greater discriminatory power. All characteristics significant (P ≤ 0.05) in the univariate analysis were considered for the multivariate analysis. We considered as risk characteristics those with a significant association with hospital mortality at an alpha level of 0.05. The odds ratio (OR) with a 95% confidence interval was obtained by logistic regression analysis to estimate the relative risk of each analyzed characteristic.

Characterization of the valve disease

Among the 808 patients included in this study, 65 (8%) had rheumatic valvular disease and 14 (1.7%) had congenital valve alterations, of which the bicuspid aortic valve was the most common; 31 (3.8%) patients had valve lesions caused by infective endocarditis and 14 (1.7%) had ischemic disease; 684 (84.6%) patients did not have the etiology of the valve lesion identified in their medical record.

Hospital mortality

There were 80 (9.9%) deaths. As for the causes of death, 10% were due to surgical causes, 46% to cardiac causes and 44% to non-cardiac causes. Table 1 shows the demographic, clinical and surgical characteristics analyzed, their distribution in the study population and their association with hospital mortality. These variables were significantly associated (P < 0.05) with increased hospital mortality, except for reoperation during hospital admission (P = 0.064, ns).
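As an aside on the statistical approach described above, the sketch below shows how odds ratios and 95% confidence intervals of this kind can be obtained from a logistic regression. It uses Python's statsmodels rather than the SPSS package used in the study, and the variable names and simulated data are purely illustrative, not the study dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

np.random.seed(0)

# Placeholder data frame: one row per patient, binary predictors and outcome.
# Column names are illustrative only; they do not come from the study dataset.
df = pd.DataFrame({
    "death":      np.random.binomial(1, 0.1, 500),
    "mitral":     np.random.binomial(1, 0.4, 500),
    "lvef_lt_30": np.random.binomial(1, 0.05, 500),
    "age_ge_70":  np.random.binomial(1, 0.3, 500),
})

X = sm.add_constant(df[["mitral", "lvef_lt_30", "age_ge_70"]])
fit = sm.Logit(df["death"], X).fit(disp=0)

# Odds ratios and 95% confidence intervals: exponentiate the coefficients.
or_table = pd.DataFrame({
    "OR": np.exp(fit.params),
    "CI 2.5%": np.exp(fit.conf_int()[0]),
    "CI 97.5%": np.exp(fit.conf_int()[1]),
})
print(or_table.drop(index="const"))
```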
The characteristics associated with the greatest absolute mortality were the associated procedure of tricuspid valve repair (38.1%), LVEF less than 30% (27.8%) and the presence of mitral valve disease (21.2%), as can be noted in Table 1.

Risk factors

In order to increase the discriminatory power of the statistical analysis, the variables with multiple categories (age, functional class, LVEF, heart valve lesion and previous cardiac surgery) were transformed into dichotomous variables; their distribution and association with hospital mortality are shown in Table 2.

Estimating the relative risk

OR values were obtained by logistic regression analysis in order to estimate the relative risk of the characteristics considered. Table 3 shows the OR values and their respective 95% confidence intervals (95% CI). Risk factors for hospital mortality with higher OR (OR > 3) were age groups above 60 years (variable OR, but greater than 3), associated tricuspid valve repair (OR 6.111, 95% CI 2.451 to 15.235), mitral valve lesion (OR 3.984, 95% CI 2.481 to 6.396) and LVEF less than 30% (OR 3.824, 95% CI 1.323 to 11.048), although other characteristics demonstrated OR > 1, a value considered significant.

Independent risk factors

The characteristics that were significantly associated with increased hospital mortality in the univariate analysis were considered for the multivariate analysis, which sought to identify independent risk factors. The variables were used in the dichotomous form, which showed greater discriminatory power in the statistical analysis. Multiple logistic regression was used with the backward stepwise method, with a P value of 0.05 for entry and 0.10 for removal, leaving in the last step of the method the characteristics expressed in decreasing OR, headed by mitral valve disease (Figure 1).

Discussion

The identification of risk factors for patients undergoing valve replacement surgery has been studied for over 20 years [17]. The quantification of the identified factors and their neutralization by clinical and operative measures have decreased the risk of surgery [18]. Patients with severe valvular disease but minor systemic repercussions are now being considered for surgery, reflecting the tendency to intervene earlier in the course of the disease, which results in a lower prevalence/intensity of recognized risk factors and, thus, in lower hospital mortality [19]. However, even if the influence of some demographic or operative characteristics that in the past increased surgical mortality and morbidity can now be minimized, the progressively broader surgical indication for older patients (with more comorbidities) observed in different surgical series can also change the profile of patients considered for valve surgery [20]. Thus, the periodic study of risk factors is justified to keep this subject up to date. The study of risk factors begins with the selection of the demographic and surgical characteristics that describe the population evaluated and the procedures performed. Overall, we can state that surgical experience confirms the influence of characteristics such as advanced age, low body mass index, renal insufficiency, low LVEF, indication for emergency surgery, previous heart surgery and others on the increased in-hospital mortality of patients with valvular heart diseases, and these must receive greater attention from the physicians involved in their clinical and surgical management [21][22][23].
In this research, we used characteristics recognized in the literature [3,4,9,17,18], focusing on those presented by Ambler et al. [2]. This choice is justified by the ready availability of this medical information as part of the hospital record, and also because these characteristics had been used previously by the authors [13][14][15]. We opted to include pulmonary arterial hypertension as an additional factor, but other recognized factors were excluded, such as chronic obstructive pulmonary disease and peripheral vascular disease [3], which were not always correctly reported or quantified in the hospital records. The risk factors identified were female gender, age greater than or equal to 70 years, NYHA functional class III and IV, LVEF less than 30%, congestive heart failure, atrial fibrillation, hypertension, pulmonary hypertension, diabetes, serum creatinine greater than or equal to 1.4 mg/dL, mitral valve disease, previous cardiac surgery and associated CABG or tricuspid valve procedure. It is interesting to note that these factors contribute their own scores in the risk stratification model for heart valve surgery proposed by Ambler et al. [2]. These authors highlight previous cardiac surgery (regardless of type), emergency surgery, age over 79 years and renal failure with dialysis as strong predictors of increased mortality. For Nowicki et al. [24], in a study on independent risk factors for surgical aortic valve replacement, previous heart surgery represented a risk factor, together with age over 70 years, small body surface, elevated creatinine, NYHA class IV, previous cardiac arrest, CHF, AF, emergency situations and associated MR. For mitral valve surgery, the statistically significant characteristics were: female sex, advanced age, DM, CABG, previous cerebrovascular accident, elevated creatinine, NYHA class IV, emergency situations and CHF. Roques et al. [25], in the EuroSCORE study, which constitutes a score-based predictor of hospital mortality, found that previous heart surgery and concomitant CABG were associated with increased surgical risk. Other variables significantly associated with high mortality were: advanced age, elevated creatinine, low LVEF, heart failure, pulmonary hypertension, emergency situations, and multiple valve replacement or an associated tricuspid procedure. Edwards et al. [26] identified emergency situations, renal failure, cardiac arrest and the need for reoperation as independent risk factors for isolated valve replacement surgery. Reoperation was also identified by Jamieson et al. [3], along with emergency surgery, renal failure (whether or not on dialysis), low LVEF, and NYHA functional class IV. The need for reoperation during hospitalization was not identified as a risk factor in this study. The use of the odds ratio (OR) as a resource for statistical analysis made it possible to estimate the surgical risk determined by each of the evaluated characteristics [27]. The predictors of increased risk in this study, in descending order, were, among the clinical characteristics, LVEF below 30%, DM, AF and pulmonary hypertension, and, among the surgical characteristics, concomitant tricuspid valve surgery, mitral valve lesion and previous heart surgery. Interestingly, age greater than or equal to 70 years, while contributing to increased mortality, showed comparatively low OR values when compared with the other factors.
Although elderly patients with valvular heart diseases may show more severe cardiac or systemic involvement (and comorbidities may contribute individually as risk factors), it is difficult to deny them surgical treatment, so specific perioperative care should be developed. Such care has been reducing mortality, as reported in surgical series of patients over the age of 70 [28] or 80 [29]. It is possible that the diffusion of percutaneous valve interventions may modify the surgical indication for older patients and may help to reduce surgical mortality. The importance of age as a risk factor is nevertheless illustrated when comparing the current results with those of a study conducted by the authors on the hospital risk of mechanical valve prosthesis implantation [15], in which the observed hospital mortality was 3.9%, against 9.9% in the present series. It is possible that several demographic characteristics determine this difference in mortality, taking into account the mean age of patients referred for mechanical prosthesis implantation and for bioprosthesis implantation, which was higher in the latter group (46.8 years and 66.5 years, respectively). Studies comparing the results of bioprosthesis and mechanical prosthesis implantation in populations with overlapping clinical characteristics, similar to that performed by Feguri et al. [30], could determine whether the observed differences in mortality and risk factors are due to the type of valve replacement or to the characteristics of the populations with indications for the different cardiac valves.

Conclusions

The hospital mortality observed in this study (9.9%) is consistent with the literature. The risk factors for hospital mortality identified (associated tricuspid valve repair, mitral valve disease, LVEF less than 30%, DM, AF, pulmonary hypertension, serum creatinine greater than or equal to 1.4 mg/dL, previous heart surgery, SAH, NYHA functional class III and IV, associated CABG, age greater than or equal to 70 years, CHF and female sex) had already been reported by other authors. The possible neutralization of risk factors through changes in the criteria for surgical indication, better preoperative clinical compensation and changes in postoperative routines may contribute to the reduction of surgical morbidity and mortality, as well as of the costs of care.
Risk and predictors of hepatic decompensation in grey zone patients by the Baveno VII criteria: A competing risk analysis

Baveno VII was proposed for non-invasive identification of clinically significant portal hypertension. However, a substantial proportion of patients is classified in the grey zone (i.e., liver stiffness 15-24.9 kPa and/or platelet count <150 × 10⁹/L).

| INTRODUCTION

Compensated advanced chronic liver disease (cACLD) was proposed in the Baveno VI consensus to describe a spectrum of liver disease stages from advanced liver fibrosis to compensated cirrhosis, highlighting patients at risk of clinically significant portal hypertension (CSPH) and thus hepatic decompensation. 1 According to the recent Baveno VII consensus, liver stiffness measurement (LSM) by transient elastography (TE) <10 kPa can rule out cACLD, and LSM <15 kPa with platelet count ≥150 × 10⁹/L can rule out CSPH with >90% sensitivity and negative predictive value. In contrast, LSM ≥25 kPa can rule in CSPH with >90% specificity and positive predictive value, depicting a group of patients at high risk of decompensation. 2,3 However, a significant proportion (around 40%) of patients with cACLD are in the grey zone (i.e., LSM 15-24.9 kPa and/or platelet count <150 × 10⁹/L), in which the natural history and subsequent risk of hepatic decompensation are less defined. 3 A recent retrospective analysis notably suggested a divergent risk of hepatic decompensation among grey zone patients between viral and non-viral aetiologies. 4 As deaths from cirrhotic complications and hepatocellular carcinoma (HCC) have substantially increased over the past decades, 5 knowing the risk of hepatic decompensation in this large population of grey zone patients is of paramount importance. Spleen stiffness measurement (SSM) has emerged as a promising non-invasive marker of portal hypertension. 7,8 Studies showed that patients with SSM ≤46 kPa combined with the Baveno VI criteria (LSM <20 kPa and platelet count ≥150 × 10⁹/L) could confidently rule out high-risk varices, in whom an upper endoscopy could be spared, 9,10 and such a diagnostic algorithm has been incorporated in the latest Baveno VII consensus. 2 Dajti et al. 11 also proposed models using Baveno VII and SSM to identify cACLD patients at risk of hepatic decompensation. These new developments deserve validation in large representative cohorts. Hence, we aimed to evaluate the risk and predictors of hepatic decompensation in cACLD patients in the Baveno VII grey zone, as well as to validate the Baveno VII-SSM combination in predicting the risk of decompensation.

Three prospective cohorts of patients who had undergone TE examination for chronic liver disease were analysed: 5078, 5556 and 3242 adult patients older than 18 years were recruited at the Prince of Wales Hospital, Hong Kong, from August 2012 to March 2016; at Severance Hospital, South Korea, from April 2006 to January 2022; and at Hôpital Haut-Lévêque, France, from June 2003 to October 2021. The study baseline date was defined as the date of TE examination. We excluded patients with hepatic decompensation or HCC before baseline, patients who died or had hepatic decompensation/HCC within 6 months after baseline, and patients with an unreliable LSM or a missing platelet count for the evaluation of the Baveno VII criteria.

| Transient elastography

Details of the technical background and examination procedure of TE have been described. 12 All operators had performed at least 500 procedures before examining patients in this project. The FibroScan 502 machine (Echosens) was used. All patients were fasted for at least 2 h before the procedure. The final LSM (in kPa), SSM (in kPa) and controlled attenuation parameter (CAP) values (in dB/m) were represented by the median of ≥10 measurements. LSM was considered reliable only if at least 10 successful acquisitions were obtained and the interquartile range (IQR)-to-median ratio of the acquisitions was ≤0.3. Because the inception of this study predated the availability of the FibroScan 630 machine, SSM was measured at 50 MHz using the standard M probe. 13
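A minimal sketch of the reliability rule just described (median of at least 10 acquisitions with an IQR-to-median ratio ≤0.3); plain Python, with the function name and the example values being illustrative only:

```python
import statistics

def lsm_reliable(acquisitions_kpa, min_n=10, max_iqr_ratio=0.3):
    """Return (median, reliable) for a set of LSM acquisitions in kPa.

    Implements the criterion described in the text: at least `min_n`
    successful acquisitions and an IQR-to-median ratio <= `max_iqr_ratio`.
    """
    if len(acquisitions_kpa) < min_n:
        return None, False
    med = statistics.median(acquisitions_kpa)
    q = statistics.quantiles(acquisitions_kpa, n=4)  # quartile cut points
    iqr = q[2] - q[0]
    return med, (iqr / med) <= max_iqr_ratio

# Illustrative values only (not patient data):
median_kpa, ok = lsm_reliable(
    [16.1, 17.0, 15.8, 16.5, 17.2, 16.0, 15.9, 16.8, 17.1, 16.3])
print(median_kpa, ok)
```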
| Data collection

At baseline, demographic and laboratory data were collected. We also collected data on the aetiology of CLD, and on comorbidities including diabetes and hypertension. We used the albumin-bilirubin (ALBI) score for the evaluation of the underlying liver function, computed as $\text{ALBI} = 0.66 \times \log_{10}(\text{bilirubin}) - 0.085 \times \text{albumin}$, where bilirubin is in μmol/L and albumin in g/L.

| Outcome

The primary outcome was incident hepatic decompensation, which was defined as ascites, variceal bleeding, hepatic encephalopathy and/or cirrhotic complication-related mortality. The secondary outcome was incident HCC, as confirmed by histology or typical radiological features.

| Statistical analysis

All statistical analyses were performed using SPSS version 28.0 and R software (version 4.2.2). According to the Baveno VII criteria, patients with LSM <15 kPa and platelet count ≥150 × 10⁹/L were classified as the low-risk group (CSPH ruled out); patients with LSM ≥25 kPa were classified as the high-risk group (CSPH ruled in); and the remaining patients were in the grey zone. Continuous variables were expressed as mean ± SD or median (IQR), whereas categorical variables were presented as n (%). Qualitative and quantitative differences between subgroups were analysed by chi-square or Fisher's exact tests for categorical parameters, and by one-way analysis of variance and the Kruskal-Wallis test for continuous parameters.

Cumulative incidences of the primary and secondary outcomes, with adjustment for the risk of competing events, were estimated with 95% CI. Non-cirrhotic complication-related death and HCC were considered competing events for the primary outcome; non-HCC-related deaths were considered competing events for the secondary outcome. Gray's test was used to compare the cumulative incidences among the different Baveno VII groups.

We then determined the factors associated with hepatic decompensation in grey zone patients. On univariate and multivariable analysis, subdistribution hazard ratios (SHR) with 95% CI were estimated with Fine-Gray subdistribution hazards regression with adjustment for competing risk events. Only predictors significant in the univariate analysis were included in the multivariate analysis.

To study whether SSM improves the prediction of hepatic decompensation over the Baveno VII classifications, we performed a subgroup analysis in patients who had paired LSM and SSM measurements available. We validated the decompensation risk stratification of two different models (the Sequential Baveno VII-SSM Model and the Combined Baveno VII-SSM Model) based on LSM, SSM and platelet count, as proposed by Dajti et al. 11 The Sequential Baveno VII-SSM Model sequentially applies SSM cut-offs of <21 kPa and >50 kPa to rule out and rule in CSPH, respectively, in patients within the grey zone of the Baveno VII model. For the Combined Baveno VII-SSM Model, patients are classified as having a low risk of CSPH if at least two of the following criteria are present: LSM ≤15 kPa, platelet count ≥150 × 10⁹/L, SSM ≤40 kPa; and a high risk of CSPH if at least two of the following criteria are present: LSM >25 kPa, platelet count <150 × 10⁹/L, SSM >40 kPa.
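A compact sketch of the three classification rules just described (Baveno VII, and the two Baveno VII-SSM models); plain Python, with the function and label names chosen here purely for illustration:

```python
def baveno7(lsm, plt):
    """Baveno VII: low risk if LSM < 15 kPa and platelets >= 150e9/L,
    high risk if LSM >= 25 kPa, otherwise grey zone."""
    if lsm < 15 and plt >= 150:
        return "low"
    if lsm >= 25:
        return "high"
    return "grey"

def sequential_baveno7_ssm(lsm, plt, ssm):
    """Sequential model: apply the SSM cut-offs only inside the grey zone."""
    group = baveno7(lsm, plt)
    if group == "grey":
        if ssm < 21:
            return "low"
        if ssm > 50:
            return "high"
    return group

def combined_baveno7_ssm(lsm, plt, ssm):
    """Combined model: at least two of the three criteria on each side."""
    low_votes = (lsm <= 15) + (plt >= 150) + (ssm <= 40)
    high_votes = (lsm > 25) + (plt < 150) + (ssm > 40)
    if low_votes >= 2:
        return "low"
    if high_votes >= 2:
        return "high"
    return "grey"

# Illustrative patient: LSM 18 kPa, platelets 120e9/L, SSM 44 kPa.
print(baveno7(18, 120),
      sequential_baveno7_ssm(18, 120, 44),
      combined_baveno7_ssm(18, 120, 44))   # grey grey high
```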
| Patient characteristics

A total of 13,876 patients with chronic liver disease were recruited. We identified 2763 cACLD patients and 7895 non-cACLD (LSM <10 kPa) patients after excluding 3218 patients (Figure 1).

Figure 1. Study participant flow. cACLD, compensated advanced chronic liver disease; HCC, hepatocellular carcinoma; LSM, liver stiffness measurement; PLT, platelet.

The majority of the patients were male in all groups (Table 1). Compared with the low-risk group, the grey zone and high-risk groups included a higher proportion of patients with chronic viral hepatitis but a lower proportion of patients with non-alcoholic fatty liver disease and alcohol-related liver disease. Patients in the high-risk group were older and had higher levels of alanine aminotransferase (ALT), aspartate aminotransferase (AST), alkaline phosphatase (ALP), alpha-fetoprotein (AFP), ALBI score and LSM, but lower levels of albumin, platelet count and creatinine.

Table 1. Baseline characteristics of included patients.

The cumulative incidence of HCC was significantly higher in grey zone and high-risk patients than in the low-risk group (p < 0.001); however, there was no significant difference between the grey zone and high-risk groups (p = 0.125, Figure 2B). The cumulative incidence of HCC was significantly higher in low-risk patients than in non-cACLD patients (p < 0.001).

Figure 2. Incidence of (A) hepatic decompensation and (B) HCC in the different Baveno VII categories. HCC, hepatocellular carcinoma.

Based on the LSM and platelet count, patients in the grey zone could be further classified into three groups, that is, LSM 15-24.9 kPa, platelet count <150 × 10⁹/L, or both. There was no significant difference in the cumulative incidence of decompensation among the three groups (p = 0.152) (Figure 3). However, the cumulative incidence of HCC was highest in patients with both LSM 15-24.9 kPa and platelet count <150 × 10⁹/L, and lowest in those with platelet count <150 × 10⁹/L alone (p < 0.001). We then compared the incidence of decompensation and HCC by aetiology of CLD in grey zone patients (Figure S1). The cumulative incidence of decompensation was significantly higher in patients with alcohol-related liver disease (ALD) than in those with non-alcoholic fatty liver disease (NAFLD) or a viral aetiology. In contrast, the cumulative incidence of HCC was significantly higher in patients with viral hepatitis than in the other aetiologies.

Figure 3. Incidence of (A) hepatic decompensation and (B) HCC in the Baveno VII grey zone categories (N = 1243). HCC, hepatocellular carcinoma.

We then performed subgroup analyses in NAFLD and ALD. As presented in Table S1, higher ALP (SHR 1.71; 95% CI: 1.11-2.65; p = 0.016) was the only risk factor associated with decompensation in NAFLD. Higher AST (SHR 1.01; 95% CI: 1.00-1.01; p < 0.001) was significantly associated with hepatic decompensation in ALD patients. However, this might be a chance finding due to the low sample size in this subgroup analysis.

| Incident decompensation by Baveno VII criteria and spleen stiffness measurement

A total of 179 cACLD patients (all with confirmed cirrhosis) had paired LSM and SSM measurements (Table S2). The overall success rate of SSM was 86.6%. Patients with SSM >40 kPa were older and had higher levels of ALT, ALP, AFP, ALBI score and LSM, but lower levels of albumin, platelet count and creatinine than patients with SSM ≤40 kPa. We then compared the baseline characteristics of patients with and without SSM assessment (Table S3). Patients with SSM were older and more likely to be male and of viral aetiology. There was no significant difference in ALBI score or ALP level between patients with and without SSM.
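The cumulative incidences reported above are estimated with adjustment for competing risks. A minimal sketch of this type of estimate is given below, assuming the Python lifelines package; its Aalen-Johansen estimator stands in for the authors' actual R/SPSS workflow, and the data are placeholders, not study data.

```python
import numpy as np
import pandas as pd
from lifelines import AalenJohansenFitter  # assumed available; CIF with competing risks

np.random.seed(1)

# Placeholder data: follow-up in years and an event code
# 0 = censored, 1 = hepatic decompensation, 2 = competing event
# (e.g. non-cirrhotic complication-related death or HCC, as in the text).
df = pd.DataFrame({
    "years": np.random.exponential(5.0, 300),
    "event": np.random.choice([0, 1, 2], size=300, p=[0.80, 0.08, 0.12]),
})

ajf = AalenJohansenFitter()
ajf.fit(df["years"], df["event"], event_of_interest=1)

# Cumulative incidence of decompensation, adjusted for competing events:
print(ajf.cumulative_density_.tail())
```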
By the Baveno VII criteria, 75 (41.9%), 78 (43.6%) and 26 (14.5%) patients were in the low-risk, grey zone and high-risk groups, respectively. In the grey zone, 12 (15.3%) patients developed decompensation during follow-up (Table S4). Patients who developed decompensation had higher baseline SSM values than those without decompensation (Figure S2). The cumulative incidence of decompensation, but not of HCC, was significantly different among the Baveno VII low-risk, grey zone and high-risk groups (Figure S3).

Likewise, the Combined Baveno VII-SSM Model classified fewer patients in the grey zone, with 91 (50.8%), 23 (12.8%) and 65 (36.3%) patients in the low-risk, grey zone and high-risk groups, respectively. Four (17.3%) patients in the grey zone developed decompensation (Table S4 and Figure S4). Only 7 of the 23 decompensating events occurred in the high-risk group according to Baveno VII, whereas the high-risk zone of the combined model correctly identified the majority (16/23) of the patients who decompensated during follow-up. The Baveno VII criteria, the Sequential Baveno VII-SSM Model and the Combined Baveno VII-SSM Model could all clearly distinguish patients with different risks of hepatic decompensation (Figure S4).

| DISCUSSION

In the present study, we validated the Baveno VII criteria for the risk stratification of hepatic decompensation. The incidence of decompensation was highest in the Baveno VII high-risk group. The 5-year decompensation rate of the low-risk group was only 0.6% (95% CI: 0.2%-1.3%), which was similar to that of patients without cACLD. These patients could be safely excluded from the assessment of CSPH and its complications.

Since the publication of the Baveno VII consensus, it was immediately apparent that a significant proportion of patients would be classified in the grey or indeterminate zone, which is the main focus of the current study. Indeed, around 40% of the cACLD patients in this study were in the Baveno VII grey zone. We found that such patients had a 5-year incidence of decompensation of 4.2%, somewhere between the low- and high-risk groups, a risk that cannot be ignored nonetheless. In this situation, there are two possible approaches to handling patients in the grey zone. First, clinicians may treat grey zone and high-risk patients similarly; however, the two groups are clearly different. Alternatively, one should understand the predictors of decompensation in grey zone patients and consider a more refined risk stratification. In our competing risk analysis in grey zone patients (in which LSM and platelet count were not included in the model), the risk of decompensation in patients with ALD was two times that in patients with viral hepatitis; patients with ALD had the highest risk of decompensation compared with the NAFLD and viral aetiologies of CLD (Figure S1). This is in keeping with previous natural history studies showing a faster rate of disease progression among patients with ALD. 14 Virological suppression is now an achievable standard of care in viral-related cACLD patients; 75.4% of the viral hepatitis patients in our cohorts received antiviral treatment, which might explain their lower risk of decompensation.

The ALBI score was proposed for the assessment of liver dysfunction and prognosis in patients with HCC. 15 Several recent studies have shown that the ALBI score can accurately predict the severity and long-term prognosis of patients with cirrhosis, with a predictive performance superior to the model for end-stage liver disease (MELD) and MELD integrating sodium (MELD-Na) scores. 16,17 The ALBI score is a very simple score that evaluates only two objective parameters (albumin and bilirubin). In our study, the ALBI score was independently associated with incident decompensation in grey zone patients.

Elevated ALP was another predictor of decompensation in this study. ALP elevation is a common finding in liver cirrhosis due to distorted liver architecture. In NAFLD, a cholestatic pattern predicts cirrhotic complications and HCC. 18 On the other hand, the small number of patients with cholestatic liver disease in this study suggests that this is not the reason underlying the link between ALP and decompensation. Furthermore, ALP may originate from the liver or from bone. 19 Although hepatic osteodystrophy has been reported in cirrhosis, 20 evidence on its association with clinical outcomes is lacking. Future studies using ALP isoenzymes to determine the origin of ALP may shed light on the mechanism underlying this observation. (In this study, the upper limit of ALP was defined as 150 U/L for males and 140 U/L for females younger than 22 years, and 110 U/L for females or males ≥22 years.)

Based on our data, patients with a non-viral aetiology, a high ALBI score and elevated ALP had a higher risk of decompensation. Such patients should be prioritised for hepatic venous pressure gradient measurement (if available) to exclude CSPH, or for closer monitoring of hepatic decompensation and new complications. In our study, we also validated the continued superiority of the combined Baveno VII-SSM model over the Baveno VII criteria in the risk stratification of hepatic decompensation.

In addition, SSM could improve the prediction of hepatic decompensation over the Baveno VII classifications. Both SSM-based models, the Sequential Baveno VII-SSM Model and the Combined Baveno VII-SSM Model, could clearly distinguish patients with different risks of hepatic decompensation, with a lower proportion of patients classified in the grey zone compared with the original Baveno VII criteria. The risk of decompensation was similar in the high-risk group as defined by the original Baveno VII criteria and by the two SSM models. Several studies have shown the importance of SSM as a valuable tool for identifying CSPH and cirrhosis-related complications, and the Baveno VII consensus endorsed the use of SSM to improve risk stratification for CSPH and high-risk varices. In the study of Dajti et al., 11 the authors proposed the combined Baveno VII-SSM model for the identification of CSPH, which significantly reduced the proportion of patients classified into the grey zone by the Baveno VII criteria to 7%-15%, while maintaining adequate positive and negative predictive values.

Among the limitations of this study, the definition of cACLD was based on LSM, which might be confounded by obesity, liver congestion or operator experience. Moreover, LSM was only measured at baseline; future studies should determine the prognostic significance of serial assessments.

In conclusion, patients in the grey zone of the Baveno VII criteria remain at high risk of hepatic decompensation. Clinical risk factors and spleen stiffness can further stratify the risk in such patients.
A New Approach of Modelling Bottom Edge Cutting in 4-Axis Rough Milling of Complex Parts and Its Application on Feed Rate Optimization

Complex mechanical parts such as blisks of aero-engines are commonly used in the aerospace industry. These parts are complex in shape and their rough machining is conducted in 4-axis machine tools with end mills. The end mills are fully engaged in the workpiece material to be removed. Because of the complex cutter motion in 4-axis milling, the bottom edges of the end mills are involved in cutting with high probability, resulting in an undesirable increase of cutting forces, tool deflection, and quick tool wear. To address this technical challenge, an analytical method is proposed in this work to identify and evaluate the bottom edge cutting in 4-axis milling. The motion of the cutter's tool tip with respect to the workpiece is analyzed and the equations are formulated based on a basic interpolation algorithm. An approach to identifying and evaluating the bottom edge cutting is proposed. The increment of the cutting forces caused by the bottom edge cutting is taken into consideration to precisely evaluate the overall cutting forces. A feed rate optimization model is then established to control the cutting forces. The simulation and the experiment of rough milling of a blisk verify that the bottom edge cutting can be identified and that the cutting force can be controlled by optimizing the feed rates without losing much machining efficiency.

Introduction

Complex mechanical parts such as blisks of aero-engines are commonly used in the aeronautic and astronautic industries. These parts are complex in shape and their rough machining is conducted in 4-axis Computer Numerically Controlled (CNC) machine tools, usually with end mills. The end mills are fully engaged in the workpiece material to be removed (see Figure 1). Because of the complex cutter motion in 4-axis milling, the bottom edges of the end mills are engaged in cutting with high probability, resulting in an undesirable increase of cutting forces, tool deflection, quick tool wear, and even tool breakage. Therefore, it is crucial to identify and formulate the bottom edge cutting, and to reduce this undesirable effect during milling. The bottom edges of cutters engage in cutting in many machining processes, such as orbital drilling and plunge milling. Kong et al. [1] pointed out that the cutting edge on the bottom of the tool is the main cause of the cutting force in orbital drilling. Tian et al. [2] established a mathematical model to simulate the cutting depths and the volume removed by the bottom cutting edges in the helical milling process. They found that the undeformed chip geometry is affected significantly and can be optimized through the helical milling parameters to obtain a good cutting condition. Francesco et al. [3] developed a new approach to measure and compute the cutting force coefficients for end mills used in plunge milling. The method is important for predicting chatter conditions. Fredj et al. [4] found that augmentation of the chip cross-section is the cause of the increase of the cutting forces in deep plunge milling.
Figure 1. A flat end mill is fully engaged in the material of the workpiece in the rough milling of a blisk.

The bottom edge cutting in rough milling with multi-axis machine tools has, however, come increasingly into focus in recent years. Zhu et al. [5] argued that when the lead angle of the tool axis is negative, the mechanistic model loses accuracy if the bottom edge cutting effect is neglected, and proposed an improved mechanistic model of five-axis machining with a flat end mill. Wan et al. [6][7][8] proved, with carefully calibrated cutting force coefficients and extended experiments, that the influence of the bottom edge cutting is not negligible if the axial cutting depth is relatively small. Their work is limited to two-dimensional milling. Because of the additional bottom edge-induced cutting forces, conservative machining parameters are usually adopted to avoid undesirable machining defects, causing low machining efficiency [9]. During the rough milling of complex parts, cutting parameters need to be carefully determined to achieve the objectives of rough machining. The cutting parameters include feed rate, spindle speed, width and depth of cut, etc., and the objectives can differ, such as minimum machining time, maximum material removal rate, and maximum uniformity of the remaining volume at the end of roughing [10]. The cutting forces, moreover, need to be controlled to protect cutters and maintain stable machining. Many optimization methodologies are used in CNC machining to provide optimal process parameters [11]. Since a reduction in feed rate is an effective method to control cutting forces, Li et al. [12] divided a tool path into segments and then employed a heuristic method to optimize the feed rate constrained by the milling forces. Fu et al. [13] established a mapping relation between feed rate and cutting forces, with the objective of optimizing the feed rate to obtain better surface quality. Unfortunately, the bottom edge cutting in 4-axis rough machining has not been successfully addressed in these existing research works. To address this technical challenge, an analytical method to identify and evaluate the bottom edge cutting in 4-axis milling is proposed in this paper. In Section 2, the motion of the cutter's tool tip with respect to the workpiece is analyzed and formulated based on the basic interpolation algorithm, and a new approach to identify and evaluate the cutter's bottom edge cutting is proposed. Section 3 establishes a feed rate optimization model, in which the increment of the cutting forces caused by the bottom edge cutting is taken into consideration to precisely evaluate the overall cutting forces. In Section 4, the simulation and the experiment of rough milling a blisk are presented to verify this new approach.
Formula of Bottom Edge Cutting of Flat End Mills in 4-Axis Milling

In the aerospace industry, the rough milling of complex parts is conducted in multi-axis machine tools, and the CNC controllers of the machine tools use different CNC interpolation algorithms. In order to evaluate the bottom edge cutting with high fidelity, the instantaneous cutter positions and orientations (or cutter locations) in machining should be accurately computed by using the actual machine kinematics and the interpolation algorithm of the CNC controller. Unfortunately, most conventional methods calculate cutter locations approximately, without taking the interpolation algorithm into consideration, so that large deviations from the actual locations are inevitable. This work adopts a 4-axis horizontal machine tool (with X-, Y-, Z- and B-axes) and a basic 4-axis constant-acceleration CNC interpolation algorithm as an example. The methodology of this work can be applied to 4-axis machines with different CNC controllers. The main steps of formulating the bottom edge cutting are: (1) according to two cutter locations in a step of an NC program, several instantaneous cutter locations in the machine coordinate system are sampled and calculated by using the constant-acceleration interpolation algorithm; (2) the instantaneous cutter locations are converted into the workpiece coordinate system by using the machine kinematics; and (3) the feed rate of the tool tip is formulated in the workpiece coordinate system and adopted to evaluate the bottom edge cutting. The technical details are given as follows.

Representation of Instantaneous Cutter Locations Using the Basic Interpolation Algorithm

To cut the parts on a 4-axis machine tool, discrete cutter locations of tool paths are preplanned using CAM software, in which the cutter positions (X-, Y- and Z-coordinates) and orientations (B-coordinate) are calculated, and these Cutter Location Data (CLData) are translated into G-code fed to the CNC controller. The feed rate for each cutter location is preplanned as well. In milling, the CNC controller interpolates many instantaneous cutter locations between consecutive cutter locations in the NC program, and the cutter is controlled to move from one location to the next. Since the actual interpolation algorithm is not disclosed, it is reasonable to adopt a basic CNC interpolation algorithm, namely a constant-acceleration interpolation algorithm, in this work as an example. Although the adopted algorithm differs from those in other CNC controllers, the method remains generic when the actual algorithm is used. The constant-acceleration interpolation algorithm is briefly described here. Suppose a step including two cutter locations $(x_i, y_i, z_i, B_i)$ and $(x_{i+1}, y_{i+1}, z_{i+1}, B_{i+1})$ in the machine coordinate system is fed into the machine along with the feed rates $f_i$ and $f_{i+1}$. Then, the instantaneous cutter locations $(x(t), y(t), z(t), B(t))$ are interpolated with a constant-acceleration algorithm, where $t$ is the time. The equations of the instantaneous cutter locations in the machine coordinate system are given by Equation (1), where $L_i$, $\Delta_i$, $a_i$ and $w_i$ represent the step length, the cutting time of this step, the average acceleration and the average angular velocity, respectively; they are computed with Equation (2). By using the machine kinematics, these cutter locations are converted into the workpiece coordinate system.
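The bodies of Equations (1) and (2) are lost in this copy of the text, so the following sketch reconstructs one plausible constant-acceleration interpolation from the stated definitions ($L_i$ step length, $\Delta_i$ step time, $a_i$ average acceleration, $w_i$ average angular velocity); the exact formulas of the paper may differ.

```python
import numpy as np

def interpolate_step(p0, p1, B0, B1, f0, f1, n_samples=20):
    """Constant-acceleration interpolation between two cutter locations.

    p0, p1 : XYZ positions (mm); B0, B1 : table angles (rad);
    f0, f1 : feed rates at the endpoints (mm/s).
    A plausible reconstruction, not the paper's exact Equations (1)-(2):
    the feed varies linearly from f0 to f1 over the step.
    """
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    L = np.linalg.norm(p1 - p0)              # step length L_i
    dt = 2.0 * L / (f0 + f1)                 # step time Delta_i (average feed)
    a = (f1 - f0) / dt                       # average acceleration a_i
    w = (B1 - B0) / dt                       # average angular velocity w_i
    ts = np.linspace(0.0, dt, n_samples)
    s = f0 * ts + 0.5 * a * ts**2            # arc length travelled at time t
    xyz = p0 + np.outer(s / L, p1 - p0)      # position along the straight step
    B = B0 + w * ts                          # interpolated B angle
    return ts, xyz, B

# Illustrative step: 10 mm along X, table turns 0 -> 0.1 rad, feed 2 -> 1 mm/s.
ts, xyz, B = interpolate_step([0, 0, 0], [10, 0, 0], 0.0, 0.1, 2.0, 1.0)
```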
Kinematics of the 4-Axis Machine Tool

The machine kinematics is established to convert the instantaneous cutter locations from the machine coordinate system into the workpiece coordinate system. Three coordinate systems are defined (see Figure 2). The origin $O_M$ of the machine coordinate system $CS_M$ is located at the center of the workpiece, and its $X_M$-, $Y_M$- and $Z_M$-axes are parallel to the x-, y- and z-axes of the machine tool, respectively. The workpiece coordinate system $CS_W$ is defined such that (a) $CS_W$ coincides with the machine coordinate system $CS_M$ when the rotation angle B of the machine tool's table is zero, and (b) $CS_W$ is rotated around the $Y_W$-axis by angle B when the rotation angle B is not zero; in this case, angle B represents the tool orientation. Finally, the tool coordinate system $CS_T$ is defined by setting (a) its origin $O_T$ at the cutter's tool tip and (b) its $X_T$-, $Y_T$- and $Z_T$-axes parallel to the $X_M$-, $Y_M$- and $Z_M$-axes of the machine coordinate system, respectively. The coordinates of the cutter's tool tip are expressed in the machine coordinate system $CS_M$.

Based on these coordinate systems, the machine kinematics is established. The transformation matrix $M_1(t)$ from the tool coordinate system to the machine coordinate system is given by Equation (3), the transformation matrix $M_2(t)$ from the machine coordinate system to the workpiece coordinate system by Equation (4), and the equivalent transformation matrix $M(t)$ from the tool coordinate system to the workpiece coordinate system by Equation (5).

Identification and Evaluation of Bottom Edge Cutting

For a flat end mill, the periphery cutting edges always cut the workpiece material at the preplanned cutting speed (see Figure 3). The bottom cutting edges, however, cut the workpiece material at a lower cutting speed than the preplanned one: the closer a point of the bottom cutting edge is to the tool tip, the lower its cutting speed. The tool tip, which is the center of the bottom cutting edges and is located on the cutter axis, has zero cutting speed. In effect, the workpiece material under the tool tip is rubbed away rather than cut off (see Figure 3), resulting in large cutting forces and quick tool wear. Therefore, the tool tip among the bottom edges is adopted to evaluate the bottom edge cutting.
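Since the matrix bodies of Equations (3)-(5) are missing here, the sketch below reconstructs them from the definitions given: $M_1(t)$ is a pure translation by the tool tip position (the tool axes stay parallel to the machine axes), and $M_2(t)$ is a rotation about the Y-axis by the table angle $B(t)$. The sign convention of the rotation and the cutter axis direction are assumptions.

```python
import numpy as np

def M1(tip_xyz):
    """Tool -> machine: pure translation by the tool tip position in CS_M."""
    m = np.eye(4)
    m[:3, 3] = tip_xyz
    return m

def M2(B):
    """Machine -> workpiece: rotation about the Y-axis by the table angle B.
    The sign of B is an assumption; the paper's Equation (4) may use -B."""
    c, s = np.cos(B), np.sin(B)
    return np.array([[  c, 0.0,   s, 0.0],
                     [0.0, 1.0, 0.0, 0.0],
                     [ -s, 0.0,   c, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def M(tip_xyz, B):
    """Equivalent tool -> workpiece transformation, Equation (5): M = M2 @ M1."""
    return M2(B) @ M1(tip_xyz)

# Tool tip position P(t) and cutter axis A(t) in the workpiece frame:
Mt = M([100.0, 20.0, 50.0], np.deg2rad(15))
P = (Mt @ np.array([0.0, 0.0, 0.0, 1.0]))[:3]   # tip = tool-frame origin
A = (Mt @ np.array([0.0, 0.0, 1.0, 0.0]))[:3]   # axis direction, assumed +Z_T
```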
In the workpiece coordinate system, the instantaneous cutter locations, which include the tool tip position P(t) and the orientation of the cutter axis A(t), are calculated with Equations (6) and (7). Due to the complex motion in 4-axis machining, the feed directions of the tool tip at the instantaneous cutter locations are different. The feed direction $\tilde{V}(t)$ is formulated from the derivatives $dx(t)/dt$ and $dz(t)/dt$ of $x(t)$ and $z(t)$ with respect to time $t$. At some moments, the feed direction $\tilde{V}(t)$ of the tool tip points inside the cutter (see Figure 4), and the bottom edge is then engaged in cutting; the corresponding instantaneous bottom edge feed rate $\tilde{f}(t)$ is given by Equation (8). According to Equation (8), a positive feed rate $\tilde{f}(t)$ indicates bottom edge cutting, while a negative $\tilde{f}(t)$ suggests no bottom edge cutting. An example of the instantaneous feed rate of the tool tip in a step is shown in Figure 5: from the beginning of cutting to the moment of 0.05 s, no bottom edge cutting occurs (green curve in Figure 5); after that moment, the bottom edge is involved in cutting (red curve in Figure 5). To evaluate the bottom edge cutting, a number of instantaneous feed rates of the tool tip in a step are sampled. The maximum of these is called the maximum instantaneous bottom edge feed rate of the step, and it is denoted $f_i^M$ (see Figure 5). Bottom edge cutting occurs if $f_i^M$ is positive; the larger the $f_i^M$, the severer the bottom edge cutting.

Feed Rate Optimization Model Considering Bottom Edge Cutting

The process of milling is unstable if the cutting forces are larger than normal, resulting in chatter, quick tool wear or cutting-edge chipping; in the worst case, the cutter breaks. At the end of the last section, we theoretically demonstrated that the bottom edge may be engaged in cutting, and this is represented by the maximum instantaneous bottom edge feed rate computed with Equation (8). Because the bottom edge cutting increases the cutting forces, the effect of bottom edge cutting is taken into consideration to control the cutting forces and achieve stable cutting. In this research, an optimization model is constructed to control the resultant cutting forces. The objective of the optimization is to minimize the cutting time, because the purpose of rough milling is to remove the large amount of material of the billets as quickly as possible. To identify the optimization variables, a few key factors of the rough milling process are analyzed. The tool paths are workpiece geometry-dependent and the spindle speed is determined by the workpiece material and the cutters; these two factors are not modified in machining. Therefore, the feed rates are selected as the optimization variables. Each cutter location in a tool path can have its own feed rate. For a step, if the maximum instantaneous bottom edge feed rate $f_i^M$ is positive, the feed rates at the cutter locations of this step are marked as to-be-optimized. All the to-be-optimized feed rates in a tool path are found and denoted $f_j^O$, $j = 1, 2, \ldots, m$. The optimization model is formulated as $\min T = \sum_{i=1}^{n} \Delta_i$, where $T$ is the total cutting time, the step times $\Delta_i$ are computed with Equation (2), and $n$ represents the number of steps in the tool path.
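Equation (8) itself is not legible in this copy. A natural reconstruction, used in the sketch below, is to take $\tilde{f}(t)$ as the component of the tool tip velocity along the negative cutter axis (positive when the tip feeds into the material under the bottom edges). This is an assumption about the lost formula, not the paper's verified definition.

```python
import numpy as np

def bottom_edge_feed_rate(V, A):
    """Assumed reconstruction of Equation (8): f~(t) = -V(t) . A(t),
    i.e. the tip velocity component along the negative cutter axis.
    Positive values indicate bottom edge cutting."""
    return -float(np.dot(V, A / np.linalg.norm(A)))

def max_bottom_edge_feed_rate(velocities, axes):
    """f_i^M: maximum instantaneous bottom edge feed rate over the
    sampled instants of one step."""
    return max(bottom_edge_feed_rate(V, A) for V, A in zip(velocities, axes))

# Illustrative samples: the tip first moves sideways, then dips along -axis.
axes = [np.array([0.0, 0.0, 1.0])] * 3
velocities = [np.array([2.0, 0.0, 0.0]),
              np.array([2.0, 0.0, -0.3]),
              np.array([2.0, 0.0, -0.8])]
fM = max_bottom_edge_feed_rate(velocities, axes)
print(fM > 0, fM)  # True 0.8 -> bottom edge cutting occurs in this step
```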
The optimization model is subject to the following constraints.

Constraint 1. The cutting forces must be controlled within an acceptable range. Here, a practical and simple control method is proposed. Based on the well-established mechanistic model of cutting forces, the milling forces are proportional to the area of the swept cross-section of the cutting edges. For the periphery cutting edges, this area $A_i^P$ is calculated by $A_i^P = a_p \cdot f_{z_i}$, where $a_p$ is the axial cutting depth and $f_{z_i}$ is the feed rate per tooth, $f_{z_i} = f_i/(S \cdot Z)$, with $S$ and $Z$ being the spindle speed and the cutter's tooth number, respectively, and $f_i$ the preplanned feed rate. The cutting force caused by the periphery cutting edges, $F_i^P$, is computed with Equation (10), where $C_S$ is the cutting force coefficient of the side (periphery) edges of the cutter. When the bottom edges are engaged in cutting, the cutting force coming from the bottom edges is calculated with Equation (11), where $r$ represents the cutter radius, $f_i^M$ is the maximum instantaneous bottom edge feed rate, and $C_B$ is the cutting force coefficient of the bottom edges. It is worth noting that $f_i^M(f_i)$ varies with the feed rate $f_i$. According to Zhu's research [5], $C_B$ is fairly close to $C_S$; thus we approximately assume $C_B = C_S$ in our research, so that $F_i^B$ is computed with Equation (11). After optimization, the feed rate $f_i$ is replaced by the optimized one, $f_i^O$, and the resultant cutting force combining the periphery cutting edges and the bottom cutting edges is given by Equation (12). To eliminate the additional effect of the bottom edge cutting on the resultant cutting force, the cutting force caused only by the periphery cutting edges, $F_i^P$, is taken as the acceptable threshold. This threshold requires $F_i^O < F_i^P$. By plugging Equations (10) and (12) into this inequality and simplifying it, the first constraint, Equation (13), is obtained.

Constraint 2. To ensure that the cutter's acceleration does not exceed the dynamics of the machine tools, the second constraint is $|a_i| \le a_{max}$, where $a_{max}$ is the limit of acceleration on the linear axes. The limit of angular acceleration is not handled in this study, because the adopted constant-acceleration interpolation algorithm does not take the angular acceleration into consideration.
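Because Equations (10)-(13) are not legible here, the sketch below adopts the simplest force model consistent with the text: forces proportional to the swept cross-section, $F^P = C_S\,a_p f_z$ for the periphery edges and, as an assumed analogue, $F^B = C_S\,r\,f^M/(S Z)$ for the bottom edges. The bottom edge term in particular is a guess at the lost Equation (11), and all the numbers are illustrative.

```python
def periphery_force(f, a_p, S, Z, C_S):
    """F^P = C_S * a_p * f_z with f_z = f / (S * Z)  (text's Equation (10))."""
    return C_S * a_p * f / (S * Z)

def bottom_force(fM, r, S, Z, C_S):
    """Assumed analogue of Equation (11): the bottom edges sweep roughly
    r * (axial feed per tooth); C_B = C_S as stated in the text."""
    return C_S * r * fM / (S * Z)

def constraint_ok(f_opt, fM_opt, f_pre, a_p, r, S, Z, C_S):
    """Constraint 1: the resultant force at the optimized feed rate must stay
    below the periphery-only force at the preplanned feed rate."""
    F_resultant = (periphery_force(f_opt, a_p, S, Z, C_S)
                   + bottom_force(fM_opt, r, S, Z, C_S))
    return F_resultant < periphery_force(f_pre, a_p, S, Z, C_S)

# Illustrative numbers (not the paper's calibration): a_p = 4 mm, r = 5 mm.
print(constraint_ok(f_opt=95, fM_opt=10, f_pre=120,
                    a_p=4, r=5, S=3000, Z=4, C_S=50.0))  # True
```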
Verification and Application

To demonstrate its validity, this approach is applied to a tool path for rough machining a blisk on a 4-axis machine tool. Twenty channels of the blisk need to be machined (see Figure 6a). Each channel is 43 mm in height and 42 mm in width. The channel is machined with a flat end mill of 5 mm radius. Twenty-six tool paths are planned using UG NX software, and the channel is cut layer by layer. The axial cutting depth a_p is 4 mm. One of the tool paths is shown in Figure 6b. This tool path cuts the channel from the leading edge to the trailing edge. The preplanned feed rates are determined based on the tool vendor's recommendations and cutting experiments. According to the preplanned tool path and its feed rates, the maximum instantaneous bottom edge feed rates are computed using the evaluation method proposed in Section 2, and they are plotted with a red asterisk line in Figure 7. It is clear that the bottom edge is involved in cutting in zone I (cutter locations no. 7 to 11) and zone II (cutter locations no. 25 to 94). Therefore, these preplanned feed rates, from f_7 to f_11 and from f_25 to f_94, are marked as to-be-optimized. They are plotted with a red dotted line in Figure 7.

The optimization model is established as described in Section 3, in which the maximum acceleration a_max of the machine tool is set to 9.8 m/s². The Genetic Algorithm (GA) built into MATLAB is employed to solve the optimization model. Since this paper does not focus on the optimization method, GA parameters with MATLAB default values are adopted, except that the size of the population is set to twice the number of cutter locations marked to-be-optimized. The optimized feed rates are determined and plotted with a green dotted line in Figure 7. Accordingly, the maximum instantaneous bottom edge feed rates determined by these optimized feed rates are re-evaluated and plotted with a green asterisk line. In the zone near cutter location no. 63, the bottom edge cutting reaches its maximum, and the optimized feed rate is significantly reduced from 120 mm/min to 95 mm/min. Therefore, the cutting force can be controlled and the tool is protected. After optimization, the machining time is slightly increased from 30.6 s to 32.4 s.
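The paper solves the model with MATLAB's built-in GA. As a rough, hypothetical stand-in, SciPy's differential_evolution (another population-based optimizer) can minimize the total cutting time with a penalty for violating constraint (13); the segment lengths L, the f_M re-evaluation function and the bounds below are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from scipy.optimize import differential_evolution

def optimize_feed_rates(L, f_pre, f_M_of, a_p, r, f_min=10.0):
    """Minimize total cutting time sum(L_j / f_j) subject to the simplified
    bottom-edge force constraint (13), enforced via a quadratic penalty."""
    L, f_pre = np.asarray(L, float), np.asarray(f_pre, float)

    def objective(f):
        time = np.sum(L / f)
        # Penalty for violating a_p*f_j + r*f_M(f_j) < a_p*f_pre_j.
        # f_M_of is assumed to re-evaluate Equation (8) element-wise.
        viol = np.maximum(a_p * f + r * np.maximum(f_M_of(f), 0.0)
                          - a_p * f_pre, 0.0)
        return time + 1e3 * np.sum(viol ** 2)

    bounds = [(f_min, fp) for fp in f_pre]  # never exceed the preplanned rates
    # SciPy's popsize is a per-variable multiplier, so popsize=2 mirrors the
    # paper's choice of a population twice the number of optimized locations.
    result = differential_evolution(objective, bounds, popsize=2, seed=0)
    return result.x
```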
It is readily seen that this approach can effectively decrease the effect of bottom edge cutting by optimizing the feed rate.

A further experiment is conducted to verify the reduction in cutting force. A tool path is programmed using UG NX to cut the channel. In the experiment, a Kistler 9367C dynamometer is employed to measure the cutting forces before and after the optimization (see Figure 8). By evaluating the bottom edge cutting, the feed rates all over the tool path are marked as to-be-optimized. The preplanned and the optimized feed rates are plotted in Figure 9. With the preplanned and the optimized feed rates, two channels are cut on the 4-axis machine tool and their cutting forces are measured, respectively. The resultant forces are also plotted in Figure 9.

Figure 9. Comparisons of the feed rates and the cutting forces before and after optimization.

As can be seen in Figure 9, the cutting forces vary significantly along the whole path due to the complex cutter motion in 4-axis machining. Moreover, careful observation of the cutting forces indicates that the optimized cutting forces are reduced roughly in proportion to the reduction in the feed rate. In the area near cutter location no. 14, the cutting forces are reduced noticeably, since the feed rate drops by about 25% in that area. The maximum reduction in cutting force occurs at cutter location no. 11, where the cutting force is reduced by 28.7% (from 35.2 N to 25.1 N) as the feed rate drops by 19.3% (from 90 mm/min to 72.6 mm/min).
The actual machining time is increased from 31.5 s before optimization to 34.8 s after optimization.

Conclusions

This paper proposes an analytical approach to identify and evaluate bottom edge cutting in the rough milling of complex parts. By using the CNC interpolation algorithm, the motion of the cutter's tool tip with respect to the workpiece material is formulated. As a benefit of doing so, the motion of the bottom edges of the cutter is represented precisely in accordance with the specific CNC controllers. The mechanism of bottom edge cutting is analyzed. The motion vector of the tool tip is projected onto the opposite of the cutter's axis to calculate the signed feed rate of the tool tip engaging into the workpiece material. A number of feed rates are sampled within each step of the tool paths, and the maximum of these feed rates is used to evaluate the bottom edge cutting; the sign of this feed rate is employed to identify the bottom edge cutting. Then, the cutting forces caused by the bottom edges are estimated by computing the area of the cross-section swept by the bottom edges multiplied by the cutting force coefficient. An optimization model is established to achieve high roughing efficiency by optimizing the feed rates while constraining the combination of the bottom edge cutting forces and the periphery edge cutting forces. The results of simulation and experiment show that the bottom edge cutting is identified and the cutting force is controlled by optimizing the feed rates without losing much efficiency in rough milling blisks of aero-engines. We believe that this approach can be directly implemented in rough milling impellers of aero-engines and other complex parts in industry.
2022-12-25T05:10:04.862Z
2022-11-25T00:00:00.000
{ "year": 2022, "sha1": "55493fb0d1a36b2ac3c40de43a55a476d3ff53cc", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-666X/13/12/2071/pdf?version=1669388290", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "55493fb0d1a36b2ac3c40de43a55a476d3ff53cc", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
4871144
pes2o/s2orc
v3-fos-license
Soybean Meal Quality and Analytical Techniques

Soybean meal is considered the "gold standard" among intact protein sources used in the feed industry (Cromwell, 1999). It has an excellent amino acid profile that complements cereal grains in diet formulation, as methionine is typically the only limiting amino acid for poultry. Soybean meal is characterized as either from dehulled beans or beans having hulls (NRC, 1994). Dehulled soybean meal has a higher composition of crude protein, amino acids and metabolizable energy than soybean meal produced from soybeans having hulls (NRC, 1994). Soybean meal is known to vary in amino acid composition among samples. Geographical location of soybean production, soybean variety, and processing methods are factors known to influence variability of crude protein and amino acid composition of soybean meal (Parsons et al., 1991, 2000; de Coca-Sinova, 2008, 2010; Baker et al., 2011). de Coca-Sinova (2008) evaluated the amino acid composition of soybean meal samples obtained from Argentina, Brazil, Spain, and the United States. Crude protein content varied from 45.2 to 50.6%, with lysine expressed as a percent of crude protein ranging from 5.51 to 6.26%. Samples from Spain had the highest crude protein content, whereas lysine expressed as a percentage of crude protein was highest for samples obtained in the United States. Moreover, soybean varieties are being selected to contain higher amino acid concentrations than conventional soybean varieties, resulting in soybean meals having more balanced amino acid content for swine and poultry diets (Baker and Stein, 2009; Baker et al., 2011). Baker et al. (2011) reported high-protein soybean meal having crude protein and lysine compositions of 54.86 and 3.56%, compared with conventional soybean meal containing crude protein and lysine contents of 47.47 and 3.14%. Soybean meal is known to vary in crude protein and amino acid content among soybean production years, and using current amino acid databases of soybean meal composition is important to avoid variability in diet formulation for swine and poultry (Table 1). Amino acids originating from intact protein sources are not digested and absorbed with 100% efficiency. Formulating diets on a digestible amino acid basis is increasing around the globe, and this formulation strategy allows for the use of lower-cost feed ingredients that may contain amino acids that are less available to the animal while minimizing nitrogen excretion. Digestible amino acid composition is calculated by multiplying a digestibility coefficient by the total amino acid composition; the digestibility coefficient is the digestibility percentage of an amino acid in a specific feed ingredient or a complete diet. In poultry, amino acid digestibility coefficients for feed ingredients are typically determined using a true digestibility assay with cecectomized roosters (Parsons, 1986) or a standardized amino acid assay using broilers (Lemme et al., 2004). Amino acid digestibility coefficients have been reported to be higher with cecectomized roosters compared with broilers (Garcia et al., 2007; Adedokun et al., 2007). Amino acid digestibility assays are highly variable, and a large number of assays are needed for specific feedstuffs to generate accurate digestibility coefficients. Amino acid digestibility coefficients for soybean meal have been found to range from 82 to 93% (Table 2). The digestible amino acid calculation is illustrated in the sketch below.
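As a minimal illustration of the digestible amino acid calculation described above, the sketch below multiplies total amino acid content by a digestibility coefficient. The worked values (conventional soybean meal lysine at 3.14%, per Baker et al., 2011, with an assumed coefficient of 0.89, within the 82 to 93% range cited) are for demonstration only.

```python
def digestible_content(total_pct, digestibility_coeff):
    """Digestible amino acid content = total content x digestibility coefficient."""
    return total_pct * digestibility_coeff

# Hypothetical worked example: conventional soybean meal lysine (3.14% of the
# meal) with an assumed digestibility coefficient of 0.89 (within the 82-93%
# range reported for soybean meal).
lysine_total = 3.14  # % of meal
lysine_digestible = digestible_content(lysine_total, 0.89)
print(f"Digestible lysine: {lysine_digestible:.2f}%")  # ~2.79%
```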
Table 1 notes (Sriperm et al.): (1) values are expressed on a dry matter basis as average ± SD and were determined from samples analyzed in 2009 at Ajinomoto Heartland LLC's amino acid laboratory; (2) values are expressed on an "as-is" basis as average ± SD and were determined from samples analyzed in 2010 at Evonik's amino acid laboratory; (3) values are expressed on an "as-is" basis as average ± SD and were determined from samples analyzed in 2010 at Novus International Inc.'s amino acid laboratory.

Soybean meal in poultry and swine feeds

Soybean meal is the most commonly used source of protein for poultry and swine feeds in the world, with 67% of the animal feed market (Pettigrew et al., 2002). In order for a feed ingredient to be considered an important component of an industry feeding program, it must have several fundamental qualities. First, it must provide one or more important nutrients. Second, it must be available in amounts that allow it to be used regularly and on a large scale. Third, it must be cost-effective to use. Soybean meal abundantly fits into this category as a high-protein product with good amino acid balance that is highly digestible. It is available in large quantities year-round and has had most of the associated antinutritional compounds inactivated. Interestingly, antinutritional factors in soybeans are relatively easy to inactivate and are reduced substantially by normal soybean processing. This is in contrast to many of the other commonly used plant proteins that have non-labile antinutritional factors (Pettigrew et al., 2002). In the early years of compound feed production, grain products were paired with animal protein meals that provided a natural balance of vitamins and minerals in addition to protein. As animal protein products such as fishmeal became more expensive, and synthetic sources of vitamins (particularly vitamin B12) were developed, soybean meal captured a larger portion of the animal feed protein market. Modern feed formulation programs further increased the demand for soybean meal as the principal protein source as least-cost diet formulation became more common. Worldwide, nearly two-thirds of the protein sources used in animal feeds come from soybean meal, with canola meal, cottonseed meal and sunflower meal providing additional plant protein sources. In the United States, plant protein source usage in animal feeds is primarily (92%) soybean meal. Over half of the soybean meal produced in the United States is fed to poultry (Waldroup and Smith, 1999). Approximately 66% of protein in broiler feeds comes from soybean meal. With the development of reasonably priced synthetic methionine sources, feed manufacturers are now able to produce relatively simple feeds based on a combination of corn and soybean meal with supplementation of minerals, vitamins and methionine. Swine account for 27% of the soybean meal used in animal feeds in the United States. Soy protein's digestibility, combined with a relative abundance of lysine, which is the first limiting amino acid in swine feeds, makes soybean meal an excellent protein source for swine. Most areas of swine and poultry production have economical access to soybean meal for compounding animal feeds. In some places, however, local access to soybeans has led to interest in the processing of full fat soybean meals for local usage.
Full fat soybean meal, often an extruded product, has the advantage of higher energy values due to the full complement of oil in the native seeds, as compared to commercial soybean meal, which has had most of the oil extracted for sale (Reese and Bitney, 2000). Other advantages include: 1) the addition of fat to a feed in a more easily handled granular form and 2) the addition of fat to a feed in a form that is less likely to reduce pellet quality (Waldroup, 1985). Performance results indicated that there was significant variation in the nutrient content of various batches of extruded soybean meals (Reese and Bitney, 2000). The authors concluded that it would be difficult to compare extruded soybean meal to regularly processed soybean meal for this reason. It would be wise, when considering these products, to do extra nutrient analysis. Numerous research groups have explored the use of full fat soybean meals in poultry feeds as well (Waldroup, 1985). Extruded full fat soybean meals have seen limited use, although dry roasting, followed by grinding, has also been tested. Waldroup and Cotton (1974) determined the levels of full fat soybean meal that could be included in mash broiler feeds before performance suffered (less than 25%). Higher levels could be utilized in pelleted broiler feeds because the pelleting process causes more cell wall disruption and increases the digestibility of full fat soybean meal products (Waldroup and Cotton, 1974). Soybean geneticists are continually improving productivity characteristics of soybeans for crop production. Additionally, efforts have been underway for some time to enhance the quality of soybeans in relation to animal feeding of soybean meal (Bajjalieh, 2002). Areas of interest include increasing levels of sulfur-containing amino acids, increasing the proportion of soybean meal phosphorus that is available for digestion (reducing phytate-bound phosphorus) and increasing energy availability through selection away from carbohydrate fractions of low availability to monogastrics.

Protein digestion

Dietary protein consists of complex polypeptides, which must be cleaved into dipeptides and amino acids to facilitate absorption. In poultry, the crop, proventriculus, gizzard, pancreas, and small intestine have an active role in protein digestion (Moran, 1982). Proteolysis is the first stage of digestion, and it occurs in the proventriculus and gizzard (Hill, 1971). The contents found in the proventriculus and gizzard have a pH of 1.80 and 2.50, respectively, which is relatively lower than the crop, small intestine, cecum, and cloaca (Figure 1). This low pH is central to gastric digestion. The proventriculus is the site of pepsin and HCl production and contains gastric glands located in the mucosa (Toner, 1963). At low pH, protein denaturation occurs through unfolding of proteins and cleavage of peptide bonds by pepsin, which is an endopeptidase.

Fig. 1. pH of the contents in the digestive tract of poultry (Herpol and Van Grembergen, 1967).

One of the functions of the pancreas is to supply digestive enzymes for protein digestion (Brody, 1994). Trypsin, chymotrypsin A, chymotrypsin B, proelastase, and carboxypeptidase are produced by the pancreas, and these enzymes are endopeptidases with the exception of carboxypeptidase (Brody, 1994). Pancreatic enzymes play a central role in protein digestion in the small intestine by breaking down polypeptides into oligopeptides (Alpers, 1994; Lowe, 1994).
Approximately 13 peptidases are present in the brush border membrane or the cytoplasm of the small intestine that break down oligopeptides into dipeptides and amino acids (Alpers, 1994). The resulting dipeptides and amino acids are absorbed in the small intestine for the synthesis of body proteins. Soybean meal that has been underprocessed contains trypsin inhibitors, which are antinutritional factors. These proteins bind to trypsinogen and chymotrypsinogen, preventing the conversion into their active forms and limiting protein digestion. A detailed description of trypsin inhibitors is given in the following section.

Trypsin inhibitor in soybean meal and protein digestion

Growth depression effects due to antinutritional factors present in soybeans have been well-documented for more than half a century (Ham et al., 1945; Chernick et al., 1948; Liener, 1953; Lyman and Lepkovsky, 1957; Gestetner et al., 1966). Trypsin inhibitor is the primary antinutritional factor in soybean meal (Araba and Dale, 1990a,b; Anderson-Hafermann et al., 1992; Mian and Garlich, 1995); it is a globulin-type protein having a molecular weight of 24,000 and an isoelectric point of 4.5 (Kunitz, 1945). Trypsin inhibitor inhibits the conversion of zymogens to the active proteases trypsin and chymotrypsin. The mechanism of action differs for trypsin and chymotrypsin (Kunitz, 1947). Trypsin inhibitor binds with trypsinogen to form an irreversible compound, preventing the formation of an active protease. Conversely, the trypsin inhibitor's action on chymotrypsin is less pronounced, forming a reversible, dissociable compound (Northrop, 1922). In addition to its detrimental effects on proteolytic action, trypsin inhibitor dramatically affects the size of the pancreas and the amount of trypsinogen produced. Chernick et al. (1948) reported that pancreas weight as a percent of body weight was increased by 56%, with 43% higher trypsinogen content per gram of pancreas nitrogen, in chicks fed diets containing raw soybean meal compared with diets containing heat-treated soybean meal. Moreover, Lyman and Lepkovsky (1957) reported low trypsin content in the small intestine of rats immediately after feeding a diet containing raw soybean meal, which increased to 3-fold the normal concentration 6 hours post-feeding. This provides evidence that the pancreas produced trypsinogen in excess to compensate for the trypsin inhibitor, hence the trypsin content observed several hours after feeding. The inhibitory action is reduced by subjecting soybeans or soybean meal to heat, which deactivates the antinutritional toxins (Hayward et al., 1936; Kunitz, 1947). Broiler growth has been shown to be increased by approximately 140 to 150% by autoclaving raw hexane-extracted soybeans or soybean meal, compared with chicks fed diets containing raw hexane-extracted soybeans or soybean meal not subjected to heat (Araba and Dale, 1990b; Anderson-Hafermann, 1992). If adequate heat is not applied during soybean processing, the resulting soybean meal will contain active toxins that compromise its nutritional value.

Overheating of soybean meal

Overheating of soybean meal reduces its nutritional value for poultry (Renner et al., 1953; Warnick and Anderson, 1968; Araba and Dale, 1990a). It has been shown that overcooking of soybean meal decreases digestibility of amino acids (Lee and Garlich, 1992; Parsons et al., 1992).
The explanation for the decreased amino acid digestibility and reduced growth responses appears to be related to the Maillard reaction, with cross-linking involved to a lesser extent. Parsons et al. (1992) examined the effects of overprocessing dehulled, solvent-extracted soybean meal by autoclaving at 121 °C and 105 kPa for 0, 20, 40, and 60 min. Increasing the time of autoclaving reduced the total concentration of lysine, arginine and cysteine, but other amino acids were not influenced by overprocessing. The largest decrease in true amino acid digestibility occurred with lysine, cystine, histidine, and aspartic acid, whereas digestibility of threonine, serine, alanine, and leucine was decreased to a lesser extent. Moreover, a growth assay using broiler chicks determined that autoclaving at 121 °C for 40 min reduced lysine bioavailability by 15% compared with birds fed soybean meal not subjected to autoclaving. The destruction of lysine and arginine content of soybean meal and the reduced lysine digestibility due to autoclaving indicate the presence of the Maillard reaction. In addition to chemical composition, color differences are apparent in soybean meal subjected to overprocessing, indicating browning during the latter stage of the Maillard reaction (Figure 2). The Maillard reaction is a series of complex reactions occurring when feed ingredients, food, and animal tissues are subjected to overprocessing (Iqbal et al., 1999; Fayle and Gerrard, 2002). The series of reactions involves early, advanced, and final stages (Mauron, 1981). In the early reactions, amino groups react with aldehyde groups of free sugars producing a Schiff base, which cyclizes to form a glycosylamine (Mauron, 1981; Dillis, 1993). The glycosylamine undergoes a rearrangement to form either Amadori products (1-amino-1-deoxy-2-ketose) if produced from glucose or Heyns products if derived from fructose. In this series of reactions, the ε-amino group of lysine is affected the most; α-amino groups located at the terminal end of proteins are also involved, but to a lesser extent. With lysine, an aldose is changed to a ketose, creating a fructosyl-lysine. In the advanced reactions, Amadori or Heyns products are decomposed to form deoxydicarbonyl sugars, and these resulting sugar derivatives can react with other amino acids producing aldehydes, ketones, and/or deoxydicarbonyl compounds (Dillis, 1993). Heterocyclic compounds (pyrazines, pyrroles, pyridines, and thiazoles) are formed during the latter stages of these reactions and are known to provide aromas and flavor to food (Mauron, 1981; Dillis, 1993). In the final reactions, food or feed ingredients are characterized by a dark color associated with the brown melanoidin pigments produced by this set of reactions, hence the "browning" name well known for the Maillard reaction (Hurrell and Carpenter, 1981). Proteins are modified through cross-linking reactions as deoxydicarbonyl sugars or carbonyl compounds react with amino acids (Mauron, 1981; Dillis, 1993). Poor digestibility of intact protein sources subjected to overprocessing (Maillard reaction) may be due to the formation of Amadori or Heyns products, reduced absorption of lysine, and the formation of cross-links (Mauron, 1981; Sherr et al., 1989; Dillis, 1993). Sherr et al. (1989) determined that, in the presence of Maillard products derived from lysine (glycosylated lysine derivatives), absorption of lysine was inhibited.
The glycosylated lysine derivatives compete with lysine for absorption carriers, but the majority of these derivatives are poorly utilized, with excretion being 72 and 96% of the amounts absorbed. The cross-links are not very digestible, as endogenous proteases are not able to cleave these complexes during digestion, resulting in poor utilization by the animal. Soybean meal contains sugar complexes in the form of raffinose and stachyose, and overprocessing may contribute to Maillard reactions (Hancock et al., 1990). Cysteine content has been shown to be reduced in soybean meal with overprocessing. Cysteine is not thought to be involved with Maillard reactions, but rather to form lanthionine during overprocessing (Miller et al., 1965; Hurrell et al., 1976). With the formation of lanthionine, cysteine would be expected to decrease when soybean meal is subjected to overprocessing.

Analytical assays to estimate soybean meal quality

Based on the popularity of soybean meal as a protein source in poultry and swine feeds, it is not surprising that quite a lot of time and effort are expended in measuring soybean meal protein quality. Over the years, a number of techniques have been examined to measure the protein quality of plant protein products. Those most used in practice have changed as research-based comparisons of the various techniques have shed light on the relative merits of each. Currently, the analytical technique most commonly used to measure soybean meal quality is protein solubility, perhaps combined with the urease test. Protein solubility has been a tool to test soybean meal solubility for many decades (Circle, 1938; Lund and Sandstrom, 1943). These early attempts examined protein solubility in water. Later, a range of acid and alkaline chemicals were compared for their utility in measuring soybean meal protein solubility. More recently, Araba and Dale (1990a) and Parsons et al. (1991) examined the use of a 0.2% potassium hydroxide (KOH) solution. Protein (nitrogen) concentration is then quantified using the Kjeldahl method. In general, KOH solubility decreases as the degree of heat treatment associated with soybean processing increases. While raw soybean products would be 100% soluble, they obviously have a full complement of antinutritional factors that have not been deactivated. Research comparing protein solubility to other measures of protein quality indicates that KOH solubilities between 78 and 84% are optimal for animal performance. Values ranging from 84 to 89% are slightly underprocessed and may be acceptable for older animals, while values under 74% are overprocessed and will have reduced lysine digestibility; a simple interpretation of these ranges is sketched below. Araba and Dale (1990b) compared protein solubility to Orange G binding and trypsin inhibitor activity. They found that protein solubility compared favorably to measurements of broiler growth and trypsin inhibitor activity, while the Orange G binding technique was not sensitive to processing changes in autoclaved soybean meals (Figure 3). The combined works of Araba and Dale (1990a,b) concluded that the KOH solubility test is useful for detecting both over-processed and under-processed soybean meals. The urease test has been used for some time as a measure of soybean meal processing. Urease is an enzyme in soybean meal that is of little interest in animal nutrition. It is, however, easier to measure than many of the antinutritional factors of interest.
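Before continuing with the urease test, the KOH solubility ranges above can be summarized as a compact classifier; the 74 to 78% band is not explicitly classified in the cited ranges, so it is flagged as borderline here.

```python
def classify_koh_solubility(pct):
    """Interpret a 0.2% KOH protein solubility result (percent) using the
    ranges cited in the text above."""
    if pct < 74:
        return "overprocessed (reduced lysine digestibility)"
    elif 78 <= pct <= 84:
        return "optimal for animal performance"
    elif 84 < pct <= 89:
        return "slightly underprocessed (may suit older animals)"
    elif pct > 89:
        return "underprocessed"
    else:
        return "borderline (74-78%; not classified in the cited ranges)"

print(classify_koh_solubility(81))  # -> optimal for animal performance
```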
Because trypsin inhibitors and lectins are denatured by heat processing of soybeans at a similar rate to the urease enzyme, testing for urease is a useful marker for the degree of soybean meal underprocessing (Caskey and Knapp, 1944; Wright, 1981). Unfortunately, the urease test does not do an adequate job of measuring overprocessed meals. Over time, meals showing a 0.05 to 0.15 change in pH were considered properly processed for poultry. Recently, meals higher than a 0.15 pH change have been deemed usable by older chickens. Also, changes in soybean processing methods have raised questions regarding the lower range of this test (i.e., levels under 0.05 pH may not cause problems).

Fig. 3. Effects on protein solubility and Orange G binding of overprocessed soybean meal (Araba and Dale, 1990b).

Despite the ease of measuring the urease enzyme as opposed to more complicated assays, it is possible to routinely measure trypsin inhibitors in soybean meals. Directly measuring trypsin inhibitors in soybean meals is obviously a desirable assay, as trypsin inhibitors are one of the major antinutritional factors of note. Kakade et al. (1974) described the most commonly used method for determining trypsin inhibitors in soybean products for animal feeds. Work by McNaughton et al. (1981) indicated that direct measurement of trypsin inhibitor levels was an accurate indicator of animal performance for undercooked soybean products. For practical applications, the easier-to-complete urease test still predominates as a marker for under-processed soybean meals. The use of Orange G dye to determine the amount of heat processing a soybean meal sample has been subjected to is based on the dye's ability to bind the free ε-amino group of lysine under acidic conditions. As lysine progressively becomes less available during extended heat processing, less of the Orange G dye can bind. Moran et al. (1963) correlated Orange G dye binding with broiler chick growth and found agreement across a range of heat treatments (autoclaving in this case). Araba and Dale (1990b) found protein solubility more sensitive to soybean meal processing variation than the Orange G binding technique. There are other dye-binding tests that have been suggested as methods to monitor soybean meal quality, including the cresol red test (Olomucki and Bornstein, 1960; Vorha and Kratzer, 1991) and Coomassie blue staining (Vorha and Kratzer, 1991). A Coomassie blue dye solution can be used to titrate protein solubility after KOH treatment in place of the Kjeldahl protein test (Kratzer et al., 1990). The optical density of the stained proteins is then measured against a set of lysozyme standards at 595 nm. Coomassie blue staining may be more accurate than the Kjeldahl procedure at measuring protein solubility because Coomassie blue binds with intact proteins and not free amino acids (Vorha and Kratzer, 1991); also, the Coomassie blue dye test would be faster in producing results than using the Kjeldahl portion of the KOH solubility test. Because this is, in essence, a KOH solubility test, it is particularly useful in detecting overprocessed soybean meals. Protein dispersibility index refers to the amount of soybean meal protein dispersed in water after blending a soybean meal sample in water with a high-speed blender. Research by Batal et al. (2000) correlated chick growth with several methods of soybean meal quality assessment in meals that had been heat treated.
Their results indicated that the protein dispersibility index was a sensitive measure of soybean meal quality and gave better results than either the urease or protein solubility assays. Protein dispersibility indexes of 40 to 45% indicate a soybean meal that is neither over- nor under-processed. These authors suggested that the protein dispersibility index will give an accurate picture of soybean processing if paired with another test such as the urease test. A number of other tests have been proposed to measure soybean meal quality, including formaldehyde titration (Almquist and Maurer, 1953) and a fluorescence test (Hsu et al., 1949). In conclusion, the nutritional quality of soybean meal is of utmost importance to optimize the rate and efficiency of growth of poultry. It is necessary for ingredient quality control programs to understand the appropriate assays to determine whether soybean meal has been subjected to under- or over-processing (Table 3). The protein solubility assay is easily conducted and provides more reproducible results than the trypsin inhibitor activity assay. A value greater than 85% denotes underprocessing, whereas a protein solubility index less than 74% indicates overheating. The protein dispersibility index is also a useful tool to measure protein quality, with values ranging from 40 to 45% denoting acceptable quality. Conversely, urease activity is useful only for detecting underprocessing because its activity falls to zero as soybean meal is exposed to overprocessing. Moreover, Orange G binding capacity exhibits little change with soybean meal subjected to overprocessing, hence this assay may not be appropriate to detect overheated soybean meal. A combined screening along these lines is sketched below.
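To make this concluding guidance concrete, a minimal screening sketch is given below. The thresholds come from the text above (KOH solubility above 85% underprocessed, below 74% overheated; PDI 40 to 45% acceptable; urease change of 0.05 to 0.15 pH units proper), and the function is illustrative rather than an official assay protocol.

```python
def screen_soybean_meal(koh_solubility, pdi, urease_dph):
    """Flag processing problems from the assay guidance summarized above."""
    flags = []
    if koh_solubility > 85:
        flags.append("KOH solubility suggests underprocessing")
    elif koh_solubility < 74:
        flags.append("KOH solubility suggests overheating")
    if not 40 <= pdi <= 45:
        flags.append("PDI outside the 40-45% acceptable window")
    if urease_dph > 0.15:
        flags.append("urease activity suggests underprocessing")
    # Note: urease cannot detect overprocessing (activity falls to zero),
    # so a low urease value alone is not evidence of adequate processing.
    return flags or ["no processing problem flagged"]

print(screen_soybean_meal(koh_solubility=80, pdi=43, urease_dph=0.10))
```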
2017-09-13T14:05:01.380Z
2011-09-12T00:00:00.000
{ "year": 2011, "sha1": "d3739f6dd761ecf1a697a636742577dd01944afc", "oa_license": "CCBYNCSA", "oa_url": "https://www.intechopen.com/chapter/pdf-download/19977", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "707fda2f056bd09e8f87d5d2026d260cf5d8985d", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Chemistry" ] }
255970153
pes2o/s2orc
v3-fos-license
Causes and trends of adult mortality in southern Ethiopia: an eight-year follow-up database study

Background: Broad and specific causes of adult mortality are often neglected indicators of wellbeing in low-income countries like Ethiopia due to the lack of strong vital statistics. Thus, this database study aimed to assess the causes of adult mortality using demographic surveillance data. Methods: Eight years (12 September 2009 to 11 September 2017) of surveillance data from the Arba Minch Health and Demographic Surveillance Site were used for this study. Verbal autopsy methods and ICD codes were used to identify the causes of the adult deaths. The collected data were entered into the database by data clerks. We used Microsoft Excel and STATA version 16 software for data cleaning and analysis. A chi-squared test was used to assess the significance of the trend analyses. Results: Of the 943 adult deaths from 2009 to 2017 in the Health and Demographic Surveillance Site in southern Ethiopia, more than half were females. The leading specific causes of death in adults were tuberculosis (16.8%), malaria (9.7%), and intestinal infectious diseases (9.6%). Communicable diseases (49.2%, 95% C.I. 45.7, 52.7) accounted for about half of the deaths, followed by non-communicable diseases (35%, 95% C.I. 31.7, 38.4), and both categories showed an increasing trend. Conclusion: Although evidence points to a shift from communicable diseases to non-communicable diseases as the major causes of adult death in developing countries, this study showed that communicable diseases are still the major causes of adult deaths. Efforts and emphasis should be directed at controlling infectious diseases such as tuberculosis and malaria. Supplementary Information: The online version contains supplementary material available at 10.1186/s12879-023-07988-5.

According to the World Health Organization (WHO) Africa region, the overall adult mortality of the region was 308 per 1000 population; in Ethiopia, it was 218, 239, and 198 per 1000 population for both sexes, males and females, respectively, in 2015 [6]. This is the only WHO region where the leading causes of death (52.9-56%) are still communicable, maternal, neonatal, and nutritional conditions, which killed 2.3 million people in 2016 [2, 4, 7]. Evidence from International Network for the Demographic Evaluation of Populations and Their Health (INDEPTH) Health and Demographic Surveillance System sites in Asia and Africa reported a mortality rate of 10.9 per 1000 person-years, whereby 35.6% of the deaths were due to non-communicable diseases [8]. A triple burden of infectious diseases, chronic diseases, and external injuries is still resulting in only a modest decline in adult mortality levels in most African countries, except in north Africa, over the last few decades [7]. Paradoxical adult mortality trends are documented in findings from low- and middle-income countries. Decreasing adult mortality rates are evidenced in studies from South Korea, Bangladesh, Gambia, Ghana, and South Africa [9-13]. On the contrary, increasing mortality trends were documented elsewhere [14-16]. Despite the substantial improvements in adults' survival in some countries in sub-Saharan Africa, this region still has the heaviest burden of adult mortality worldwide [17]. In the northern part of Ethiopia, it is reported that non-communicable diseases account for 36.4% of the total deaths, followed by communicable diseases (34.9%) [18].
So far, in sub-Saharan African countries including Ethiopia, most studies have focused on child and maternal mortality, whereas little emphasis has been given to the causes of adult mortality [19]. Mortality estimation for this population is impeded by the lack of data and by discrepant estimates [7, 20]. Estimating the causes of adult mortality is more difficult in countries without strong vital statistics [21, 22]. This study aimed to identify the causes of adult mortality in the Arba Minch health and demographic surveillance system.

Study setting

The study used surveillance data from the Arba Minch Health and Demographic Surveillance Site (Arba Minch HDSS), which was established in 2009 in collaboration with Arba Minch University. Nine rural and one urban kebeles (administrative units below districts) were intentionally selected as the catchment area. Based on climatic zone, two of the kebeles are highlands, four lowlands, and the remaining three midlands. The main objective of the Arba Minch HDSS is to collect longitudinal data on birth, death, and migration in the selected kebeles of Arba Minch Zuria district [23]. The Arba Minch HDSS site follows up every individual within a defined catchment area twice a year with house-to-house visits. During the visits, if deaths have happened, field staff register them and collect information about the cause of death using the standard WHO verbal autopsy questionnaires [24]. The design of the surveillance is a population-based longitudinal follow-up, and this study used the data from September 2009 to September 2017 to identify the causes of adult mortality in the surveillance site.

Data collection procedures

The Arba Minch Health and Demographic Surveillance System uses a verbal autopsy (VA) technique to identify the cause of deaths. Verbal autopsy is a technique used to determine the cause of death by asking caregivers, friends, or family members about signs and symptoms exhibited by the deceased in the period before death. This is done using a standardized questionnaire that collects details on signs, symptoms, complaints, and any medical history or events [24]. The occurrence of death in the demographic surveillance area was notified by the local village-based data collectors and guides. The causes of death were ascertained based on an interview with next of kin or other caregivers, using a standardized questionnaire that draws information on signs, symptoms, medical history, and circumstances preceding death, after a 45-day mourning period. On the agreed day, the VA interviewer arrived at the residence of the deceased to interview the person who had been responsible for caring for the deceased. In the case of the absence of an appropriate interviewee, up to three attempts were made to conduct an interview. VA data collectors made sure that every section of the form was accurately completed before the form was submitted to field supervisors for scrutiny of the quality of the collected data. The completed VA questionnaires were given to two blinded physicians and reviewed independently. When disagreements in diagnosis arose, a third physician was assigned to review the case. The final diagnosis was assigned based on the agreement between the third physician and either of the two physicians. The case was considered 'undetermined' if all three physicians assigned a different diagnosis. Physicians labeled the death as 'unspecified cause of death (VA-99)' when it was difficult to classify based on the given information.
Two physicians, trained in VA diagnosis and coding procedures, independently assigned codes and titles for each cause of death as underlying, immediate, and contributing factors, using the information in the VA forms, based on the WHO International Classification of Diseases-10 and the VA code system [25].

Classification of causes of death

We used the following classifications of causes of death for this study, based on the international disease classification system [26, 27]. Communicable diseases (CDs): all infectious and parasitic diseases, including human immunodeficiency virus (HIV), tuberculosis, malaria, intestinal infection, infectious diseases of an unspecified cause, acute lower respiratory infections, meningitis, viral hepatitis, typhoid and paratyphoid fever, and rabies. Non-communicable diseases (NCDs): diseases of the circulatory system, neoplasms, renal disorders, respiratory disorders, gastrointestinal disorders, mental and nervous system disorders, and nutritional and endocrine disorders. External causes of death (ECs): accidental falls, accidental drowning and submersion, burns, intentional self-harm, and others not related to the above two categories. Pregnancy, childbirth and puerperium: all deaths related to pregnancy, childbirth, and the postpartum period, such as maternal deaths associated with abortion or childbirth-related hemorrhage.

Data analysis procedures

The data were entered into an Excel database system by the data clerks of the Arba Minch HDSS. Data cleaning and analysis were done using STATA 16 software and Microsoft Excel. Description of the adult deaths was made by various sociodemographic characteristics such as sex, residence, age category, marital status, occupation, educational status, and place of death. Both the specific and broad causes of death among adults aged 15 years and above were identified according to the verbal autopsy diagnoses. We excluded deaths with discordant verbal autopsy diagnoses, an unspecified cause of death, or no verbal autopsy diagnosis from the denominators in the calculation of the proportions of each cause of death. We also used a chi-squared test to compare some specific causes of death among different sociodemographic characteristics and to test the trends in the major causes of death. The analyzed data cover September 2009 to September 2017; for the purpose of analysis and comparison, we categorized the death years into eight equal one-year categories (Year 1 through Year 8).

Socio-demographic characteristics

A total of 943 adult deaths were recorded in the Arba Minch HDSS over the eight surveillance periods (from September 2009 to September 2017). Accordingly, 54% of them were females, and about 62% were married. The majority of the deceased persons were from rural residences (88.85%), aged between 55 and 74 years (32.8%), unable to read and write (96.78%) by educational status, and farmers (45%) by occupation. The median ± interquartile range age of the deceased was 54 ± 38. Regarding the place of death, 736 (78%) individuals died at home (Table 1). Almost half (49.6%) of the deaths occurred in lowland areas, followed by highland (38.2%) and midland (12.2%) areas of the surveillance site, according to climatic conditions.

Causes of adult deaths

Among the total deaths, specific and broad causes of death were identified from the Verbal Autopsy (VA) for 924 cases. The final VA code was missing for 19 cases and discordant for 79 cases.
In addition, the cause of death for 65 (8.8%) of the deaths was unspecified, based on the VA questionnaire (VA-99 code). A total of 75 VA codes for specific causes of death were recorded for the above deaths. The leading broad causes of death in the surveillance site were infectious and parasitic diseases (49.2%, 95% C.I. 45.7, 52.7), followed by external causes of death (13.5%, 95% C.I. 11.1, 15.9). There was a single death assumed to be attributable to misadventure to a patient during surgical and medical care (Fig. 1). Although the total number of deaths from all causes shows a decreasing trend, the percent share of deaths in each surveillance year showed an increasing trend for communicable diseases compared to the other broad causes of adult deaths (Fig. 2). Furthermore, the distribution of the broad causes of death was classified by the age category of the deceased. In the majority of the age categories, the communicable diseases group accounted for the majority of the deaths, followed by non-communicable diseases (Fig. 3). The distribution of the broad causes was also examined by the sex of the deceased. Accordingly, the communicable group of diseases tended to be the leading cause of death in both females and males; there was no statistically significant association between sex and the distribution of causes of death (Fig. 4).

Specific causes of death

Among the specific causes of death in the study area, tuberculosis was the commonest (16.8%, 95% C.I. 14.2, 19.4) single cause, followed by malaria (9.7%, 95% C.I. 7.6, 11.8), intestinal infectious diseases (9.6%, 95% C.I. 7.5, 11.8), and chronic liver disease (5.6%, 95% C.I. 4, 7.2). Among the commonest specific causes of adult deaths, diseases such as malaria, intestinal infectious diseases, chronic liver diseases, and intentional self-harm caused more deaths in males than in females. On the contrary, congestive heart failure, cardiovascular diseases, and typhoid and paratyphoid caused more female deaths than male deaths. A nearly similar distribution of deaths between males and females was observed for tuberculosis, HIV/AIDS, and gastric and duodenal ulcer (Table 2). Although the mortality trend from tuberculosis was declining from year to year, it showed a dramatic increase in 2017. On the other hand, the trend of adult mortality from malaria showed a steady decrement. Overall, the trend of adult deaths due to tuberculosis was decreasing (χ2 test for trend = 4.65, P-value = 0.03), and a nearly identical trend was observed for both males and females (Figs. 5, 6); a sketch of this trend test is given after the conclusion below.

Discussion

In this study, the major causes of adult mortality were identified. The trends of adult mortality in the demographic surveillance site were also examined. The majority of the deceased were females, from rural residences, aged between 55 and 74 years, unable to read and write, and farmers, from 2009 to 2017 in the Arba Minch Health and Demographic Surveillance Site. Almost similar proportions for the above socio-demographic characteristics, except for the age group, were observed in studies conducted in Ethiopia and other African settings [18, 28, 29]. This study showed that the leading broad causes of death in the surveillance site were infectious and parasitic diseases (49.2%), followed by external causes of death (13.5%), gastrointestinal disorders (10.6%), diseases of the circulatory system (7.3%), and neoplasms (5.8%). Similar findings documenting communicable diseases as the major causes of adult deaths have been reported in various African countries [10, 28, 30-33].
This finding is contrary to a study done in the northern part of Ethiopia, where non-communicable diseases were the major killers [34]. Such discrepancies may be due to differences in the socioeconomic status and lifestyles of the communities, which play an important role in the development of NCDs. Among the specific causes of death in the study area, tuberculosis was the commonest (16.8%) single cause, followed by malaria (9.7%), intestinal infectious diseases (9.6%), and chronic liver disease (5.6%). In line with this finding, tuberculosis is the single most common cause of death in different parts of Ethiopia and other sub-Saharan African countries [10, 18, 28, 30, 31]. This indicates that tuberculosis is still the single largest cause of mortality among Ethiopian adults. Although an increasing trend of tuberculosis incidence was documented elsewhere in Ethiopia [27, 35], adult deaths due to tuberculosis were steadily decreasing in this study for both males and females. This may be due to relatively better diagnosis and treatment of tuberculosis in health facilities currently in Ethiopia. Similarly, the trend of adult mortality from malaria showed a steady decrement in this study, although declining bed net utilization among pregnant mothers has been reported from a similar area [36]. The declining trend of deaths from malaria does not, however, indicate a low malaria prevalence in the area, as it is still the second most common cause of adult deaths.

Limitations

One of the major limitations of this study is that we only described the causes of adult mortality, failing to identify the possible risk factors. We also missed the verbal autopsy result for 19 cases. Another limitation is the validity of physician-certified verbal autopsy for identifying the causes of adult deaths, as it may yield biased results. Not including the data after 2017 in the analysis may also be taken as a limitation of this study. The pattern of the causes of death may be skewed toward communicable diseases, as the majority of the study participants are from rural areas.

Conclusion

Tuberculosis is still the leading cause of adult mortality in the rural part of southern Ethiopia according to VA. Although evidence points to a shift from communicable diseases to non-communicable diseases as the major causes of adult death in developing countries including Ethiopia, this study showed that communicable diseases alone account for about half of the adult deaths.
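As a methodological footnote, the χ2 test for trend reported in the results can be reproduced with a Cochran-Armitage-style statistic. The sketch below is minimal, and the per-year case counts and totals are hypothetical placeholders; the paper's raw per-year counts are not given here.

```python
import numpy as np
from scipy.stats import chi2

def chi2_test_for_trend(cases, totals, scores=None):
    """Cochran-Armitage chi-squared test for a linear trend in proportions
    across ordered groups (here, surveillance years)."""
    cases = np.asarray(cases, float)
    totals = np.asarray(totals, float)
    t = np.arange(len(cases), dtype=float) if scores is None else np.asarray(scores, float)
    N, R = totals.sum(), cases.sum()
    p = R / N                                  # pooled proportion
    T = np.sum(t * (cases - totals * p))       # trend score
    var = p * (1 - p) * (np.sum(totals * t**2) - np.sum(totals * t)**2 / N)
    stat = T**2 / var                          # ~ chi-squared with 1 df
    return stat, chi2.sf(stat, df=1)

# Hypothetical per-year TB deaths and total deaths (Year 1 through Year 8):
stat, pval = chi2_test_for_trend(cases=[25, 22, 20, 18, 15, 13, 10, 12],
                                 totals=[130, 125, 120, 118, 115, 112, 110, 113])
print(f"chi2 = {stat:.2f}, p = {pval:.3f}")
```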
2023-01-19T20:37:58.224Z
2023-01-18T00:00:00.000
{ "year": 2023, "sha1": "f0885662bc392f98800f9f3879d4a39176a5f671", "oa_license": "CCBY", "oa_url": "https://bmcinfectdis.biomedcentral.com/counter/pdf/10.1186/s12879-023-07988-5", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "eb70dc19ab4a4ab86759b301f25090292957ab85", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
245386794
pes2o/s2orc
v3-fos-license
Magnet-assisted electrochemical immunosensor based on surface-clean Pd-Au nanosheets for sensitive detection of SARS-CoV-2 spike protein

Tracking and monitoring of low concentrations of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) can effectively control asymptomatic transmission of the current coronavirus disease 2019 (COVID-19) in the early stages of infection. Here, we highlight an electrochemical immunosensor for sensitive detection of the SARS-CoV-2 antigen marker spike protein. Surface-clean Pd-Au nanosheets were synthesized as a substrate for efficient sensing and signal output. The morphology, chemical states and excellent, stable electrochemical properties of these surface-clean heterostructures have been studied. Functionalized superparamagnetic nanoparticles (MNPs) were introduced as sample separators and signal amplifiers. This biosensor was tested in phosphate buffered saline (PBS) and nasopharyngeal samples. The results showed that the sensor has a wide linear dynamic range (0.01 ng mL−1 to 1000 ng mL−1) with a low detection limit (0.0072 ng mL−1), achieving stable and sensitive detection of the spike protein. Therefore, this immunosensing method provides a promising electrochemical measurement tool and can furnish ideas for early screening and the reasonable optimization of detection methods for SARS-CoV-2.

Introduction

The coronavirus disease 2019 (COVID-19) outbreak has had an unprecedented, devastating impact on human life around the world [1, 2]. As of September 2021, the number of people infected with the virus had exceeded 220 million and continues to rise [3]. The SARS-CoV-2 virus, which can cause various degrees of upper respiratory disease, has by now been deeply analyzed [4-9]. This β-coronavirus has a single positive-strand RNA genome. It mainly encodes four structural proteins, including the small envelope (E) protein, nucleocapsid (N) protein, spike (S) protein and matrix (M) protein [10, 11]. Infection with SARS-CoV-2 can occur through a variety of contact or non-contact routes, and no obvious clinical symptoms appear in the early stage [12, 13]. Therefore, rapid virus screening can alleviate the fatal flaw of asymptomatic transmission that has appeared in this pandemic. At present, the detection of SARS-CoV-2 in the public health system is mainly through the PCR method targeting RNA and the ELISA method targeting antibodies [14-17]. However, the PCR method takes a long time, especially for such a large infection base, and the antibody detection technology of ELISA is not suitable for early screening of COVID-19 but is more useful in the recovery period after treatment [1]. Therefore, more and more attention is focused on antigens of the virus as potential detection targets [18]. Among a variety of predicted antigenic biomarkers, the S protein that mediates adhesion to host cells has been reported to be one of the most valuable detection targets [9, 19]. The S1 protein, with its receptor binding domain (RBD) located on the surface of the virus, generally recognizes and binds to host cells, contributing to the development of clinical diagnostic kits suitable for sensitive detection in the field [20]. Therefore, the design and development of practical methods for tracking and detecting the S1 protein in respiratory samples are extremely important.
Electrochemical sensors that can convert changes in active biological substrates into capturable electrical signals have predictable potential [21-23]. The electrochemical detection method exhibits the advantages of high sensitivity and simple processing, and it is very promising for constructing methods such as remotely controllable measurement or on-site instant detection driven by a smartphone [24, 25]. Layered noble metal nanocomposites have high conductivity, large surface area and flexibly adjustable surface chemistry [26]. Among them, thin-layer metal nanosheets formed by embedding or converting other metal ions can provide many possibilities for the desired design and function. However, most thin-layer multi-metal nanosheets require the introduction of a large amount of surfactants and stabilizers during the preparation process, accompanied by harsh synthesis conditions, which greatly block their active sites and reduce their inherent activity. So far, simple preparation methods for layered bimetallic nanosheets with surface-clean heterostructures are quite rare. As a sensing substrate, the electrocatalytic performance enhanced by the synergistic effect, together with high chemical stability, undoubtedly gives such materials important capabilities in the field of electrochemical measurement [27, 28]. In an antigen-based sensor, the introduction of a high charge concentration and appropriate electron mobility can promote the transfer of electrons from biomolecules to the sensor platform, enabling the detection of ultra-low levels of biological species [25]. Magnetic nanoparticles (MNPs) have advantages in virus detection, especially in this COVID-19 pandemic [29-31]. Magnetic labels can meet the requirements of rapid separation and enrichment at low cost [32]. Although the magnetic properties of MNPs have been extensively studied, the employment of MNPs is obviously more direct and effective in rapid and sensitive screening, such as for SARS-CoV-2. MNPs can reduce the amount of sample processing while simplifying the method, and can also exert their signal amplification function in electrochemical biosensing. Here, we designed a magnet-assisted electrochemical immunosensor based on Pd-Au nanosheets for sensitive detection of the S1 protein (Fig. 1). Pd-Au nanosheets are selected as the sensing and fixing substrate, with a surface-clean Pd precursor used as the sacrificial template. It is worth mentioning that the entire preparation was carried out at room temperature without any surfactants or stabilizers. The excellent conductivity and large surface area greatly enhanced the electron transport and signal output between the interface and the solution. Antibody-functionalized MNPs were introduced as sample separators and signal amplifiers. The results showed a wide linear range (0.01 ng mL−1 to 1000 ng mL−1) and an ultra-low detection limit (LOD, 0.0072 ng mL−1), achieving stable and sensitive detection of the S protein. This is essential for the initial monitoring of the virus. Meanwhile, this electrochemical immunosensor showed good selectivity, repeatability and stability. Overall, the successful casting of electrochemical biosensors combining Pd-Au nanosheets and functionalized MNPs provides a valuable analysis and application approach for antigen detection.

D-(+)-glucose, AgNO3 and glucose oxidase (Gox) were obtained from Sigma-Aldrich (Shanghai, China). H2O2 was obtained from Acros Organics (USA).
SARS-CoV-2 S1 recombinant protein (S1 protein), nucleocapsid protein (N protein), and anti-SARS-CoV S1 antibody (Ab) were purchased from Sangon Biotech Co., Ltd. (Shanghai, China). All aqueous solutions were made using ultrapure water (18.2 MΩ cm).

A scanning electron microscope (SEM, Hitachi SU8220, Japan) and a transmission electron microscope (TEM, Tecnai G2 F20 U-TWIN, USA) were used to characterize the morphology of the Pd-Au nanosheets. The surface modification of the MNPs was identified by Fourier transform infrared spectroscopy (FT-IR, Spectrum One, USA). Zeta potential measurements were carried out with a Zetasizer instrument (Zetasizer Nano ZS, England). The crystal structures of the nanomaterials were studied by X-ray powder diffraction (XRD, M18XHF, Japan). The chemical elements and structure of the nanosheets were analyzed by X-ray photoelectron spectroscopy (XPS, ESCALAB 250Xi, England). All electrochemical measurements were recorded with a Chenhua electrochemical workstation (CHI 660, China). The typical three-electrode system comprised a counter electrode (platinum electrode), a reference electrode (saturated calomel electrode), and the working electrode.

Preparation of Pd-Au nanosheets

The precursor Pd nanosheets were synthesized with modification according to a previously reported method [33]. First, 15.2 mg of Pd(acac)2 was ultrasonically dissolved in 10 mL of glacial acetic acid. Then, CO gas was introduced into the mixed solution at a flow rate of 250 mL min−1 and maintained for 30 min; after that, the reaction solution was sealed and allowed to stand for 24 h. During this process, the color of the solution changed from yellow to black, and Pd nanosheets gradually formed and settled slowly. Finally, the products were washed thoroughly with ethanol and re-dispersed in ultrapure water. Pd-Au composite nanosheets were synthesized by a simple galvanic replacement method without the use of additional reductants. First, 10 mL of ultrasonically dispersed Pd nanosheets was transferred to a flask and stirred vigorously, and then 4 mL of an aqueous 26.32 mM HAuCl4·3H2O solution was dropped into the flask at a flow rate of 33.3 mL min−1 using an autoinjector. The reaction system was then stirred for a further 1 h to allow the displacement reaction to proceed fully. Finally, the Pd-Au nanosheets were obtained by washing several times with ultrapure water.

Synthesis and modification of MNPs

The typical synthesis of MNPs was performed as previously reported [34]. In short, MNPs with a core-shell structure were prepared by a simple solvothermal method. First, Fe(NO3)3·9H2O, anhydrous sodium acetate, and AgNO3 were fully dissolved in ethylene glycol, and the mixture was then transferred to an autoclave and reacted at 210 °C for 4 h. After cooling, the product was magnetically separated and thoroughly washed with ethanol and ultrapure water. Then, citric acid was added to the prepared MNP solution to improve the stability of the MNP colloid, yielding stable citrate-coated MNPs. Details of MNP preparation and functionalization are given in the Supporting Information.

Modification of the immunosensing electrode

The glassy carbon electrode (GCE) was pretreated according to our previous procedure [35,36]. Next, 20 μL of 1 mg mL−1 Pd-Au nanosheets was cast on the surface of the GCE and dried at 40 °C. The modified electrode was immersed in 10 mM PBS solution and used for further experiments.
Then, 15 μg mL−1 S1 protein antibody was irradiated with UV light (300 mW/cm²) for 30 s to generate sulfhydryl groups for immobilization; this process relies on simple and mature photochemical immobilization technology [37,38]. After that, 10 μL of the S1 protein antibody was used to incubate the electrode chip for 1 h at 25 °C, and the chip was blocked with 0.1 mg mL−1 BSA solution to avoid possible nonspecific adsorption. At the same time, 50 μL of 1.0 μg mL−1 antibody-modified MNPs was mixed with the same volume of the sample to be analyzed and incubated in a 37 °C shaker for 40 min, then magnetically separated and re-dispersed in 50 μL PBS. Finally, the formed MNPs-Ab-S1 complexes were reacted and bound with the antibody on the interface for 40 min, then washed with PBS before measurement.

Sample assay and electrochemical measurements

Nasopharyngeal samples were collected using sterile swab sticks and immersed in 2.5 mL PBS. After standing for 1 h, the samples were centrifuged and diluted twice as a stock solution for the dilution of spike protein. Then, 1 mg mL−1 spike protein was serially diluted with the stock solution and stored at 4 °C for later use. All electrochemical measurements were carried out using a standard three-electrode system. The differential pulse voltammetry (DPV) and cyclic voltammetry (CV) programs were executed in 0.01 M PBS containing 10 mM H2O2. The CV potential scanning ranges were 0.8 V to −0.8 V, 0.6 V to −0.6 V, and 0.6 V to −0.3 V; the DPV scanning range was 0.2 V to −0.4 V; and the scan rate for these measurements was 50 mV s−1. Electrochemical impedance spectroscopy (EIS) was performed using a 0.005 V amplitude sinusoidal waveform from 0.1 Hz to 100,000 Hz in 5 mM [Fe(CN)6]3−/4− solution in 0.01 M PBS.

Characterization of Pd-Au nanosheets

The microstructure and surface morphology of the Pd-Au nanosheets were characterized by TEM and SEM. The TEM images revealed the thin-layer structure of the Pd-Au hybrid nanosheets (Fig. 2A, 2B and Fig. S1). In the high-resolution TEM image (Fig. 2C), the lattice spacing of Pd was 0.221 nm, while the gold nanostructure showed a larger lattice spacing of 0.235 nm, consistent with the face-centered cubic (111) structure of Au [34,39]. The SEM images (Fig. 2D, 2E and Fig. S2) described the overall morphology of the Pd-Au nanosheets, which were flower-like thin-layer structures with large surface areas. The EDS mapping images showed that Pd and Au coexist in the Pd-Au nanosheets (Fig. 2F and Fig. S3), with the Pd signal significantly higher than that of Au; the Pd nanosheet alone showed only the Pd element and a strong Si signal, attributable to the SiO2 film (Fig. S4). XPS survey spectra were used to further analyze the chemical elements and their states in the Pd-Au nanocomposite; Fig. 2 and Fig. S5 show the overall survey and individual element scans. In the Pd 3d spectrum, the peaks at 340.3 and 335.0 eV pertained to Pd 3d3/2 and Pd 3d5/2, respectively, while the peaks at 335.8 and 341.2 eV were attributed to Pd2+ (Fig. S5A). The Au peaks at 87.3 and 83.6 eV were located at Au 4f5/2 and Au 4f7/2 (Fig. S5B) [40]. The C 1s and O 1s peaks appeared at 284.8, 285.9, and 532.0 eV (Fig. S5C, S5D), respectively, originating from the CO and glacial acetic acid used in the simple synthesis [41]. The crystal structure of the synthesized material was further explored by XRD.
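As a quick sanity check on the galvanic-replacement feed described in the preparation section above, the short Python sketch below back-calculates the molar amounts of the Pd precursor and the Au(III) feed from the stated quantities. The molar mass of Pd(acac)2 is standard reference data rather than a value reported in the paper, and the resulting feed ratio should not be read as the final alloy composition.

```python
# Feed-stoichiometry sketch for the Pd-Au galvanic-replacement step.
# Quantities come from the synthesis description above; the molar mass
# of Pd(acac)2 is standard reference data, not a value from the paper.

M_PD_ACAC2 = 304.64            # g/mol, palladium(II) acetylacetonate

n_pd = 15.2e-3 / M_PD_ACAC2    # mol of Pd precursor (15.2 mg)
n_au = 26.32e-3 * 4.0e-3       # mol of Au(III): 4 mL of 26.32 mM HAuCl4

print(f"Pd precursor: {n_pd * 1e3:.4f} mmol")    # ~0.050 mmol
print(f"Au(III) feed: {n_au * 1e3:.4f} mmol")    # ~0.105 mmol
print(f"Au:Pd feed ratio ~ {n_au / n_pd:.2f}")   # ~2.1

# Galvanic replacement only partially converts Pd(0) (3 Pd per 2 Au(3+)
# reduced), so the feed ratio differs from the final composition; the
# EDS data in the paper indeed show Pd remaining the dominant element.
```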
Characterization and functionalization of MNPs

First, TEM was employed to study the morphology of the MNPs. Fig. 3A and 3B showed relatively uniform MNPs of about 170 nm, consisting of a dark Ag core and a dense Fe3O4 shell. The MNPs exhibited excellent uniformity and dispersion before the magnet was applied. Rapid separation of the MNPs was completed within 35 s after a magnetic field was applied, showing excellent magnetic properties (Fig. 3C). Then, FT-IR spectroscopy (Fig. 3D) and Zeta potential measurements (Fig. 3E) were used to follow the surface functionalization, with the Zeta potential shifting to −17.5 mV. These results indicated the successful modification of the Ab on the surface of the MNPs. Finally, the purified S1 protein and Ab were analyzed by Western blotting (Fig. S6), which showed that the S1 antigen bound strongly to the Ab.

Performance evaluation of Pd-Au nanosheets

Here we evaluated the ability of the Pd-Au nanosheets to serve as substrates in electrochemical sensing. First, the CV response was measured at different scan rates (10 mV s−1 to 100 mV s−1) in 5 mM [Fe(CN)6]3−/4− solution (Fig. 4A); the oxidation and reduction peak currents increased with increasing scan rate. There was a good linear relationship between the peak current and the square root of the scan rate (R²(ox) = 0.9988, R²(red) = 0.9986), as shown in Fig. 4B, indicating that the electron transfer of [Fe(CN)6]3−/4− was reversible and the reaction was a diffusion-controlled process [42,43]. The diffusion coefficient calculated by the Randles-Sevcik equation was 6.77 × 10−6 cm²/s, comparable to previous reports [44-46]. After that, the catalase-like activity of the Pd-Au nanosheets was identified by a simple visual method (Fig. S7): in PBS solution containing H2O2, a large number of visible bubbles were generated shortly after adding the Pd-Au nanosheets, showing the production of O2 and revealing that the Pd-Au nanosheets have excellent catalase-like activity. The formation of the precursor Pd nanosheets, the galvanic replacement of Pd by Au3+, and the catalytic decomposition of the substrate proceed as follows:

Pd(acac)2 + CO + H2O → Pd + CO2 + 2Hacac
3Pd + 2AuCl4− → 3Pd2+ + 2Au + 8Cl−
2H2O2 → 2H2O + O2 (Pd-Au catalyzed)

Then, the catalytic ability of the Pd-Au nanosheets toward the substrate H2O2 was tested by the i-t method (Fig. 4C). The modified electrode showed a stepwise current response at different concentrations of H2O2 (0-20 mM), proving its highly efficient catalytic ability for H2O2. In addition, the stability of the nanomaterials was studied (Fig. 4D): the voltammogram was stable after 30 CV scans, demonstrating that the Pd-Au nanosheet substrate has good stability.

Electrochemical properties of the modified electrode and feasibility

The electrochemical properties of the modified interface were studied by EIS and CV (Fig. 4E and 4F). Compared with the bare GCE, loading the Pd-Au nanosheets caused a decrease (ΔRct = 198 Ω) in the diameter of the Nyquist plot (the Rct value), an increase in the CV peak current, and a lower peak potential separation value (ΔEp), showing that this step greatly enhanced electron transport between the interface and the solution. This is due to the large surface area of the Pd-Au nanosheets, which increased the contact and penetration of [Fe(CN)6]3−/4−.
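Before turning to the stepwise interface modification, here is a brief sketch of the Randles-Sevcik estimate quoted above, showing how the diffusion coefficient is typically extracted from the slope of peak current versus the square root of scan rate. The electrode area and the fitted slope are illustrative placeholders (the paper does not report them), chosen so the output lands near the reported 6.77 × 10−6 cm²/s.

```python
import numpy as np

# Randles-Sevcik sketch at 25 degC:
#   i_p = 2.69e5 * n**1.5 * A * sqrt(D) * C * sqrt(v)
# with i_p in A, A in cm^2, D in cm^2/s, C in mol/cm^3, v in V/s.
# Electrode area and slope below are assumed, not reported values.

n = 1                 # electrons transferred for [Fe(CN)6]3-/4-
A = 0.0707            # cm^2, assuming a 3 mm-diameter GCE
C = 5e-6              # mol/cm^3 (5 mM redox probe)

# hypothetical slope of i_p vs sqrt(v) from the linear fit, A/(V/s)^0.5
slope = 2.47e-4

# invert the equation: D = (slope / (2.69e5 * n^1.5 * A * C))^2
D = (slope / (2.69e5 * n**1.5 * A * C)) ** 2
print(f"D ~= {D:.2e} cm^2/s")  # ~6.7e-6 with this placeholder slope,
                               # on the order of the reported 6.77e-6
```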
The modification of the antibody and of BSA as a blocking agent resulted in an increase in Rct (ΔRct = 107 Ω) and a decrease in the CV peak current, indicating that these steps hindered the reversible delivery of the redox probe [35,41]. These steps also exhibited lower reduction peaks, which can be attributed to the electronic coupling of the reduced probe [Fe(CN)6]3− with the negatively charged proteins (spike protein, BSA, and antibody) during cathodic scanning and to the greater polarization effect [47,48]. The binding of the MNP-Ab-S1 nanocomposite on the surface of the GCE further strengthened the above trends. These results proved the effectiveness of the construction method. In addition, CV and DPV programs were performed in PBS solution containing 10 mM H2O2 to verify the feasibility of this strategy (Fig. 4G and 4H). The results revealed that the modified Ab showed a reduced current response relative to the Pd-Au nanosheets, and the DPV curve collected the reduction current of H2O2 at −0.074 V. After the MNP-Ab was bound to the target S1 protein, the modified electrode measured reduced CV and DPV signals. This was because the MNPs are poor conductors and hindered the contact and reaction of the Pd-Au nanosheets with H2O2 in the solution. Furthermore, the surface morphology of the interface modified with MNPs-Ab-S1 was characterized by SEM, as shown in Fig. S8. Compared with the interface before modification (Fig. S8A-C), the interface after modification with MNPs-Ab-S1 showed the nanocomposite bound to the surface of the 3D Pd-Au nanosheet substrate (Fig. S8D-F), indicating the successful capture of the S protein by the MNPs and the effective connection of the nanocomposite to the interface Ab. In general, all test procedures confirmed the feasibility of this construction strategy.

Condition optimization

To obtain the best electrochemical detection performance, the experimental conditions were optimized, including the incubation concentration of Ab on the interface, the reaction time between MNP-Ab and the target, and the binding time between the MNP-Ab-S1 nanocomposite and the Ab on the interface. The details are shown in the Supporting Information (Fig. S9A-F).

Ultra-sensitive detection of S1 protein

First, comparative experiments between Pd-Au + Ab + S1-MNP and the blanks (bare GCE, Pd-Au, Pd-Au + Ab) were carried out to detect the S1 protein (Fig. S10); the results indicated that, compared with the blanks, Pd-Au + Ab + S1-MNP showed an observably suppressed DPV current, which proved the importance of this construction strategy. Then, the performance of the magnet-assisted electrochemical immunosensor was analyzed by the DPV program under optimal conditions. Different concentrations of S1 protein in PBS were tested, and each test was performed 3 times. As shown in Fig. 5A, as the target concentration increased, the DPV response at −0.074 V gradually decreased. Fig. 5B shows the relationship between S1 protein concentration and DPV peak current; the inset shows that the current signal gradually decreased with the logarithm of the target concentration from 0.01 ng mL−1 to 1000 ng mL−1, exhibiting an excellent linear relationship over a wide range (R² = 0.9916). The calibration curve was y = 46.066 − 11.439 lg x, where y is the peak DPV current and x is the target antigen concentration. The LOD of 0.0072 ng mL−1 was calculated at a signal-to-noise ratio of 3 (S/N = 3), based on blank measurements.
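To make the reported calibration concrete, the small Python sketch below inverts the curve y = 46.066 − 11.439 lg x to recover a concentration from a measured DPV peak current, together with the standard-addition recovery check used later for the nasopharyngeal samples. The measured current in the example is a hypothetical value, not one taken from the paper.

```python
# Sketch built around the reported calibration, y = 46.066 - 11.439*lg(x),
# where y is the DPV peak current and x is the S1 concentration in ng/mL
# (valid over the reported 0.01-1000 ng/mL range). Example inputs are
# hypothetical placeholders.

SLOPE, INTERCEPT = -11.439, 46.066

def conc_from_current(y):
    """Invert y = INTERCEPT + SLOPE*log10(x) to get x in ng/mL."""
    return 10 ** ((y - INTERCEPT) / SLOPE)

def recovery_percent(measured, spiked):
    """Standard-addition recovery, as used for the spiked samples."""
    return 100.0 * measured / spiked

y_measured = 34.6                        # hypothetical DPV peak current
x = conc_from_current(y_measured)
print(f"estimated S1: {x:.2f} ng/mL")    # ~10 ng/mL for this current

print(f"recovery: {recovery_percent(x, 10.0):.1f}%")  # vs a 10 ng/mL spike
```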
In addition, various methods for detecting the S protein and related proteins were summarized and compared (Table S1); the results demonstrated that this electrochemical sensor is superior to most of them, owing to the high catalytic performance of the Pd-Au nanosheets and the signal amplification effect of the MNPs. These results confirmed that this detection strategy has an excellent ability to detect the S protein and can achieve ultra-sensitive detection of the target.

Repeatability and storage time stability

To evaluate the repeatability and stability of this method, multiple prepared immunosensors were stored in a dry environment at 4 °C for different periods of time, and the DPV responses were then tested under the same conditions (Fig. 6). The results showed that the current signals of different GCEs did not differ significantly (Fig. 6A). After 10 days of storage, the electrochemical response of the modified electrode was close to its original current signal (Fig. 6B). We noticed some variation in the mean values within 10 days. Yet, compared with the existing literature, this magnet-assisted electrochemical immunosensor based on Pd-Au nanosheets offers high sensitivity, simplicity, and a wide detection range [49-51].

Evaluation of selectivity and detection in nasopharyngeal samples

We evaluated the selectivity of this detection method using 10 different proteins or small molecules (Fig. 7A). The results revealed that, except for the S1 protein, the other proteins or molecules produced an almost constant DPV signal, whereas incubating the target protein greatly reduced the current (Fig. 7B). It should be pointed out that the concentration of the non-target substances was set to 5 times that of the target. These results proved that this strategy has good selectivity. To test possible applications of the proposed immunoassay method, SARS-CoV-2 S1 protein was detected in nasopharyngeal samples collected from healthy people by the standard addition method. Fig. 7C shows the DPV responses caused by incubation with 5 different concentrations of S1 protein (n = 3), showing a gradual decrease in current signal strongly dependent on the concentration. The determination results in the complex samples were compared with those in PBS, and similar results were obtained (Fig. 7D). The calculated recovery rates were 84.545%-103.520%, and the RSD was between 1.833% and 8.406% (Table S2), indicating the feasibility of the proposed sensor for practical applications. It will be important to further validate the accuracy of the proposed sensor on clinical samples in the future.

Conclusion

In general, an electrochemical immunosensor was fabricated to detect the SARS-CoV-2 S1 protein, achieving an LOD as low as 0.0072 ng mL−1 and a wide detection range, which was attributed to the high conductivity and large surface area of the surface-clean heterostructured Pd-Au nanosheets and to the signal amplification and magnetic separation functions of the MNPs. The entire detection can be completed within 2 h. The measurement results in PBS and nasopharyngeal samples proved the excellent performance of this biosensor, which can be used for early diagnosis of the virus antigen. Our work presents a simple and efficient strategy for building electrochemical immunosensors based on metallic nanosheets for the sensitive detection of the spike protein of the SARS-CoV-2 antigen.
This strategy provides meaningful ideas for the development of more rapid, economical, and mature unconventional detection methods to help medical staff predict the degree of virus infection in patients.

Declaration of Competing Interest

The authors declare the following financial interests/personal relationships which may be considered as potential competing interests.
2021-12-23T14:08:17.622Z
2021-12-01T00:00:00.000
{ "year": 2021, "sha1": "6622829b9cea3ea373be05797b0c011feaf779f3", "oa_license": null, "oa_url": "https://doi.org/10.1016/j.electacta.2021.139766", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "51da62735ae216fdaa53e9c55a993af1f4b4d611", "s2fieldsofstudy": [ "Medicine", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
245240891
pes2o/s2orc
v3-fos-license
MicroRNA-573 inhibits cell proliferation, migration, and invasion and is downregulated by PICSAR in cutaneous squamous cell carcinoma

The incidence of cutaneous squamous cell carcinoma (cSCC) has been increasing in recent years. Meanwhile, microRNAs have been found to play vital roles in various cancers, including cSCC. This study aimed to investigate the expression of microRNA-573 (miR-573) in cSCC, its relationship with the long non-coding RNA PICSAR, and its biological role. The relationship between PICSAR and miR-573 was confirmed by dual-luciferase reporter assay and Pearson's correlation coefficient analysis. The levels of PICSAR and miR-573 were measured using quantitative real-time polymerase chain reaction. The Cell Counting Kit-8 (CCK-8) assay was used to evaluate cSCC cell proliferation. The migration and invasion abilities of cSCC cells were evaluated by Transwell assay. PICSAR expression was increased and miR-573 expression was decreased in tumor tissues and cSCC cell lines. PICSAR and miR-573 can bind directly, and miR-573 expression was downregulated by PICSAR in cSCC. Overexpression of miR-573 significantly inhibited the proliferation, migration, and invasion abilities of A431 and SCC13 cells. In addition, miR-573 overexpression reversed the promotion effects of PICSAR overexpression on cSCC cell proliferation, migration, and invasion. In conclusion, our findings indicated that miR-573 expression was decreased in tumor tissues and cSCC cells and was downregulated by PICSAR in cSCC. Additionally, miR-573 overexpression inhibited cSCC cell proliferation, migration, and invasion, and reversed the promotion effects of PICSAR overexpression on cSCC cell biological functions. Thus, miR-573 might function as a tumor suppressor and might be involved in the regulatory effects of PICSAR on tumorigenesis in cSCC.

INTRODUCTION

Cutaneous squamous cell carcinoma (cSCC) is the second most common cancer in humans, with an increasing incidence [1]. Owing to sunlight or trauma, exposure to chemical agents, chronic wounds, or papillomavirus infection, pre-neoplastic lesions arise in the skin, causing abnormal proliferation of keratinocytes and eventually leading to cSCC [2]. Although the clinical behavior of cSCC is generally benign, it may undergo local invasion and metastasis [3]. Squamous cell carcinoma itself is an aggressive cancer that is prone to lymph node and distant metastasis; once metastatic, it is difficult to treat and has a poor prognosis. Thus, although studies have found that the overall survival of patients with cSCC is extremely high, patients with advanced cSCC continue to have high morbidity and mortality [4]. It is therefore urgent to search for new diagnostic biomarkers and thereby improve cSCC treatment outcomes.

Non-coding RNAs, especially long non-coding RNAs (lncRNAs) and microRNAs (miRNAs), have been found to be closely associated with the occurrence and development of cancers [5]. LncRNAs are defined as non-coding RNAs (ncRNAs) over 200 nucleotides in length and can regulate gene expression at the epigenetic, transcriptional, and post-transcriptional levels [6]. In addition, some lncRNAs have been increasingly recognized to be involved in the progression of cSCC, such as lncRNA TINCR [PMID 30993776] and lncRNA SCARNA2 [7]. The important role of the lncRNA PICSAR in cSCC has been reported by previous studies. For example, Piipponen et al.
have reported that the lncRNA LINC00162, also named p38 inhibited cSCC associated lincRNA (PICSAR), may promote cSCC tumor progression by regulating ERK1/2 signaling pathway activity [8]. In addition, PICSAR could regulate the function of cSCC cells [9]. Notably, a recent study also reported that PICSAR could promote cSCC progression by regulating the miR-125b/YAP1 signaling axis [10].

It is known that lncRNAs may function as competing endogenous RNAs (ceRNAs) to regulate the biological functions or expression of miRNAs. MiRNAs are small ncRNAs that can regulate gene expression by binding to the 3' untranslated region (3'UTR) of target mRNAs to suppress target mRNA translation or promote mRNA degradation [11]. Besides, some miRNAs have been reported to be involved in the progression of cSCC, such as miR-221 [12] and miR-497 [13]. In this study, the complementary sequence of miR-573 on the sequence of PICSAR was predicted by bioinformatics. Additionally, miR-573 was found to act as a tumor suppressor gene in some tumors and can inhibit tumor progression of melanoma [14]. Thus, we speculated that miR-573 may be associated with PICSAR and may play a role in cSCC. However, the relationship between PICSAR and miR-573 has not been reported previously, and the role of miR-573 in cSCC remains unknown. Therefore, this study attempted to analyze the expression of miR-573 in tumor tissues of cSCC patients and in cSCC cells, the relationship between miR-573 and PICSAR, as well as the effects of miR-573 expression on the proliferation, migration, and invasion of cSCC cells.

MATERIALS AND METHODS

Patients and sample collection

A total of 96 cSCC patients admitted to Weifang People's Hospital from 2014 to 2019 were recruited, all of whom had not received any anti-tumor treatment before sample collection. The inclusion criteria were: (1) patients with comprehensive case data; (2) patients without other lesions or manifestations involving the liver, nasopharynx, or heart; (3) patients in whom no basal cell carcinoma was found. The tumor tissues of cSCC patients were collected, and the adjacent normal tissues (1-2 cm from the edge of the tumor tissues) were also collected. All the tissues were promptly frozen in liquid nitrogen. This study was approved by the Ethics Committee of Weifang People's Hospital, and all patients signed informed consent.

Cell culture and transfection

Four cSCC cell lines (A431, HSC-5, SCC13, and SCL-1) and a human keratinocyte cell line (HaCaT) were purchased from the Shanghai Cell Bank of the Chinese Academy of Sciences (Shanghai, China). The cells were cultured in Dulbecco's modified Eagle's medium (DMEM; Gibco; Thermo Fisher Scientific, Inc.) supplemented with 10% fetal bovine serum (FBS; Gibco; Thermo Fisher Scientific, Inc.) and maintained in a 5% CO2 atmosphere at 37°C. The pcDNA3.1-PICSAR, pcDNA3.1, miR-573 mimic, and mimic negative control (NC) were purchased from GenePharma (Shanghai, China). The above vectors were transfected into cSCC cells using Lipofectamine 3000 transfection reagent (Invitrogen, CA, USA) according to the manufacturer's protocols. Cells were collected after transfection for 48 hours and used for the following analyses.

RNA extraction and quantitative real-time polymerase chain reaction (qRT-PCR)

TRIzol reagent (Invitrogen; Thermo Fisher Scientific, Inc.) was used to extract total RNA, including miRNA, from tissues and cSCC cells. A NanoDrop 2000 (Thermo Fisher Scientific, Inc.) was used to evaluate the purity and concentration of the extracted RNA. Single-stranded cDNA was then synthesized from the obtained RNA using a PrimeScript RT reagent kit (Takara Bio, Inc.) according to the manufacturer's protocols.

Dual-luciferase reporter assay

At first, the binding sequence details of PICSAR and miR-573 were predicted using starBase v2.0 (http://starbase.sysu.edu.cn/) [16]. To confirm whether there was a direct interaction between PICSAR and miR-573, a luciferase reporter assay was performed. PICSAR wild-type (PICSAR-WT) and mutant-type (PICSAR-MUT) sequences were cloned into the reporter vector pGL3 (Promega, Madison, WI, USA). The integrated vectors were then co-transfected with miR-573 mimic or mimic NC into the cSCC cell lines A431 and SCC13 using Lipofectamine 3000 (Invitrogen, CA, USA). Relative luciferase activity was analyzed with a Dual-Luciferase Reporter assay system (Promega, Madison, WI, USA) after 48 hours of transfection at 37°C. Firefly luciferase activity was normalized to Renilla luciferase activity.

CCK-8 assay

After cell transfection, cell proliferation was analyzed using the CCK-8 assay. The stably transfected A431 and SCC13 cells were seeded into 96-well plates at a density of 5 × 10³ cells/well and then cultured in a humidified incubator at 37°C. After the cells were incubated for 0, 24, 48, and 72 hours, the CCK-8 reagent was added and the cells were further incubated for 2 hours. The optical density of the samples at 450 nm was measured using a micro-plate analyzer (Bio-Rad Laboratories, Inc.) to reflect cell proliferation.

Transwell assay

Transwell chambers (Corning, Inc.) were used to evaluate the migration and invasion abilities of A431 and SCC13 cells. Chambers not pre-coated with Matrigel (Corning, Inc.) were used for the migration assay. The upper chambers with serum-free DMEM medium were seeded with A431 and SCC13 cells (cell density of 5 × 10⁵ cells/well). The lower chambers were filled with DMEM supplemented with 10% FBS. After incubation for 24 hours at 37°C, the cells remaining on the upper membrane surface were removed; the cells in the lower chambers were fixed with 4% paraformaldehyde for 15 minutes and then stained with 0.1% crystal violet for 20 minutes. The number of cells in five randomly selected fields was counted under an inverted light microscope (Olympus Corporation) to analyze the migration ability of the cells. For the analysis of cell invasion ability, Transwell chambers pre-coated with Matrigel were used, and the remaining procedures were the same as for the migration analysis.

Ethics approval and consent to participate

The experimental procedures were all in accordance with the guidelines of the Ethics Committee of Weifang People's Hospital and were approved by the Ethics Committee of Weifang People's Hospital. Signed written informed consent was obtained from each patient.
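A note on the qRT-PCR readout above: the extracted text does not state the quantification formula, so the sketch below assumes the standard 2^(−ΔΔCt) relative-quantification method. The reference gene and all Ct values are hypothetical placeholders chosen only to illustrate the calculation.

```python
# Relative-expression sketch assuming the standard 2^(-ddCt) method
# (an assumption; the quantification formula is not stated in the text).
# Reference gene and Ct values below are hypothetical.

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """2^(-ddCt): target normalized to a reference gene and a control sample."""
    d_ct_sample = ct_target - ct_ref            # dCt in the sample of interest
    d_ct_control = ct_target_ctrl - ct_ref_ctrl # dCt in the control sample
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# e.g., miR-573 in tumor tissue vs. adjacent normal tissue,
# normalized to a hypothetical small-RNA reference (e.g., U6)
fold = relative_expression(ct_target=28.5, ct_ref=20.0,          # tumor
                           ct_target_ctrl=26.5, ct_ref_ctrl=20.0)  # normal
print(f"miR-573 fold change ~= {fold:.2f}")  # 0.25 -> downregulated in tumor
```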
Statistical analysis

All experiments were repeated at least three times, and the data are presented as the mean ± SD. All statistical analyses were performed using SPSS 21.0 software (SPSS, Inc., Chicago, USA) and GraphPad Prism 7.0 software (GraphPad Software, Inc., USA). Differences between groups were assessed using Student's t-test, the Chi-square test, or one-way ANOVA. The correlation between PICSAR levels and miR-573 levels was assessed using Pearson's correlation coefficient. p < 0.05 was considered statistically significant.

RESULTS

Relationship between PICSAR and miR-573 in patients with cSCC

The binding sequences between PICSAR and miR-573 are shown in Figure 1A. According to the luciferase reporter assay results (Figure 1B and C), the relative luciferase activity in the PICSAR-WT group was inhibited by miR-573 overexpression (p < 0.05), whereas no changes were observed in luciferase activity in the PICSAR-MUT group (p > 0.05). The results of the dual-luciferase reporter assay indicated the direct binding of miR-573 to PICSAR. The expression levels of PICSAR and miR-573 in the tissue samples were then analyzed. The expression of PICSAR was significantly increased and the expression of miR-573 was significantly decreased in tumor tissues compared with normal controls (Figure 1D and E, all p < 0.001). As presented in Figure 1F, a negative correlation was observed between PICSAR levels and miR-573 levels (r = −0.551, p < 0.001).

Association of PICSAR and miR-573 with the clinicopathological characteristics of cSCC patients

The Chi-square test was used to analyze the association of PICSAR and miR-573 expression with the clinical characteristics of cSCC patients. The median expression values of PICSAR (1.9) and miR-573 (0.5) were used as cutoff values to classify the patients into low and high PICSAR, and low and high miR-573 expression groups, respectively. The results presented in Table 1 indicated that PICSAR and miR-573 expression were both significantly correlated with the tumor size, tumor grade, and TNM stage of cSCC patients (all p < 0.05). Meanwhile, the groups with high PICSAR levels or low miR-573 levels contained more patients with tumors larger than 5 cm in diameter, poor tumor grade, and advanced TNM stage than the groups with low PICSAR levels or high miR-573 levels. Therefore, PICSAR and miR-573 expression might be involved in the progression of cSCC.

Expression of PICSAR and miR-573 in cSCC cell lines

The experimental results shown in Figure 2 were obtained from three biological replicates. The expression levels of PICSAR and miR-573 were detected in the four cSCC cell lines and the human keratinocyte cell line HaCaT. Consistent with the results in tumor tissues, the PICSAR expression level was increased (Figure 2A) and the miR-573 expression level was decreased (Figure 2B) in the cSCC cell lines compared with the HaCaT cell line (all p < 0.01). We selected the A431 and SCC13 cell lines for the subsequent experiments. In the A431 and SCC13 cells, the expression of PICSAR was upregulated by pcDNA3.1-PICSAR (Figure 2C, all p < 0.001). As shown in Figure 2D, the expression level of miR-573 was inhibited by PICSAR overexpression in the A431 and SCC13 cells (all p < 0.001), again indicating that PICSAR directly regulates miR-573.
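As a small illustration of the correlation analysis reported above (r = −0.551 across the 96 paired tissues), the following sketch runs Pearson's test with SciPy on synthetic stand-in data; the arrays are invented for demonstration and are not the study's measurements.

```python
import numpy as np
from scipy import stats

# Pearson-correlation sketch for paired PICSAR and miR-573 levels.
# The 96 values per array are synthetic placeholders constructed to
# show a negative association, mirroring the reported trend.

rng = np.random.default_rng(0)
picsar = rng.lognormal(mean=0.6, sigma=0.4, size=96)            # hypothetical
mir573 = np.clip(1.2 - 0.5 * picsar + rng.normal(0, 0.2, 96),   # hypothetical
                 0.05, None)

r, p = stats.pearsonr(picsar, mir573)
print(f"r = {r:.3f}, p = {p:.3g}")  # negative r, as in the reported analysis
```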
MiR-573 overexpression inhibits cSCC cell proliferation, migration, and invasion

The expression level of miR-573 was upregulated by the miR-573 mimic in A431 cells (Figure 3A).

MiR-573 overexpression reverses the effects of PICSAR on cSCC cell proliferation, migration, and invasion

The expression level of miR-573, which was inhibited by pcDNA3.1-PICSAR, was upregulated by the miR-573 mimic in A431 cells (Figure 4A, all p < 0.001) and SCC13 cells (Figure 4B, all p < 0.001). PICSAR overexpression promoted the proliferation of A431 and SCC13 cells, and this effect was reversed by miR-573 overexpression (Figure 4C and D, all p < 0.05). PICSAR overexpression promoted the migration of A431 and SCC13 cells, which was likewise reversed by miR-573 overexpression (Figure 4E and F, all p < 0.001). Consistently, miR-573 overexpression also reversed the promotion effects of PICSAR overexpression on the invasion of A431 cells (Figure 4G, all p < 0.001) and SCC13 cells (Figure 4H, all p < 0.001).

DISCUSSION

Accumulating evidence has indicated that miRNAs play an important role in the occurrence and development of tumors and function in signal transduction and the regulation of gene expression in cells [17]. In addition, some studies have shown that miRNAs play oncogenic or suppressive roles in human tumor progression. For instance, Liang et al. showed decreased miR-187 expression in cervical cancer
It has been known that PICSAR played important role in the progression of cSCC [8].Moreover, PICSAR has been demonstrated to promote cSCC progression by regulating miR-125b/YAP1 signaling axis [10].In addition, the binding sequence details of miR-573 and PICSAR were predicted by bioinformatics.Moreover, miR-573 was found to inhibit tumor progression of melanoma [14].Therefore, we suspected that miR-573 expression might be related to the cSCC and was regulated by PICSAR.In this study, we firstly confirmed the direct binding of miR-573 to PICSAR by dual-luciferase reporter assay.Then, we found that miR-573 expression was significantly decreased and PICSAR expression was significantly increased in tumor tissues and cSCC cells.In addition, the expression of miR-573 was inhibited by PICSAR.Moreover, MiR-573 has been also found to be related to other types of diseases.For instance, a study by Wang et al. revealed that miR-573 played a protective role in the pathological process of rheumatoid arthritis (RA), and suggested that miR-573 might be a potential target in the treatment of RA [23].MiR-573 expression, which was found to be significantly decreased in metastatic tissues, modulated epithelial-mesenchymal transition and metastasis of prostate cancer cells [24].A study by Danza et al. revealed that miR-573 was downregulated in BRCA 1/2-related breast cancer, and was involved in BRCArelated breast cancer angiogenesis [25].Besides, miR-573 was also found to be regulated by other lncRNAs, such as lncRNA SNHG1 [26] and lncRNA TTN-AS1 [27].It is believed that the mechanisms underlying the transformation of normal keratinocytes involving the dysregulation of various key genes in cancers, and lncRNAs and miRNAs have been demonstrated as important regulators of the expression of the key genes in cancers.Besides, for the potential function of PICSAR and miR-573 in normal keratinocytes, it also has great significance to indicate the relationship between cSCC development and PICSAR and miR-573.Therefore, we speculated that miR-573 might be involved in the progression of cSCC and was downregulated by PICSAR in cSCC.This study extends our understanding of miR-573' s functional role in cSCC.The functional role of miR-573 has previously been investigated in a variety of cancers.For example, decreased miR-573 expression was observed in pancreatic cancer cell lines, which enhanced pancreatic cancer cell proliferation, migration, and invasion via targeting TSPAN1.[28].A study by Hu et al. 
showed that miR-573 increased the invasion, migration, and proliferation of hepatoma cells in hepatocellular carcinoma [29]. The expression of miR-573 was decreased in degenerative nucleus pulposus cells and promoted the viability of nucleus pulposus cells [30]. The present study conducted cell experiments to investigate the functional role of miR-573 in cSCC progression. Following transfection, the expression of miR-573 was upregulated by the miR-573 mimic, and PICSAR expression was upregulated by pcDNA3.1-PICSAR. The results of the cell experiments indicated that miR-573 overexpression inhibited cSCC cell proliferation, migration, and invasion, suggesting that miR-573 might play a suppressive role in cSCC progression. In addition, the promotion effects of PICSAR on cSCC cell biological functions have been reported [8], and studies have found that some miRNAs mediate the promotion effects of PICSAR on cell biological functions in other diseases [31,32], including cSCC [10]. This study revealed that miR-573 overexpression reversed the promotion effects of PICSAR on cSCC cell proliferation, migration, and invasion. In addition, miR-573 has been found to reverse the effects of other lncRNAs on cell biological functions, such as lncRNA FLVCR1-AS1 [33] and lncRNA TTN-AS1 [27]. Therefore, miR-573 might function as a tumor suppressor in cSCC progression and is inhibited by PICSAR in cSCC.

There were some limitations in this study. First, the sample size was small, and future studies with a larger research cohort are needed. Besides, this study only discussed the potential target genes of miR-573 and did not explore the exact target of miR-573 in cSCC. We thus performed an additional in silico analysis, using the TargetScan database, to identify potential key targets of miR-573. Among them, previous studies have reported that EGFR can promote cell proliferation and survival [34], and IL8 and CLEC2A are related to cSCC [35,36]. However, whether miR-573 could regulate EGFR, IL8, and/or CLEC2A in cSCC remains unclear, and whether miR-573 could regulate cSCC cell biological functions through targeting EGFR, IL8, and/or CLEC2A also remains uncertain. In addition, the targets of miR-573 proposed in the discussion (such as TSPAN1, E2F3, and Bax) have not been confirmed in cSCC. Thus, we will assess the correlation of miR-573 with the above targets, and assess their expression in both in vitro cSCC models and human cSCC tissue samples, in further research.

CONCLUSION

In conclusion, the present study indicated that the expression level of miR-573 was decreased in tumor tissues of cSCC patients and in cSCC cells, and was downregulated by PICSAR in cSCC. In addition, miR-573 overexpression inhibited the proliferation, migration, and invasion of cSCC cells, and reversed the promotion effects of PICSAR overexpression on cSCC cell biological functions. Overall, this study reveals that miR-573 might function as a tumor suppressor and might be involved in the biological function of PICSAR in regulating the progression of cSCC. The potential PICSAR/miR-573 axis provides novel insight into the pathogenesis of cSCC and may help to develop tumor therapy targets in the future.
FIGURE 1. Relationship between PICSAR and miR-573 in patients with cutaneous squamous cell carcinoma. (A) The binding sequences between PICSAR and miR-573. (B and C) The relative luciferase activity in the PICSAR-WT group was inhibited by miR-573 overexpression, whereas no changes were observed in luciferase activity in the PICSAR-MUT group. (D and E) The expression of PICSAR (D) and miR-573 (E) in tumor tissues and normal control tissues. (F) The relative level of miR-573 was negatively correlated with the relative level of PICSAR (r = −0.551, p < 0.001). (*p < 0.05, ***p < 0.001 vs. untreated or normal controls).

FIGURE 2. Expression levels of PICSAR and miR-573 in cutaneous squamous cell carcinoma cell lines. The expression levels of PICSAR (A) and miR-573 (B) were detected in the A431, HSC-5, SCC13, and SCL-1 cell lines as well as in the human keratinocyte cell line HaCaT. (C) The expression of PICSAR was upregulated by pcDNA3.1-PICSAR in A431 and SCC13 cells. (D) The expression level of miR-573 was inhibited by PICSAR overexpression in A431 and SCC13 cells. (**p < 0.01, ***p < 0.001 vs. HaCaT or Mock).

TABLE 1. Association of PICSAR and miR-573 with the clinicopathological characteristics of cSCC patients. PICSAR and miR-573 expression were both correlated with the tumor size, tumor grade, and TNM stage of cSCC patients.
2021-12-16T06:23:27.178Z
2021-12-06T00:00:00.000
{ "year": 2021, "sha1": "650f02898ba0f3bcdbfc0fcdc1476313feabe069", "oa_license": "CCBY", "oa_url": "https://doi.org/10.17305/bjbms.2021.6301", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "e4962e87fc409c67dd4bd44b1f9dc30286b4b744", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
269541124
pes2o/s2orc
v3-fos-license
Association between Plasma Interleukin-27 Levels and Cardiovascular Events in Patients Undergoing Coronary Angiography

Atherosclerotic diseases, including coronary heart disease (CHD), are chronic inflammatory conditions, and an imbalance between pro-inflammatory and anti-inflammatory cytokines plays a role in the process of atherosclerosis. Interleukin (IL)-27, one of the IL-12 family members, is recognized to play a dual role in regulating immune responses, with both pro-inflammatory and anti-inflammatory properties. IL-27 is secreted from monocytes, T cells, and endothelial cells, and its expression is upregulated in atherosclerotic plaques. We previously reported that no significant difference was observed in plasma IL-27 levels between patients with stable CHD and those without it. However, the prognostic value of IL-27 levels has not been fully elucidated. We studied the relation of plasma IL-27 levels to cardiovascular events in 402 patients undergoing elective coronary angiography for suspected CHD. We defined cardiovascular events as cardiovascular death, myocardial infarction, unstable angina, stroke, or coronary revascularization. Of the 402 study patients, CHD was present in 209 (52%). Plasma IL-27 levels were not markedly different between patients with CHD and those without it (median 0.23 vs. 0.23 ng/mL). During a follow-up of 7.6 ± 4.5 years, cardiovascular events were observed in 70 patients (17%). In comparison to the 332 patients with no event, the 70 patients who had cardiovascular events showed significantly higher IL-27 levels (median 0.29 vs. 0.22 ng/mL) and more frequently had an IL-27 level of >0.25 ng/mL (59% vs. 40%) (p < 0.01). The Kaplan-Meier analysis demonstrated a lower event-free survival rate in patients with an IL-27 level >0.25 ng/mL than in those with an IL-27 level ≤0.25 ng/mL (p < 0.02). The multivariate Cox proportional hazards regression analysis showed that IL-27 level (>0.25 ng/mL) was a significant predictor of cardiovascular events (hazard ratio: 1.82; 95% CI: 1.13-2.93, p < 0.02), independent of CHD. Thus, high IL-27 levels in plasma were related to an increased risk of further cardiovascular events in patients who underwent elective coronary angiography.

Introduction

Atherosclerotic diseases, including coronary heart disease (CHD), are chronic inflammatory conditions, and atherosclerotic plaques develop due to an imbalance between pro-inflammatory and anti-inflammatory cytokines [1]. T-helper (Th) cells differentiate into Th1 and Th2 cells. Th1 cells have pro-inflammatory effects, secreting cytokines such as interferon (IFN)-γ and interleukin (IL)-2, whereas Th2 cells induce anti-inflammatory responses via the secretion of the cytokines IL-4, IL-10, and IL-13. Furthermore, regulatory T (Treg) cells are a subtype of CD4+ T cells that regulate the effects of Th1 and Th2 cells. The pathogenesis of inflammation in atherosclerosis is attributable to the imbalance between pro-inflammatory Th1 and anti-inflammatory Th2 cytokines and to impaired Treg responses [2,3].

In 2002, Pflanz et al.
[4] first identified IL-27 as one of the IL-12 family members: a heterodimeric cytokine consisting of the p28 subunit (an IL-6/IL-12 p35 homologue) and the Epstein-Barr virus-induced gene 3 (EBI3) subunit (an IL-12 p40 homologue originally discovered to be secreted from Epstein-Barr virus-transformed B cells). IL-27 is mainly secreted from monocytes, T cells, endothelial cells, and dendritic cells [2,4]. Moreover, IL-27 binds to the IL-27 receptor (IL-27R), consisting of a ligand-binding chain, the IL-27Rα (WSX-1) subunit, which is unique to IL-27 binding, and an additional signal-transducing chain, the gp130 subunit, which is shared with the IL-6 receptor [5,6]. IL-27R is expressed in various cells, such as T cells, macrophages, dendritic cells, and endothelial cells [2,7]. Both IL-27 and IL-27R gene expression was demonstrated to be upregulated in atherosclerotic plaques [8]. In human endarterectomy specimens from the carotid artery, IL-27 expression was shown in vascular smooth muscle cells (SMCs), endothelial cells, and macrophages [9].

Regarding blood IL-27 levels and atherosclerotic diseases, several studies have reported high blood IL-27 levels in patients with acute coronary syndrome (ACS), defined as acute myocardial infarction (MI) or unstable angina pectoris (UAP) [15,18,19]. Recently, Grufman et al. [20] evaluated plasma IL-27 levels and prognosis in ACS patients and showed that high IL-27 levels were related to recurrent MI and cardiovascular death. However, the association of blood IL-27 levels with cardiovascular events in patients undergoing elective coronary angiography or in patients with stable CHD has not been fully elucidated. Notably, we previously measured plasma IL-27 levels in 147 patients with stable CHD and 97 without it and reported that no significant difference in IL-27 levels was observed between the two groups [21]. To elucidate the prognostic value of plasma IL-27 levels in patients undergoing elective coronary angiography for suspected CHD, the present study extended our previous study [21] by increasing the number of patients and by following them up for cardiovascular events.
Patient Population

In July 2008, we began to prospectively collect blood samples as well as clinical and angiographic data from patients who underwent coronary angiography for suspected CHD at NHO Tokyo Medical Center in Japan. Patients who had any history of percutaneous coronary intervention (PCI) or coronary artery bypass grafting (CABG), or patients who were on hemodialysis, were not asked to participate in our study. The institutional ethics committee approved our study (Approval Numbers: R08-050 and R21-037). After obtaining written informed consent according to the Declaration of Helsinki, blood sampling was performed in an overnight fasting state on the morning of the day of angiography. Patients who were admitted for ACS, defined as acute MI or class III UAP by Braunwald's classification [22], and those who had any history of heart failure (HF), were excluded from the present study. Since serum IL-27 levels have been documented to be increased in patients with breast or lung cancers [23,24], patients who had any cancer were also excluded. In the present study, we assessed plasma IL-27 levels in 402 consecutive patients who underwent elective coronary angiography for suspected CHD and were then followed for a mean period of 7.6 ± 4.5 years for cardiovascular events. We defined hypertension as blood pressure ≥140/90 mmHg and/or drug prescriptions; 243 patients (60%) were taking anti-hypertensive medication. We also defined hypercholesterolemia as an LDL cholesterol level >140 mg/dL and/or drug prescriptions, and 146 patients (36%) were taking a statin. Diabetes mellitus (DM) was defined as a fasting plasma glucose level ≥126 mg/dL and/or drug prescriptions or insulin treatment, and 100 patients (25%) were found to have DM. We defined smoking as a history of 10 or more pack-years of smoking, and 171 patients (43%) had such a smoking history.

Measurements of IL-27 and C-Reactive Protein Levels in Plasma

Blood samples were collected into tubes with EDTA and centrifuged at 2000 × g for 15 min at 4 °C. Plasma was frozen and stored at −80 °C until use. For the measurement of IL-27 levels, an enzyme-linked immunosorbent assay (ELISA) (LEGEND MAX™ Human IL-27 ELISA Kit; BioLegend, San Diego, CA, USA) was used. As previously reported [21], we assessed IL-27 levels at Ochanomizu University according to the manufacturer's instructions. According to data from the manufacturer, the lowest detection limit of this kit was 0.01 ng/mL, and the intra- and inter-assay coefficients of variation were <6.0% and <5.5%, respectively. To measure high-sensitivity C-reactive protein (CRP) levels, a BNII nephelometer (Siemens Healthineers, Tokyo, Japan) was used.
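For clarity on the assay-precision figures just quoted, the sketch below shows how intra- and inter-assay coefficients of variation are conventionally computed (CV% = 100 × SD / mean). The replicate readings are hypothetical, not data from the kit insert or this study.

```python
import numpy as np

# Coefficient-of-variation sketch for ELISA precision.
# All replicate readings below are hypothetical placeholders.

def cv_percent(values):
    """Percent coefficient of variation of replicate measurements."""
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

# intra-assay: replicates of one sample within a single plate
intra = [0.231, 0.244, 0.238, 0.229]   # ng/mL, hypothetical
# inter-assay: the same control measured on different days/plates
inter = [0.240, 0.228, 0.251, 0.235]   # ng/mL, hypothetical

print(f"intra-assay CV: {cv_percent(intra):.1f}%")
print(f"inter-assay CV: {cv_percent(inter):.1f}%")
```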
Coronary Angiography at Baseline and Clinical Follow-Up

We performed angiography using a Philips Electronics angiography system (Tokyo, Japan). CHD was defined as ≥1 coronary artery having >50% stenosis, and the severity of CHD was evaluated as the number of vessels with >50% stenosis. The stenosis severity in each segment was assessed visually according to the CASS classification and classified into 5 grades (<25%, 26-50%, 51-75%, 76-90%, and >90% stenosis). All angiograms were evaluated by a single cardiologist, who was blinded to the clinical and laboratory data. Left ventricular (LV) systolic function was evaluated as the LV ejection fraction (LVEF) measured by echocardiography. For a mean period of 7.6 ± 4.5 years, all our patients were followed for cardiovascular events. As in our previous report [25], we defined cardiovascular events as cardiovascular death, MI, hospitalization for UAP or stroke, or the need for coronary revascularization, such as PCI and/or CABG. However, if PCI or CABG was scheduled and performed as a result of the baseline coronary angiography, it was judged not to be an event. The patients' outcomes were assessed by a review of their medical records.

Statistical Analysis

We conducted all statistical analyses using IBM SPSS version 29 software and defined statistical significance as a p-value < 0.05. Parametric and categorical parameters are presented as the mean ± SD and the number (%), respectively. As the measured CRP and IL-27 levels were judged not to be normally distributed by the Shapiro-Wilk test, their results are presented as the median value and interquartile range. For parametric, nonparametric, and categorical parameters, the unpaired t-test, Mann-Whitney U test, and chi-square test were used, respectively, to assess differences between two groups. The optimal cutoff point of IL-27 for cardiovascular events was found to be 0.25 ng/mL, the point at which the Youden index (sensitivity + specificity − 1) is maximal [26]. The event-free survival rates of patients with an IL-27 level of >0.25 ng/mL and those with IL-27 ≤0.25 ng/mL were compared using the Kaplan-Meier method with a log-rank test. As for the cutoff point of CRP, the previously established cutoff of 1.0 mg/L was used [27,28]. A multivariate Cox proportional hazards regression analysis was performed to find the independent predictors of cardiovascular events.

Results

Of the 402 patients, CHD (>50% stenosis) was observed in 209 (52%) patients, of whom PCI and CABG were performed in 111 and 39 patients, respectively, as a result of the baseline angiography. In comparison to the 193 patients without CHD, the 209 with CHD were significantly older, had a male predominance, and more frequently had hypertension, hypercholesterolemia, DM, and lower HDL cholesterol levels. Furthermore, plasma CRP levels were significantly higher in patients with CHD than in patients without CHD (median 0.80 vs. 0.51 mg/L, p < 0.005) (Table 1). There was no significant difference in plasma IL-27 levels between patients with CHD and patients without CHD (median 0.23 vs. 0.23 ng/mL, p = NS). The data represent the mean value ± SD or the number (%), except for CRP and IL-27 levels, which represent the median value and interquartile range.
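As an aside on the Youden-index cutoff selection described in the Statistical Analysis section, the sketch below implements that procedure; the IL-27 values and event labels are synthetic placeholders with the same group sizes as the cohort, not the study data, on which this procedure yielded the reported 0.25 ng/mL.

```python
import numpy as np

# Youden-index cutoff sketch: choose the threshold maximizing
# sensitivity + specificity - 1, as described in the methods above.

def youden_cutoff(values, events):
    """Return (cutoff, Youden index) maximizing sens + spec - 1."""
    values = np.asarray(values, dtype=float)
    events = np.asarray(events, dtype=bool)
    best_cut, best_j = None, -np.inf
    for cut in np.unique(values):
        pred = values > cut
        sens = (pred & events).sum() / events.sum()
        spec = (~pred & ~events).sum() / (~events).sum()
        j = sens + spec - 1
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j

# synthetic stand-in: 332 patients without events, 70 with events,
# with IL-27 shifted upward in the event group
rng = np.random.default_rng(1)
il27 = np.concatenate([rng.normal(0.22, 0.08, 332),
                       rng.normal(0.30, 0.10, 70)])
events = np.r_[np.zeros(332, bool), np.ones(70, bool)]

cut, j = youden_cutoff(il27, events)
print(f"optimal cutoff ~= {cut:.3f} ng/mL (Youden index {j:.2f})")
```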
During the mean follow-up of 7.6 ± 4.5 years, cardiovascular events were observed in 70 (17%) patients (cardiovascular death, n = 20; MI, n = 5; UAP, n = 8; stroke, n = 12; coronary revascularization, n = 25). In comparison to the 332 patients with no event, the 70 patients with cardiovascular events had higher LDL cholesterol and lower HDL cholesterol levels (p < 0.05). Moreover, patients with cardiovascular events had a higher prevalence of CHD (80% vs. 46%) and a greater number of >50% stenotic coronary vessels (1.7 ± 1.1 vs. 0.8 ± 1.0) (p < 0.001) (Table 2). CRP levels were higher in patients with events than in those with no events (0.85 vs. 0.60 mg/L), but this difference did not reach statistical significance. Of note, patients with events had significantly higher plasma IL-27 levels (0.29 vs. 0.22 ng/mL) and more often had an IL-27 level of >0.25 ng/mL (59% vs. 40%) than those with no event (p < 0.01). As a result, the sensitivity and specificity of an IL-27 level of >0.25 ng/mL for predicting cardiovascular events were 59% and 60%, and the positive and negative predictive values were 24% and 87%, respectively. The data represent the mean value ± SD or the number (%), except for CRP and IL-27 levels, which represent the median value and interquartile range.

Figure 2. Event-free survival from cardiovascular events. The 402 patients were divided into tertiles according to IL-27 levels: lower (<0.18 ng/mL), middle (0.19-0.30 ng/mL), and higher (>0.30 ng/mL) tertiles. A Kaplan-Meier analysis showed lower event-free survival in patients in the higher tertile compared with those in the lower tertile (p < 0.05).

Discussion

We studied the prognostic value of plasma IL-27 levels in 402 patients who underwent elective coronary angiography for suspected CHD. Plasma IL-27 levels were not markedly different between patients with stable CHD and those without it. Of note, IL-27 levels were significantly higher in patients who developed cardiovascular events than in those with no event. High plasma IL-27 levels were related to an increased risk of further cardiovascular events, independent of CHD, CRP levels, and atherosclerotic risk factors.
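As a cross-check of the diagnostic-accuracy figures quoted in the Results above, the sketch below recomputes them from a 2 × 2 table. The cell counts are back-calculated from the stated percentages (59% of 70 events and 40% of 332 non-events had IL-27 > 0.25 ng/mL), so they are approximate reconstructions rather than reported numbers.

```python
# Diagnostic-accuracy sketch for the IL-27 > 0.25 ng/mL threshold.
# Counts are back-calculated from the reported percentages and group
# sizes (70 events, 332 non-events), hence approximate.

tp, fn = 41, 29     # events with / without IL-27 > 0.25 ng/mL
fp, tn = 133, 199   # non-events with / without IL-27 > 0.25 ng/mL

sens = tp / (tp + fn)   # sensitivity
spec = tn / (tn + fp)   # specificity
ppv  = tp / (tp + fp)   # positive predictive value
npv  = tn / (tn + fn)   # negative predictive value

print(f"sensitivity {sens:.0%}, specificity {spec:.0%}, "
      f"PPV {ppv:.0%}, NPV {npv:.0%}")   # ~59%, 60%, 24%, 87%
```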
IL-27 is suggested to have dual effects in regulating the immune response, with both pro-inflammatory and anti-inflammatory properties [5,10]. However, whether IL-27 promotes or suppresses inflammation may vary across different diseases. IL-27 promotes inflammation in diseases such as crescentic glomerulonephritis, colitis, and systemic sclerosis, but suppresses inflammation in diseases such as autoimmune arthritis, allergic asthma, and autoimmune encephalomyelitis [15]. Regarding the role of IL-27 in atherosclerosis, IL-27 enhanced the upregulation of adhesion molecules and pro-inflammatory cytokines in cultured endothelial cells [16]. IL-27 also promoted Th1 differentiation and upregulated ICAM-1 on CD4+ T cells [17]. These findings indicate that IL-27 may play a primarily promotive role in atherosclerosis and inflammation. In contrast, in animal models of atherosclerosis, administration of recombinant IL-27 inhibited the progression of atherosclerosis in ApoE-deficient mice [29]. Ldlr−/− mice transplanted with IL-27 receptor−/− bone marrow showed larger atherosclerotic lesions [30]. IL-27-deficient mice also developed increased atherosclerosis with enhanced macrophage activation [7]. However, mice with genetic ablation of IL-27R were reported to be protected against the development of aortic aneurysms [31]. Therefore, the effect of IL-27 on the process of atherosclerosis remains a matter of debate.

As for blood IL-27 levels and atherosclerosis, plasma IL-27 levels were reported to be higher in 140 patients who had carotid artery stenosis compared with 19 healthy controls [8]. Ye et al. [32] also measured plasma IL-27 levels in 430 hypertensive patients and reported IL-27 levels to be associated with carotid atherosclerotic plaques. Several studies reported blood IL-27 levels to be high in patients with ACS, defined as acute MI or UAP [15,18,19]. Moreover, Si et al. [33] measured serum IL-27 levels in 81 patients with Kawasaki disease and 90 healthy controls and showed IL-27 levels to be higher in patients with Kawasaki disease, especially those with coronary arterial lesions, than in controls. Regarding blood IL-27 levels and CHD, Jin et al. [15] measured plasma IL-27 levels in 30 patients with stable CHD and 27 without CAD and showed IL-27 levels to be higher in patients with CHD. In contrast, Lin et al. [18] reported no significant difference in plasma IL-27 levels between 43 patients with stable CHD and 47 without it. Although the present study extended our previous report [21] by increasing the number of patients (from 244 up to 402), we found that plasma IL-27 levels were not markedly different between the 209 patients with stable CHD and the 193 without it. Of note, plasma IL-27 levels were significantly higher in the 70 patients who developed cardiovascular events than in the 332 patients with no events. High IL-27 levels were related to an increased risk of cardiovascular events, independent of the presence of CHD.

Regarding blood IL-27 levels and clinical outcome, Eric et al. [34] studied the association of serum IL-27 levels with in-hospital mortality in 151 critically ill patients with peritonitis, pancreatitis, or trauma who were admitted to intensive care units, and they reported that IL-27 levels on admission were significantly higher in patients who died during hospitalization than in those who survived. Xu et al.
[35] assessed serum IL-27 levels in 239 patients with community-acquired pneumonia and showed that higher IL-27 levels were related to an increased risk of vasoactive agent usage and a longer hospital stay. Recently, Grufman et al. [20] assessed plasma IL-27 levels in 524 patients with ACS and followed them up during the median follow-up of 2.2 years. The incidence of the combined end-point of MI and cardiovascular death was significantly higher in patients with IL-27 within the top two tertiles than in those with the lowest tertile, suggesting an association between high IL-27 levels and a worse prognosis in patients with ACS. Our present study, for the first time, reported that plasma IL-27 levels were significantly higher in patients with cardiovascular events than in those without such events among 402 patients undergoing elective coronary angiography for suspected CHD. High IL-27 levels were related to an increased risk of cardiovascular events independent of CHD, but CRP levels were not independent predictors of cardiovascular events. Our results indicate that high plasma IL-27 levels can be a biomarker for further cardiovascular events in patients who underwent elective coronary angiography.

Our study was associated with some limitations. First, our study population was relatively small (402 patients), and the number of patients who had cardiovascular events was especially small (70 patients). To clarify the prognostic value of IL-27 levels, further studies in a larger number of study patients will be needed. Second, we performed coronary angiography to evaluate the presence and severity of CHD. Angiography is unable to look at coronary artery plaques but only shows the lumen characteristics of the artery. Moreover, the severity of stenosis was not assessed by quantitative angiography or coronary fractional flow reserve; it was assessed only by the visual assessment of a single cardiologist, as in our previous study [21]. These may have affected our results. Third, our study population consisted of Japanese patients who underwent coronary angiography. Such patients were generally recognized to be a highly select population at high risk for CHD. Therefore, our results may not be applicable to general or other ethnic populations. Finally, we assessed plasma IL-27 levels only at baseline angiography and did not evaluate any changes in IL-27 levels during the period of follow-up, which may have affected outcomes. Furthermore, we did not assess any changes in medication for the treatment of CHD, which may have confounded our results.

Conclusions The present study investigated the prognostic value of plasma IL-27 levels in patients who underwent elective coronary angiography for suspected CHD. Plasma IL-27 levels were not markedly different between patients with stable CHD and those without it. However, IL-27 levels were higher in patients with cardiovascular events than in those with no events. High IL-27 levels were found to be related to an increased risk of cardiovascular events, independent of CHD, CRP levels, and atherosclerotic risk factors. Our results indicate that high IL-27 levels in the blood can be a biomarker of further cardiovascular events in patients undergoing elective coronary angiography.

Figure 1. Event-free survival from cardiovascular events in 402 study patients. Kaplan-Meier analysis showed lower event-free survival in patients with an IL-27 level >0.25 ng/mL than in those with an IL-27 level ≤0.25 ng/mL (p < 0.02).
Table 1. Clinical data and IL-27 levels in patients with CHD and those without CHD.

Table 2. Clinical data and IL-27 levels in patients with cardiovascular events and those with no event.

Table 3. Independent factors for cardiovascular events in 402 patients.
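The tertile-based survival analysis of Figure 2 can be sketched in a few lines of Python with the lifelines package. The sketch below is hedged: the column names and input file are hypothetical placeholders, since the patient-level data behind the figure are not available.

```python
# Minimal sketch of the tertile Kaplan-Meier comparison described above,
# assuming a hypothetical patient-level table with columns il27 (ng/mL),
# years (follow-up time), and event (1 = cardiovascular event, 0 = censored).
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("il27_cohort.csv")  # hypothetical file, not the study data
df["tertile"] = pd.qcut(df["il27"], q=3, labels=["lower", "middle", "higher"])

kmf = KaplanMeierFitter()
for name, grp in df.groupby("tertile"):
    kmf.fit(grp["years"], event_observed=grp["event"], label=str(name))
    kmf.plot_survival_function()  # event-free survival curve per tertile

lo = df[df["tertile"] == "lower"]
hi = df[df["tertile"] == "higher"]
res = logrank_test(lo["years"], hi["years"], lo["event"], hi["event"])
print(res.p_value)  # cf. the reported p < 0.05 for lower vs. higher tertile
```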
2024-05-04T15:34:35.090Z
2024-04-30T00:00:00.000
{ "year": 2024, "sha1": "af93d03fe4ff93d42eb86e6a502ac99c6bb995bb", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2308-3425/11/5/139/pdf?version=1714474276", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c759c1385c0295cdbdf1d439d75ff660c44e5e86", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
212842454
pes2o/s2orc
v3-fos-license
The Effects of Low-Dose and High-Dose Decoctions of Fructus aurantii in a Rat Model of Functional Dyspepsia Background Fructus aurantii is a flavonoid derived from Citrus aurantium (bitter orange) that is used in traditional Chinese medicine (TCM) to treat gastric motility disorders. This study aimed to investigate the effects of low-dose and high-dose decoctions of Fructus aurantii in a rat model of functional dyspepsia (FD). Material/Methods Sprague-Dawley rats (n=90) were divided into nine study groups: the control group, the FD model group, the domperidone-treated (Domp) group, the low-dose raw Fructus aurantii (FA-L) group, the high-dose raw Fructus aurantii (FA-H) group, the low-dose Fructus aurantii with stir-fried wheat bran (Bran-L) group, the high-dose Fructus aurantii with stir-fried wheat bran (Bran-H) group, the low-dose Fructus aurantii with stir-fried wheat bran and honey (Honey-L) group, and the high-dose Fructus aurantii with stir-fried wheat bran and honey (Honey-H) group. The FD rat model was established by semi-starvation, followed by tail damping, stimulation, and forced exercise with fatigue. Changes in weight, the rates of gastric emptying and intestinal propulsion, and serum levels of leptin, motilin, vasoactive intestinal peptide (VIP), gastrin, calcitonin gene-related peptide (CGRP), ghrelin, and cholecystokinin were compared between the groups. Results In the FD model group, weight and the rates of gastric emptying and intestinal propulsion significantly decreased, the expression of leptin, VIP, and CGRP increased, and the expression of motilin, gastrin, ghrelin, and cholecystokinin significantly decreased. Treatment with low-dose Fructus aurantii with stir-fried wheat bran significantly reversed these effects. Conclusions In the rat model of FD, low-dose Fructus aurantii with stir-fried wheat bran increased gastrointestinal motility and gastrointestinal hormone levels.

Background Functional dyspepsia (FD) is the term used to describe a syndrome of impaired motility of the upper gastrointestinal tract that affects the quality of life and health of patients [1]. The clinical symptoms of FD include chronic upper abdominal pain and discomfort without organic disease [2]. The pathogenesis and etiology of FD are complex and include impaired functional motility of the stomach and duodenum with associated psychological effects and effects on the quality of life [3]. The pathogenesis of FD remains poorly understood, and the treatment is mainly symptomatic [4,5]. Fructus aurantii is a flavonoid derived from Citrus aurantium (bitter orange) that is used in traditional Chinese medicine (TCM) to treat gastric motility disorders [6]. The taste of Fructus aurantii is bitter, and the compound has pharmacological effects that relieve abdominal distention, according to TCM theory [7]. Also, Fructus aurantii is a prokinetic herb that relieves indigestion and gastrointestinal dysfunction, as well as chest pain [8]. The efficacy of herbal TCM is significantly associated with the chemical components, and different production methods may affect the content of the effective components of the processed products or decoctions used [9,10]. However, there have been few studies on the effects of Fructus aurantii on FD and the mechanisms involved. A previously published study showed that the Weichang'an (WCA) tablet, which is used to treat FD, contains 12 active components including naringin, hesperidin, and neohesperidin derived from Fructus aurantii [11].
A further study showed that meranzin hydrate, a compound isolated from Fructus aurantii, increased gastric emptying and intestinal transit in patients with FD [12]. Therefore, it is possible that some active components from Fructus aurantii, such as naringin, hesperidin, neohesperidin, and meranzin hydrate, may have roles in the effects of Fructus aurantii in experimental models of FD. Therefore, this study aimed to investigate the effects of low-dose and high-dose processed products, or decoctions, of Fructus aurantii in a rat model of FD. The rat model of FD was established by semi-starvation followed by tail damping, stimulation, and forced exercise with fatigue, as previously described [13].

Material and Methods Preparation of the processed products, or decoctions, of Fructus aurantii and the study groups Fructus aurantii, the dried immature fruit of Citrus aurantium, was obtained from Tianqitang Pharmacy (Jiangxi, China). Bran from dried wheat, Triticum aestivum, was obtained from Shanghai Xiangxu Agricultural Products Trading Co. Ltd. (Shanghai, China). Ninety specific pathogen-free (SPF), 7-week-old Sprague-Dawley rats (45 male and 45 female), weighing 180-220 g, were obtained from the Guangdong Medical Laboratory Animal Center (Guangdong, China). The rats were housed in a room with a temperature of 21±2°C, relative humidity of 30-70%, and a 12-hour light/dark cycle, and were fed with water and normal food. All animal studies were approved by the Animal Ethics Committee of Jiangxi University of Traditional Chinese Medicine. The nine study groups included: the control group; the functional dyspepsia (FD) model group; the domperidone-treated (Domp) group; the low-dose raw Fructus aurantii (FA-L) group; the high-dose raw Fructus aurantii (FA-H) group; the low-dose Fructus aurantii with stir-fried wheat bran (Bran-L) group; the high-dose Fructus aurantii with stir-fried wheat bran (Bran-H) group; the low-dose Fructus aurantii with stir-fried wheat bran and honey (Honey-L) group; and the high-dose Fructus aurantii with stir-fried wheat bran and honey (Honey-H) group. Fructus aurantii was stir-fried with wheat bran until the mixture became pale yellow. The wheat bran was then sieved out, and the Fructus aurantii was cooled. The Fructus aurantii to wheat bran ratio was 10:1. Fructus aurantii was also stir-fried with honey and bran, with a wheat bran to honey ratio of 10:3. After stir-frying on medium heat until the Fructus aurantii became yellow, the wheat bran was sieved out and the Fructus aurantii was cooled. Concoction solutions were obtained by extraction in water. For the high-dose concoction, the product was extracted in ten times the amount of water for 30 min and then strained with gauze. The filtrates were combined, and the final concentration obtained was 1 g/mL, which was stored at -4°C. The low-dose concoction was diluted to a concentration of 0.1 g/mL. The domperidone aqueous solution was obtained by grinding domperidone tablets (10.0 mg) (certification No: H10910003; Xian Janssen Pharmaceutical Ltd, Beijing, China), which were dissolved in water at a concentration of 0.2 g/L. Carboxymethyl cellulose sodium (20 g) (Sigma-Aldrich, St. Louis, MO, USA) was diluted with 500 mL of double-distilled water and heated to dissolve the mixture. Then, 16 g of dried skimmed milk powder, 8 g of starch, 8 g of sucrose, and 2 g of activated charcoal were added and mixed into a paste and stored at -4°C.

Development of the rat model of functional dyspepsia (FD) Ten rats were normally fed in the control group.
The remaining 80 rats were randomly divided into eight groups of ten: the FD model group, the Domp group, the FA-L group, the FA-H group, the Bran-L group, the Bran-H group, the Honey-L group, and the Honey-H group. The rat model of FD was established by semi-starvation, followed by tail damping, stimulation, and forced exercise with fatigue, as previously described [13]. Briefly, the FD rat model was developed by semi-starvation of the rats together with tail damping, provocation, and forced exercise to fatigue four times a day for ten days. Tail damping was performed with the head of hemostatic forceps wrapped in gauze: the distal one-third of the rat tail was clamped by the forceps without damaging the skin. The hemostatic clamp was released when the rat struggled to escape, and this was performed twice per day for 30 min each time with a 12 h interval. This stimulation was performed for 14 days with feeding on alternate days. Any skin injuries to the rats were swabbed with iodine to prevent infection. All rats were fed a normal diet after the model was developed.

Dosing of the Fructus aurantii decoctions and dosing of domperidone The dosing of the rats was calculated according to the drug doses in humans and the conversion between humans and experimental animals [14,15]. The routine dosage of Fructus aurantii for rats was 1.0 g/kg in the low-dose groups and 10.0 g/kg in the high-dose groups. Also, the dose of domperidone was calculated in the same way. After the model was constructed, the rats were treated every morning from the 15th day onward. The rats in the control group and the model group were given normal saline (10.0 mL/kg) by gavage. The rats in the Domp group were given domperidone (10.0 mL/kg) by gavage. The rats in the FA-L group, the Bran-L group, and the Honey-L group were given low doses of raw Fructus aurantii (1.0 g/kg), Fructus aurantii stir-fried with wheat bran (1.0 g/kg), and Fructus aurantii stir-fried with honey and bran (1.0 g/kg) by gavage. The rats in the FA-H group, the Bran-H group, and the Honey-H group were given high doses of raw Fructus aurantii (10.0 g/kg), Fructus aurantii stir-fried with wheat bran (10.0 g/kg), and Fructus aurantii stir-fried with honey and bran (10.0 g/kg) by gavage. The rats in each study group were continuously monitored for 14 days. Before modeling (week 0), two weeks after modeling (week 2), and after treatment (week 4), the weight change of the rats in each group was measured and recorded using an electronic balance (R200D; Sartorius, Göttingen, Germany).

Blood sampling from the abdominal aorta During the development of the rat model of FD, attention was given to the changes in the behavior, hair, diet, drinking water, and weight of the rats in each group. The endpoint evaluations at the end of the study included the appearance, the hair, activity, response to external stimuli, vigilance, resistance to handling, weight, and the nature of the stool. On the 28th day of the study, the animals fasted for 24 h after the last treatment. On the 29th day of the study, all rats were fed with nutritious semisolid paste (2 mL) by gavage. After 30 min, the rats were anesthetized with 10% chloral hydrate (350 mg/kg) by intraperitoneal injection. Blood samples (3 mL) were taken from the abdominal aorta, and the serum was separated and then frozen at -80°C. The anesthetized rats were sacrificed by cervical dislocation.
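The dosing section above cites conversion references [14,15] without giving the formula. The sketch below shows two checks implied by the text: a human-to-rat dose scaling using FDA body-surface-area Km factors, which is one common method and an assumption here (the authors do not state which conversion they used), and the gavage arithmetic using the paper's own concentrations and volumes.

```python
# (1) Human-to-rat dose scaling by body surface area (assumed method):
#     animal dose (mg/kg) = human dose (mg/kg) x Km_human / Km_animal
KM_HUMAN, KM_RAT = 37, 6  # standard FDA Km factors (adult human, rat)

def human_to_rat_dose(human_dose_mg_per_kg: float) -> float:
    """Scale a human mg/kg dose to a rat mg/kg dose (factor of ~6.2)."""
    return human_dose_mg_per_kg * KM_HUMAN / KM_RAT

# (2) Gavage arithmetic from the paper's own numbers:
#     dose (g/kg) = decoction concentration (g/mL) x gavage volume (mL/kg)
gavage_volume_ml_per_kg = 10.0
low_conc, high_conc = 0.1, 1.0  # g/mL, from the decoction preparation
print(low_conc * gavage_volume_ml_per_kg)   # 1.0 g/kg  -> matches low dose
print(high_conc * gavage_volume_ml_per_kg)  # 10.0 g/kg -> matches high dose
```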
Investigation of gastric emptying and intestinal propulsion After the rats were euthanized, the abdominal cavity was immediately opened, and if no intra-abdominal abnormalities were seen, the gastric cardia and pyloric orifice were ligated, and the entire stomach and small intestine were removed. The mucosal surfaces of the stomach and small intestine were observed for changes, including flushing, erosion, and ulcers. After the stomach was dried with filter paper, the entire stomach was weighed using an electronic balance (R200D; Sartorius, Göttingen, Germany). Then, the stomach was immersed in a 0.9% saline solution to clean out the gastric contents. Filter paper was used to dry the stomach, which was weighed again. The gastric remnant rate was calculated as follows: gastric remnant rate = (full weight of the stomach − net weight of the stomach) / mass of the semisolid paste × 100%. The whole small intestine, from the pylorus to the ileocecal junction, and the distance of movement of the ingested graphite powder from the pylorus were measured to calculate the propulsive intestinal rate as follows: propulsive intestinal rate = (distance of graphite powder movement / length of the whole small intestine) × 100%. One senior investigator was responsible for the study and for conducting the experiments.

Histology The stomach and duodenal tissue from the rats were fixed in a 4% paraformaldehyde solution for 48 h. The tissue was dehydrated and paraffin wax-embedded. Tissue sections were cut at 4 μm and routinely stained with hematoxylin and eosin (H&E) (Beyotime Biotechnology, Shanghai, China) for light microscopy, using a BH-2 light microscope (Olympus, Tokyo, Japan).

Enzyme-linked immunosorbent assay (ELISA) Serum levels of leptin, motilin, vasoactive intestinal peptide (VIP), gastrin, and calcitonin gene-related peptide (CGRP) were measured by ELISA using MSK ELISA kits (LifeSpan BioSciences, Wuhan, China), according to the manufacturer's instructions. The samples were incubated in 96-well plates at 37°C. The washing solution was added into each well and incubated for 30 s, then discarded. Washing was repeated five times. The enzyme standard reagent was added into each well, except for the blank well, and incubated at 37°C for 30 min. After washing, the chromogenic reagent was added into each well, in the dark, for 15 min at 37°C, and the reaction was terminated. The optical density (OD) of each sample was detected using a microplate 680 reader (Bio-Rad, Hercules, CA, USA) at a wavelength of 450 nm.

Immunohistochemistry The gastric antral tissue and duodenal tissue sections were incubated in 0.01 mol/L citric acid buffer solution for 5 min. The tissue sections were blocked in 5% normal goat serum (Origene, Beijing, China) for 30 min. The tissue sections were incubated in the primary rabbit antibody to ghrelin (1:2000) (ab209790; Abcam, Cambridge, MA, USA) at 4°C overnight. After washing, the sections were incubated for 1 h at 37°C with the secondary goat anti-rabbit IgG (1:2000) (ab150077; Abcam, Cambridge, MA, USA). The 3,3-diaminobenzidine (DAB) detection kit (Beyotime Biotechnology, China) was used. Histology was performed using a light microscope (Olympus, Tokyo, Japan). The images were analyzed using ImageJ software (National Institutes of Health, Bethesda, MD, USA).
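The two rate formulas defined above translate directly into code. The sketch below implements them; the numeric inputs are example values for illustration, not measured data from the study.

```python
# Minimal sketch of the two outcome measures defined in the Methods above.

def gastric_remnant_rate(full_stomach_g: float, empty_stomach_g: float,
                         paste_mass_g: float) -> float:
    """(full weight - net weight) / mass of semisolid paste x 100%."""
    return (full_stomach_g - empty_stomach_g) / paste_mass_g * 100.0

def intestinal_propulsion_rate(graphite_distance_cm: float,
                               intestine_length_cm: float) -> float:
    """Distance moved by graphite powder / whole small intestine length x 100%."""
    return graphite_distance_cm / intestine_length_cm * 100.0

# Example: stomach 4.1 g full, 2.9 g empty, 2.0 g of paste given by gavage
print(gastric_remnant_rate(4.1, 2.9, 2.0))      # 60.0 (% of paste retained)
print(intestinal_propulsion_rate(55.0, 100.0))  # 55.0 (% of intestine length)
```

A higher gastric remnant rate indicates delayed gastric emptying, and a lower propulsion rate indicates slower intestinal transit, which is how the FD model is verified in the Results below.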
Western blot The gastric antral mucous tissue and duodenal tissues of the rats were cut into small fragments, placed in RIPA lysate buffer containing phenylmethyl sulfonyl fluoride (PMSF), and homogenized. A bicinchoninic acid (BCA) protein assay kit (Thermo Fisher Scientific, Waltham, MA, USA) was used to measure the protein concentration. Then, 4 μL of protein was separated using 10% sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to polyvinylidene fluoride (PVDF) membranes (Bio-Rad, Hercules, CA, USA). The PVDF membranes were incubated for 2 h in 10% dried skimmed milk powder. The PVDF membrane was incubated with the primary antibodies at 4°C overnight. The primary antibodies included a rabbit antibody to cholecystokinin (1:1000) (ab83180; Abcam, Cambridge, MA, USA) and a mouse antibody to GAPDH (1:1000) (ab8245; Abcam, Cambridge, MA, USA). Then, horseradish peroxidase (HRP)-conjugated secondary antibodies were incubated with the PVDF membrane at room temperature for 2 h; these included goat anti-rabbit IgG.

Statistical analysis Each experiment was performed in triplicate. Study data were presented as the mean±standard deviation (SD). Data were analyzed using SPSS version 21.0 software (IBM Corp., Armonk, NY, USA). Statistical comparisons between groups were determined by Student's t-test or one-way analysis of variance (ANOVA). A P-value <0.05 was considered to be statistically significant.

Results The weight of the rats from the different groups In this study, a rat model of FD was established by semi-starvation followed by tail damping, stimulation, and forced exercise with fatigue, as previously described [13]. The rats were divided into nine study groups: the control group, the FD model group, the domperidone-treated (Domp) group, the low-dose raw Fructus aurantii (FA-L) group, the high-dose raw Fructus aurantii (FA-H) group, the low-dose Fructus aurantii with stir-fried wheat bran (Bran-L) group, the high-dose Fructus aurantii with stir-fried wheat bran (Bran-H) group, the low-dose Fructus aurantii with stir-fried bran and honey (Honey-L) group, and the high-dose Fructus aurantii with stir-fried bran and honey (Honey-H) group. The findings showed that a low dose of Fructus aurantii stir-fried with wheat bran had the most significant therapeutic effect on the rat model of FD. The rats in all treatment groups were treated by gavage, and the rats in the model group and the control group were given normal saline by gavage. The rats in each group were weighed before and after the study. At 2 weeks, the weight of the rats in the model group and all treatment groups was significantly lower than in the control group (Table 1) (P<0.05). At 4 weeks, the weight of the rats in all the treatment groups was significantly greater than in the model group. There was no difference in weight between the control group and the Bran-L group (Table 1) (P>0.05). The weight of the rats in the model of FD was lower than that in the control group, while the weight of the rats in the Bran-L group recovered the most after the study.

Comparison of gastrointestinal function in rats from the different groups Compared with the control group, the gastric remnant rate in the model group was significantly increased, and the propulsive intestinal rate was significantly reduced (Table 2) (P<0.05). This finding indicated that the gastrointestinal motility of the rats was reduced, and the FD model was successfully established.
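The between-group comparisons described in the Statistical analysis section above (t-test or one-way ANOVA, P < 0.05) can be reproduced with SciPy in place of SPSS. In the sketch below, the group arrays are hypothetical placeholders standing in for one endpoint (e.g., the gastric remnant rate) measured in n=10 rats per group; they are not the study's data.

```python
# Hedged sketch of the statistical comparisons described in the Methods.
import numpy as np
from scipy import stats

control = np.array([42, 45, 40, 44, 43, 41, 46, 44, 42, 43], float)
model   = np.array([61, 64, 59, 63, 65, 60, 62, 66, 61, 63], float)
bran_l  = np.array([44, 47, 43, 46, 45, 42, 48, 45, 44, 46], float)

# One-way ANOVA across the three groups
f_stat, p_anova = stats.f_oneway(control, model, bran_l)
print(f"ANOVA: F={f_stat:.1f}, p={p_anova:.3g}")

# Pairwise follow-up, e.g., model group vs. a treated group
t_stat, p_t = stats.ttest_ind(model, bran_l)
print(f"model vs Bran-L: t={t_stat:.1f}, p={p_t:.3g}")
```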
Compared with the model group, the gastric remnant rate was significantly reduced and the intestinal propulsion rate was significantly increased in rats in the Domp group, the FA-L group, the FA-H group, the Bran-L group, the Bran-H group, and the Honey-L group (Table 2) (P<0.05). At the same dosage, the difference in the gastric remnant rate and the propulsive intestinal rate between the Bran groups and the control group was the smallest, indicating that the effect of Fructus aurantii stir-fried with wheat bran on the treatment of the rat model of FD was greater than that of raw Fructus aurantii and of Fructus aurantii stir-fried with honey and bran. Compared with the low-dose groups, high-dose treatment increased the gastric remnant rate and reduced the intestinal propulsion rate in the rat model of FD, which indicated that high-dose treatment reduced the therapeutic effect.

Histology of the stomach and duodenum in rats from the different groups The effects of treatment on the gastric and duodenal mucosa were studied histologically. The appearance and morphology of the gastric and duodenal mucosa of rats in all treatment groups were normal without visible ulcers, erosion, or hemorrhage. The histology showed that the gastric mucosa (Figure 1A) and duodenal mucosa (Figure 1B) of the rats in all the treatment groups were intact, with no infiltration of inflammatory cells, which indicated that different doses of Fructus aurantii products did not damage the gastric and intestinal mucosa of the rats studied.

Comparison of ghrelin expression in rats from the different groups Immunohistochemistry was performed to investigate the level and distribution of ghrelin expression in the gastric antral and duodenal mucosa of the rats, as previously described [16,17]. Ghrelin expression was detected in cells in both the gastric and duodenal tissues (Figures 3, 4). Also, compared with the control group, ghrelin expression in the model group was significantly reduced in the gastric and duodenal tissues. However, ghrelin expression in all treatment groups was increased compared with the model group (Figures 3, 4) (P<0.05). Immunohistochemistry showed that the level of ghrelin in the duodenum and gastric antral mucosa of the rat model of FD was increased by treatment. Also, ghrelin expression was not significantly different between the control group and the Bran-L group, which indicated that Bran-L had the greatest effect on the expression of ghrelin in the rat model of FD and could restore it to the normal level.

Comparison of cholecystokinin expression in rats from the different groups Western blot was used to detect the protein expression levels of cholecystokinin in the rat duodenum and gastric antrum mucosa. Compared with the control group, the cholecystokinin expression in the duodenum (Figure 5A, 5B) and the gastric antral mucosa (Figure 5C, 5D) in the model group was significantly reduced (P<0.05). Compared with the model group, cholecystokinin expression in all treatment groups was significantly increased (P<0.05). No significant difference in cholecystokinin expression was found between the Bran-L group and the control group (P>0.05). These results showed that the cholecystokinin protein expression in the rat model of FD was increased after the treatment with Fructus aurantii decoction.
Discussion The commonly used treatments for functional dyspepsia (FD) in clinical practice include drugs that inhibit the production of gastric acid, drugs that increase gastrointestinal motility, and treatments for Helicobacter pylori [18,19]. Domperidone is a peripheral dopamine receptor blocker that can improve gastrointestinal function and reduce the clinical symptoms of FD [20]. Fructus aurantii is derived from Citrus aurantium (bitter orange) and is used in traditional Chinese medicine (TCM) to treat gastric motility disorders [21]. The main effective bioactive constituents of Fructus aurantii are alkaloids, volatile oils, and flavonoids, which have anti-inflammatory and antioxidant effects in FD [21]. In this study, a rat model of FD was established by semi-starvation followed by tail damping, stimulation, and forced exercise with fatigue, as previously described [13]. The rats were divided into nine study groups: the control group, the FD model group, the domperidone-treated (Domp) group, the low-dose raw Fructus aurantii (FA-L) group, the high-dose raw Fructus aurantii (FA-H) group, the low-dose Fructus aurantii with stir-fried wheat bran (Bran-L) group, the high-dose Fructus aurantii with stir-fried wheat bran (Bran-H) group, the low-dose Fructus aurantii with stir-fried bran and honey (Honey-L) group, and the high-dose Fructus aurantii with stir-fried bran and honey (Honey-H) group. The findings showed that a low dose of Fructus aurantii stir-fried with wheat bran had the most significant therapeutic effect on the rat model of FD.

In this study, after the FD rat model was established, the weight of the rats was measured and recorded before and after the study, as weight loss is a clinical feature of FD [22]. The animal experiments showed that the weight of the rats in the model group was significantly reduced, which indicated that the rat model of FD was successfully developed. Following treatment, the weight of the rats in the rat model of FD increased in all treatment groups, and the weight recovery was greatest in the rats in the Bran-L group. Also, in this study, no ulcers, hemorrhage, or inflammation were observed in the gastric and duodenal mucosa of rats in each treatment group. The findings from this preliminary study indicated that Fructus aurantii processed products, or decoctions, were effective in treating FD, without gastrointestinal mucosal injury in rats, and that the Bran-L treatment showed the greatest benefit.

Gastric emptying is the process of emptying food from the stomach into the duodenum through the propulsive effects of the stomach and duodenum [23,24]. Most patients with FD have gastrointestinal motility disorder, which is associated with delayed gastric emptying [25]. There is a close association between slow intestinal motility and impaired gastric emptying [26]. In the present study, the investigation of the gastric remnant and small intestine propulsion in rats showed delayed gastric emptying and slow intestinal propulsion in the model group. Following treatment, the gastrointestinal motility in the rat model of FD was improved, and the effect of low-dose Fructus aurantii stir-fried with wheat bran on gastrointestinal motility was the best, which was consistent with the findings from domperidone treatment. Also, high-dose treatment reduced the therapeutic effects, which further supported the efficacy of low-dose Fructus aurantii stir-fried with wheat bran.
Gastrointestinal hormones, secreted by endocrine cells on the gastrointestinal mucosa, are closely associated with gastrointestinal motility disorders [27]. Leptin is a neuroendocrine factor secreted by the hypothalamus and gastric mucosal cells, which can inhibit gastric emptying and enhance satiety [28]. Vasoactive intestinal peptide (VIP) is widely distributed in neural tissue and the gastrointestinal tract, inhibits gastrointestinal motility, delays gastric emptying, and slows small intestine motility [29]. Calcitonin gene-related peptide (CGRP) is widely distributed in the gastrointestinal tract neural plexus. CGRP has a role in visceral hypersensitivity, inhibits gastric acid secretion, slows gastrointestinal movement, and regulates gastrointestinal hormone secretion [30]. Jiang et al. [31] showed that Fructus aurantii water decoction could reduce the expression of VIP. The present study showed that serum levels of leptin, VIP, and CGRP in the rat model group were upregulated. After 14 days of treatment, the serum levels of leptin, VIP, and CGRP in the rat model of FD in each treatment group were reduced. These findings showed that the different treatments based on Fructus aurantii reduced the serum levels of leptin, VIP, and CGRP, to regulate the visceral sensory function of the gastrointestinal tract and improve gastrointestinal movement. Also, low-dose Fructus aurantii stir-fried with wheat bran was the most efficacious decoction.

Motilin is a gastrointestinal peptide hormone secreted by cells in the small intestinal mucosa and can promote gastrointestinal motility and accelerate gastric emptying [32]. Gastrin promotes gastrointestinal motility and the secretion of gastric acid and pepsin [33]. Ghrelin is an appetite-stimulating factor secreted by gastric oxyntic cells, which can increase appetite, accelerate gastric emptying, and protect the gastrointestinal mucosa [34][35][36]. Cholecystokinin is widely distributed in gastrointestinal neurons and is involved in the regulation of gastrointestinal function [37]. Liang et al. [38] showed that cholecystokinin expression was down-regulated in the duodenum and antrum of the rat model of FD. The findings from the present study showed down-regulation of motilin, gastrin, ghrelin, and cholecystokinin in the model group, and upregulation of motilin, gastrin, ghrelin, and cholecystokinin in the treated groups. These results showed that the processed products of Fructus aurantii could promote gastrointestinal movement and improve gastric motility disorders by regulating the secretion of gastrointestinal hormones in the rat model of FD. Also, low-dose Fructus aurantii stir-fried with wheat bran had the greatest efficacy in the rat model.

This study had several limitations. Because lesions in the duodenal bulb are more common than in other segments of the duodenum, as in celiac disease in children [39], it is necessary to subdivide and investigate different segments of the duodenum, which was not done in this study. This study also included only ten rats in each group, and future animal studies on gastric emptying and intestinal motility should be undertaken with larger study groups. Also, the mechanisms of action of Fructus aurantii and its decoctions in FD remain unknown.

Conclusions This study aimed to investigate the effects of low-dose and high-dose decoctions of Fructus aurantii in a rat model of functional dyspepsia (FD).
The findings showed that low-dose Fructus aurantii with stir-fried wheat bran increased gastrointestinal motility, reduced the elevated serum levels of leptin, vasoactive intestinal peptide (VIP), and calcitonin gene-related peptide (CGRP), and increased the levels of motilin, gastrin, ghrelin, and cholecystokinin.
2020-02-20T09:18:27.302Z
2020-02-12T00:00:00.000
{ "year": 2020, "sha1": "a78b3f1bb3776c0b86c4c0accfa6eb06272b25f2", "oa_license": "CCBYNCND", "oa_url": "https://europepmc.org/articles/pmc7156881?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "3034f9eb729130a7e0cd5665156c6dbb6d1d039d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
24288232
pes2o/s2orc
v3-fos-license
ABCA1 contributes to macrophage deposition of extracellular cholesterol. We previously reported that cholesterol-enriched macrophages excrete cholesterol into the extracellular matrix. A monoclonal antibody that detects cholesterol microdomains labels the deposited extracellular particles. Macrophage deposition of extracellular cholesterol depends, in part, on ABCG1, and this cholesterol can be mobilized by HDL components of the reverse cholesterol transport process. The objective of the current study was to determine whether ABCA1 also contributes to macrophage deposition of extracellular cholesterol. ABCA1 functioned in extracellular cholesterol deposition. The liver X receptor agonist, TO901317 (TO9), an ABCA1-inducing factor, restored cholesterol deposition that was absent in cholesterol-enriched ABCG1−/− mouse macrophages. In addition, the ABCA1 inhibitor, probucol, blocked the increment in cholesterol deposited by TO9-treated wild-type macrophages, and completely inhibited deposition from TO9-treated ABCG1−/− macrophages. Lastly, ABCA1−/− macrophages deposited much less extracellular cholesterol than wild-type macrophages. These findings demonstrate a novel function of ABCA1 in contributing to macrophage export of cholesterol into the extracellular matrix.

In previous studies, we showed that cholesterol deposited by macrophages into the extracellular matrix can be mobilized to HDL for potential reverse cholesterol transport (15)(16)(17). In those studies, we employed a unique monoclonal antibody (MAb 58B1) that labels cholesterol microdomains formed when cholesterol reaches high concentrations within membranes. While the MAb labels cholesterol crystals and cholesterol monolayers, it does not label individual cholesterol molecules (18)(19)(20)(21)(22). Thus, the antibody recognizes a structural motif presented by an ordered array of cholesterol molecules. Such ordered arrays of cholesterol, in the form of cholesterol crystalline microdomains, have been demonstrated with small-angle X-ray diffraction in biological membranes, including the membranes of cells isolated from atherosclerotic plaques (23,24). With this anti-cholesterol microdomain antibody, we have shown that cultured cholesterol-enriched macrophages excrete spherical particles containing cholesterol microdomains into the extracellular matrix (17). Similar extracellular spherical particles labeled by the anti-cholesterol microdomain antibody are found in human atherosclerotic lesions (17). These extracellular particles showing cholesterol microdomains may function as an extracellular storage form of cholesterol. Macrophage deposition of cholesterol into the extracellular matrix could help maintain cholesterol homeostasis when macrophages accumulate excess cholesterol beyond that which can be stored in intracellular lipid droplets and cell membranes. We previously reported that ABCG1 contributes to macrophage generation of these extracellular particles showing cholesterol microdomains (15). In the current work, we show that ABCA1 also contributes to generation of these extracellular deposited cholesterol microdomains. The generation of cholesterol microdomains may facilitate ABCA1- and ABCG1-mediated transport of cholesterol to HDL.

Male ABCG1−/− mice on a C57BL/6J background were generated as described previously (25). Female ABCA1−/− mice were generated from DBA/1-Abca1tm1Jdm/J mice (#003897) obtained from Jackson Laboratory (Bar Harbor, ME). These mice were of mixed genetic background. The ABCA1 mutation was transferred to a C57BL/6N background by 10 consecutive crossings with C57BL/6N. Wild-type C57BL/6 control mice were substrain-, sex-, and age-matched to ABCG1−/− and ABCA1−/− mice. Animal studies were conducted in conformity with the Public Health Service Policy on Humane Care and Use of Laboratory Animals, and were approved by the National Heart, Lung, and Blood Institute Institutional Animal Care and Use Committee. For culture of bone marrow-derived macrophages, femurs and tibias were isolated from mice and muscle was removed. Both ends of the bones were cut with scissors and then flushed with 5 ml of RPMI-1640 with a 25-gauge needle. Bone marrow cells were centrifuged and resuspended at a concentration of 4-6 × 10⁶ cells/ml in 1 ml of freezing medium containing 90% FBS and 10% DMSO (26). Cells were stored in liquid nitrogen until use. On the day of use, cells were thawed and suspended in 30 ml RPMI-1640 medium containing 100 U/ml penicillin, 0.1 mg/ml streptomycin, and 2 mM L-glutamine before centrifugation to remove DMSO. Then, cells were resuspended at a concentration of 1 × 10⁵ cells/ml in RPMI-1640 medium containing 100 U/ml penicillin, 0.1 mg/ml streptomycin, 2 mM L-glutamine, 10% FBS, and 50 ng/ml macrophage colony-stimulating factor (complete medium). Cells were seeded in a 75 cm² culture flask and incubated in a 37°C cell culture incubator with 5% CO2/95% air. On day 3, cultures were rinsed three times with RPMI-1640 medium containing 100 U/ml penicillin, 0.1 mg/ml streptomycin, and 2 mM L-glutamine, and then cultured in fresh complete medium. Medium was changed every 2 days until sufficient macrophages had grown in the flask, which usually occurred by the seventh day. Next, experiments were initiated by harvesting macrophages at room temperature with 10 ml 0.25% trypsin-EDTA solution. After about 20-30 min, macrophages rounded, but mostly remained attached.
Trypsinization was stopped by addition of 10 ml RPMI-1640 containing 100 U/ml penicillin, 0.1 mg/ml streptomycin, 2 mM L-glutamine, and 10% FBS. A cell lifter was used to retrieve macrophages from the culture surface. The cell suspension was centrifuged at 300 g for 5 min and the resulting cell pellet was resuspended in 1 ml complete medium. Macrophages were counted with a hemocytometer. 0.6 × 10⁵ macrophages per milliliter were cultured in 12-well CellBIND culture plates containing 1.5 ml of complete medium per well. Macrophages were incubated overnight before experiments were initiated with complete medium and the indicated additions, but without FBS. Experimental incubations were carried out for 4 days with the medium and additions refreshed after 2 days.

Immunostaining of macrophages Fixation, immunostaining, and microscopy were all performed with macrophages in their original CellBIND culture plates, and all steps were carried out at room temperature. Macrophage cultures were rinsed three times (5 min each rinse, this and all subsequent times) in DPBS, fixed for 10 min with 4% paraformaldehyde in DPBS, and then rinsed an additional three times in DPBS. Macrophages were then incubated 60 min with 5 µg/ml purified mouse anti-cholesterol microdomain MAb 58B1 IgM diluted in DPBS containing 0.1% BSA. Control staining was performed with 5 µg/ml of an irrelevant purified mouse anti-Clavibacter michiganense MAb (clone 9A1) IgM diluted in DPBS containing 0.1% BSA. MAb IgM fractions were purified as previously described (16). Cultures were then rinsed three times in DPBS, followed by a 30 min incubation in 5 µg/ml biotinylated goat anti-mouse IgM diluted in DPBS containing 0.1% BSA. After three rinses in DPBS, cultures were incubated 10 min with 10 µg/ml streptavidin Alexa Fluor 488 diluted in DPBS. Cultures were then rinsed three times with DPBS and mounted in Vectashield hard-set mounting medium with DAPI nuclear stain in preparation for digital imaging using an Olympus IX81 fluorescence microscope. Because macrophages were not permeabilized, MAb 58B1 staining represents cell surface or extracellular staining. No staining was observed when the control MAb was substituted for the anti-cholesterol microdomain MAb.
Microscopic analysis Cells were identified using phase-contrast microscopy, or by locating DAPI-stained nuclei. The pattern and intensity of MAb 58B1 staining were then analyzed for cultures from each experimental parameter, and these data were compared with one another. We considered MAb 58B1 labeling cellular if it was located within cell membrane boundaries, as identified on the corresponding phase-contrast view. Labeling was considered extracellular if it was located outside the cell membrane boundaries seen on phase-contrast view. Different planes of focus were visualized before acquiring images to confirm that only a monolayer of cells was present, thereby ensuring that labeling seen outside cell membrane boundaries did not represent cellular labeling from cells lying in a different plane of focus. As we reported before (15), MAb 58B1 labeling of mouse macrophage cultures showed extracellular rather than plasma membrane staining. The immunostained cells shown in the figures are representative of a minimum of five microscopic fields viewed in one culture well.

Quantification and statistical analysis of MAb 58B1 immunofluorescence For each condition shown in the figures, including additional control images where macrophages were incubated without AcLDL, we quantified MAb 58B1 immunofluorescence in three separate digital images using ImageJ software (version 1.37) developed by the National Institutes of Health. Control image fluorescence values were subtracted from non-control image fluorescence values. Statistical analysis of the obtained fluorescence data was carried out with SigmaPlot for Windows (version 11.0). One-way ANOVA using the Holm-Sidak method was employed for comparisons of three groups (Figs. 1-3), and the unpaired t-test was used for comparison of two groups (Figs. 4-6). P ≤ 0.05 was considered significant.
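The background-subtraction step described in the quantification paragraph above is simple to replicate outside ImageJ. The sketch below does it in Python; the file names and the use of scikit-image are assumptions for illustration, since the authors worked with ImageJ (version 1.37) on three digital images per condition.

```python
# Hedged sketch of the background-subtracted fluorescence quantification
# described above, using scikit-image instead of ImageJ.
import numpy as np
from skimage import io

def mean_fluorescence(paths):
    """Mean green-channel intensity across replicate RGB images of one condition."""
    return np.mean([io.imread(p)[..., 1].mean() for p in paths])

condition_imgs = [f"acldl_to9_{i}.tif" for i in range(1, 4)]  # hypothetical names
control_imgs   = [f"control_{i}.tif" for i in range(1, 4)]    # no-AcLDL controls

# Control-image fluorescence is subtracted from the experimental value,
# mirroring the procedure in the text.
signal = mean_fluorescence(condition_imgs) - mean_fluorescence(control_imgs)
print(f"background-subtracted MAb 58B1 fluorescence: {signal:.1f} a.u.")
```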
RESULTS In an earlier study, we observed that cholesterol-enriched ABCG1−/− mouse bone marrow-derived macrophages, in contrast to cholesterol-enriched wild-type mouse bone marrow-derived macrophages, excreted very little cholesterol into the extracellular matrix (15). This suggested that ABCG1 mediated the extracellular cholesterol deposition process in the mouse. However, probucol inhibited cholesterol-enriched human monocyte-derived macrophage deposition of extracellular cholesterol (15). Because probucol inhibits ABCA1 (27,28), this suggested the possibility that besides ABCG1, ABCA1 may also contribute to macrophage deposition of extracellular cholesterol. Cholesterol-enriched wild-type mouse bone marrow-derived macrophages showed extracellular cholesterol deposition (AcLDL treatment in Fig. 1), and deposition was increased with TO9 (AcLDL + TO9 treatment in Fig. 1).

Fig. 1. Cholesterol-enriched wild-type macrophages deposit extracellular cholesterol without TO9, but deposition is increased with TO9. Wild-type mouse bone marrow-derived macrophages were incubated for 4 days with either TO9 (5 µM), AcLDL (50 µg/ml), or AcLDL + TO9 before cultures were immunostained with anti-cholesterol microdomain MAb 58B1 (green fluorescence) and DAPI nuclear stain (blue fluorescence). Upper and lower rows are, respectively, the fluorescence and phase photomicrographs. Scale bar = 50 µm and applies to all.

Cholesterol-enriched ABCG1−/− mouse macrophages (AcLDL treatment in Fig. 2) showed very little extracellular cholesterol deposition, as we reported previously (15). However, when stimulated with TO9 (AcLDL + TO9 treatment in Fig. 2), these macrophages then deposited cholesterol into the extracellular matrix (7.6-fold more MAb 58B1 immunofluorescence compared with macrophages incubated with AcLDL without TO9).

Fig. 2. ABCG1−/− mouse bone marrow-derived macrophages were incubated for 4 days with AcLDL (50 µg/ml) + TO9 (5 µM) before cultures were immunostained with anti-cholesterol microdomain MAb 58B1 (green fluorescence) and DAPI nuclear stain (blue fluorescence). Upper and lower rows are, respectively, the fluorescence and phase photomicrographs. Scale bar = 50 µm and applies to all. This experiment was repeated two additional times with similar results.

Given that TO9 is known to upregulate ABCA1 expression (29), that probucol inhibits human macrophage extracellular cholesterol deposition (15), and that TO9 stimulates extracellular cholesterol deposition by ABCG1−/− mouse macrophages (Fig. 2), we considered the possibility that with TO9 treatment, ABCA1, in addition to ABCG1, would contribute to macrophage extracellular cholesterol deposition. If ABCA1 contributes to extracellular cholesterol deposition by TO9-treated cholesterol-enriched mouse macrophages, then probucol should inhibit the component of cholesterol deposition stimulated by TO9, as probucol inhibits ABCA1 but does not inhibit ABCG1 (27,28,30). We tested this by incubating cholesterol-enriched wild-type mouse macrophages with TO9 in the presence and absence of probucol. We observed that TO9 increased MAb 58B1 immunofluorescence 2.5-fold compared with macrophages incubated with AcLDL alone (Fig. 3). However, when probucol was added to AcLDL + TO9, there was no significant difference in MAb 58B1 immunofluorescence compared with macrophages treated with AcLDL alone. Thus, probucol blocked the increment of macrophage extracellular cholesterol deposition that was stimulated by TO9, consistent with ABCA1 mediating a portion of this macrophage cholesterol deposition.

We further tested the function of ABCA1 in macrophage cholesterol deposition by incubating TO9-treated cholesterol-enriched ABCG1−/− macrophages with probucol. Given that ABCG1 would not be contributing to cholesterol deposition in these macrophages, we expected that probucol should completely block macrophage cholesterol deposition through its inhibition of ABCA1. That is what we observed, in that probucol blocked the TO9-stimulated cholesterol deposition that occurred with cholesterol-enriched ABCG1−/− macrophages (Fig. 4). Quantified MAb 58B1 immunofluorescence levels for macrophages incubated with AcLDL + TO9 + probucol were similar to macrophages incubated with AcLDL (not shown).

Next, we directly confirmed that ABCA1 functioned in macrophage cholesterol deposition. Cholesterol deposition was partially reduced in TO9-treated cholesterol-enriched ABCA1−/− macrophages compared with TO9-treated cholesterol-enriched wild-type macrophages (Fig. 5). MAb 58B1 immunofluorescence levels of ABCA1−/− macrophages were 34 ± 2% of wild-type macrophages. This partial reduction would be expected if both ABCA1 and ABCG1 were contributing to macrophage cholesterol deposition by TO9-treated cholesterol-enriched wild-type macrophages, because TO9 stimulation of ABCA1 could not occur with the ABCA1−/− macrophages.
Lastly, we tested the effect of probucol on cholesterol deposition by TO9-treated cholesterol-enriched ABCA1−/− macrophages. If probucol's effect of inhibiting macrophage cholesterol deposition was mediated by ABCA1, then we would expect no effect of probucol on macrophage cholesterol deposition by these macrophages. Indeed, that is what we observed (Fig. 6). There was no quantitative difference between MAb 58B1 immunofluorescence of macrophages incubated with AcLDL + TO9 and macrophages incubated with AcLDL + TO9 + probucol.

Fig. 6. ABCA1−/− mouse bone marrow-derived macrophages were incubated for 4 days with AcLDL (50 µg/ml) + TO9 (5 µM) without or with probucol (10 µM) before cultures were immunostained with anti-cholesterol microdomain MAb 58B1 (green fluorescence) and DAPI nuclear stain (blue fluorescence). Upper and lower rows are, respectively, the fluorescence and phase photomicrographs. Scale bar = 50 µm and applies to all.

DISCUSSION While in some studies ABCA1 mediates cholesterol efflux to mature HDL as well as nascent HDL (11,31, and references contained therein), this cholesterol efflux has been attributed to ApoA-I that dissociates from the mature HDL and then interacts with ABCA1 generating nascent HDL. In this scenario, nascent HDL functions as the true cholesterol acceptor (32). Recently, this point of view has been challenged based on no evidence for dissociation of ApoA-I from HDL3b particles, a very efficient acceptor of cholesterol effluxed by ABCA1 (31). Previously we reported that ABCG1 mediates macrophage deposition of cholesterol into the extracellular matrix (15). Our new finding that ABCA1 as well as ABCG1 mediate macrophage deposition of cholesterol into the extracellular matrix can explain how ABCA1 and ABCG1 both mediate cholesterol efflux to mature HDL: by mature HDL mobilizing extracellular cholesterol, which can occur even in the absence of macrophages (17).

Although both ABCA1 and ABCG1 mediated deposition of cholesterol into the extracellular matrix, they functioned independently. Absence of one or the other did not eliminate cholesterol deposition by TO9-treated cholesterol-enriched macrophages. A block in cholesterol deposition would be expected if the two proteins were functioning in a sequential fashion. Rather, elimination of either protein partially decreased the extent of cholesterol deposition compared with that occurring with TO9-treated cholesterol-enriched wild-type macrophages. Thus, there was an additive effect of ABCA1 and ABCG1 in mediating macrophage cholesterol deposition. Similarly, ABCA1 and ABCG1 produce an additive effect in their mediation of reverse cholesterol transport in vivo (3).

ABCA1 and ABCG1 induction of cholesterol microdomains that label with MAb 58B1 in the plasma membrane of human macrophages and the extracellular matrix surrounding human and mouse macrophages may occur through enrichment of the plasma membrane with cholesterol (8,33). Cholesterol microdomains form in both model and cell membranes when these membranes are enriched with cholesterol (23,32-35). This is due to lateral phase separation of cholesterol within the membrane as certain critical membrane cholesterol concentrations are reached. We previously showed that SU6656, a Src kinase inhibitor, causes human macrophage cholesterol microdomains to accumulate in association with the plasma membrane rather than deposit into the extracellular matrix (17). Thus, there could be a two-step process in which ABCA1 and ABCG1 mediate transport of cholesterol to the plasma membrane, and then some other process mediates shedding of these microdomains into the extracellular matrix. In support of an independently regulated two-step process, we have observed that cholesterol-enrichment of fibroblasts induces plasma membrane-associated cholesterol microdomains that do not shed (16). Furthermore, cholesterol enrichment of human macrophages grown on certain substrates also blocks the shedding process of plasma membrane cholesterol microdomains detected with MAb 58B1 (unpublished observation).

The cholesterol microdomains we detect here in the extracellular matrix could be related to previously observed plasma membrane-associated structures that form with cholesterol enrichment of cells or increased expression of cellular ABCA1. Upregulation of ABCA1 in fibroblasts and macrophages induces the formation of ApoA-I binding to plasma membrane-associated (≤200 nm diameter, generally spherical) structures (34,35). Possibly also related to the extracellular lipid particles that we have observed are the lipid-containing binding sites for ApoA-I that underlie cultured J774 mouse and THP-1 human macrophages (36).
Without liver X receptor stimulation of cholesterol efflux ABC transporters, ABCA1 mediates extracellular cholesterol deposition by human macrophages, as deposition is eliminated by probucol, an ABCA1 inhibitor (17), while ABCG1 mediates extracellular cholesterol deposition by mouse macrophages (15). A similar difference in mouse and human macrophage efflux to HDL has been reported (11).

In conclusion, we have shown that in addition to ABCG1, ABCA1 independently mediates deposition of cholesterol into the extracellular matrix by cholesterol-enriched macrophages. Our findings show a novel function for both ABCA1 and ABCG1 that results in excretion of cholesterol from the cell that is not mediated by formation of classical HDL cholesterol acceptor lipoproteins. While macrophage export of excess cholesterol into the extracellular matrix may be a protective cellular mechanism, if not mobilized through reverse cholesterol transport, buildup of this extracellular cholesterol possibly promotes atherosclerosis.
2017-11-09T20:10:20.231Z
2015-09-01T00:00:00.000
{ "year": 2015, "sha1": "cafac24b13c3744439f4700f661697abb445311d", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1194/jlr.m060053", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "67f46c8960ae80e82404c77c276517a035f4a633", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
210637453
pes2o/s2orc
v3-fos-license
Effect of Pipe Diameter and Inlet Parameters on Liquid CO2 Flow in Transportation by Pipeline with Large Height Difference: Fire prevention and extinguishing and CO2 sequestration in coal mine gob require continuous transportation of liquid CO2 in pipelines with large height difference (from ground to underground). However, the temperature and pressure variation of liquid CO2 in pipelines with large height difference is still unclear, which hinders the design of a liquid CO2 pipeline transportation system. The influence of pipe diameter and inlet parameters (temperature and pressure) on the variation of temperature and pressure of liquid CO2 along a 1000 m vertical pipeline was studied in this paper. The study found that for each pipeline diameter considered there existed a range of flow rates where safe flow conditions could be ensured, at which no phase transition occurs throughout the length of the pipeline. When the transporting flow is larger than the maximum limit flow, phase transition occurs dramatically, which will lead to a sudden drop in temperature and pressure. When the transporting flow rate is lower than the minimum limit flow rate, phase transition of CO2 occurs slowly along the pipeline. According to the requirement of underground fire prevention and extinguishing for transporting flow rate and the economic cost of the pipeline system, the optimum diameter is 32 mm, and the corresponding safe transporting flow range is 507-13,826 kg/h. In addition, when the inlet pressure is constant, if the inlet temperature is too high, phase transition of CO2 occurs dramatically at the entrance. For a 1000 m vertical pipe with a diameter of 32 mm, when the inlet pressure is 14 bar, 16 bar, 18 bar, 20 bar, 22 bar, or 24 bar, the corresponding maximum allowable inlet temperatures are −30 °C, −26 °C, −23 °C, −19 °C, −16 °C, and −13 °C, respectively. This research provides significant guidance for the safe transportation of liquid CO2 from the coal mine surface to underground.

Introduction Coal can produce heat by compound reaction with oxygen (physical adsorption, chemical adsorption, and chemical reaction), and when the heat cannot be dissipated to the surrounding environment in time, the accumulation of heat will lead to the continuous increase of coal temperature and cause it to reach the ignition point. This phenomenon is called spontaneous combustion of coal [1][2][3][4]. The spontaneous combustion of coal is one of the main hazards faced by coal mines. It mainly occurs in the gob of underground mines with the generation of toxic flammable gases, which can cause gas and coal dust explosions. Thus, the spontaneous combustion of coal seriously threatens the safety of coal mining [5,6]. Liquid CO2 injection is a popular method of fire prevention and extinguishing, because a large amount of thermal energy is absorbed during gasification of liquid CO2, which can significantly reduce the temperature of the fire area, while the gaseous CO2 can also inert the area to stifle the fire source [7][8][9][10][11]. At present, two main methods are used for injection of liquid CO2 into gob. One is to transport small storage cylinders containing liquid CO2 from the ground to the vicinity of the underground gob, and then to inject CO2 into the fire prevention and extinguishing area [12]; the other is to drill holes from the ground to the underground gob and directly inject liquid CO2 through the drilled holes [13].
The first method is complicated in operation and cannot inject CO2 into the gob continuously over a long period or at a large flow rate, so its fire prevention and extinguishing effect is poor. The second method can inject CO2 at a large flow rate, but only directionally into a fixed position of a single gob; it is therefore mainly used after ignition, when the location of the ignition source is clear, and it is not suitable for deep mines. Therefore, in order to realize real-time, high-flow injection of liquid CO2 into whole gobs at different mining depths and improve the efficiency of fire prevention and extinguishing, it is necessary to construct a liquid CO2 pipeline transportation system with a large height difference from ground to underground. Moreover, the construction of such a pipeline transportation system is also the prerequisite for fully utilizing gobs for the storage and mineralization of CO2. Controlling CO2 emissions is key to solving the problem of global warming [14,15]. The gob formed after coal mining, especially an old deep gob, has good gas tightness and CO2 adsorption capacity, which offers an important route for low-cost underground storage of CO2. There are about 7000 coal mines with a large number of gobs in China, so there is a good prospect for sequestering CO2 in these gobs. At present, many scholars are developing new materials to inject into gobs, which can not only prevent and extinguish fire but also mineralize a large amount of CO2 to realize CO2 sequestration [16-22]. However, the precondition for achieving large-scale CO2 sequestration in gobs is transporting CO2 to the gob in large quantities. The distance from the surface to the underground gob is about 1-5 km. Previous studies have shown that over this distance range the cost of transporting liquid CO2 is the lowest compared with transporting gaseous CO2 or supercritical CO2 [23]. Therefore, in order to make full use of the large CO2 storage capacity of gobs, it is also necessary to construct a liquid CO2 pipeline transportation system from ground to underground. At present, many CO2 pipeline transportation systems have been constructed worldwide to inject CO2 underground for CO2 enhanced oil recovery (EOR) or enhanced coal bed methane production (ECBM). For example, the Bravo CO2 long-distance pipeline for carbon capture and storage (CCS)-EOR was constructed between New Mexico and Texas, with a diameter of 0.51 m (20") and a length of 351 km [24,25]. The Weyburn CO2 monitoring and storage project in Canada established a pipeline for transporting supercritical CO2 from the Great Plains Coal Gasification Plant to Weyburn, with a diameter of 305-356 mm and a length of 328 km [25,26]. The Sleipner project in Norway established a CO2 pipeline with an annual capacity of about 1 million tons and a length of 4 km to transport CO2 to saltwater reservoirs 800-1000 m below the North Sea [27,28]. However, these pipeline systems are mainly used to transport supercritical CO2, and research on liquid CO2 pipeline transportation systems, especially for pipelines with a large height difference, is scarce. When building a pipeline transportation system for CO2, it is necessary first to grasp the law of temperature and pressure variation of CO2 in the pipeline. Figure 1 is the phase diagram of CO2, which shows that the phase of CO2 depends on both temperature and pressure.
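Where the phase boundary matters, the check against a CO2 phase diagram such as Figure 1 can be scripted with an open-source property library. The following is a minimal sketch and not part of the original study (which used Aspen HYSYS): it assumes pure CO2 and the CoolProp package, and treats a state as liquid when its pressure lies above the saturation pressure at its temperature.

```python
# Sketch: check whether a (T, P) state of pure CO2 lies in the liquid region,
# i.e., inside the shaded area of a CO2 phase diagram such as Figure 1.
# Assumes the open-source CoolProp package (pip install CoolProp); the study
# itself used Aspen HYSYS instead.
from CoolProp.CoolProp import PropsSI

T_CRIT = PropsSI("Tcrit", "CO2")      # ~304.13 K
T_TRIPLE = PropsSI("Ttriple", "CO2")  # ~216.59 K

def is_liquid_co2(T_celsius: float, P_bar: float) -> bool:
    """True if pure CO2 at (T, P) is a compressed/subcooled liquid."""
    T = T_celsius + 273.15
    P = P_bar * 1e5  # bar -> Pa
    if not (T_TRIPLE < T < T_CRIT):
        return False  # solid/sublimation region or supercritical temperature
    P_sat = PropsSI("P", "T", T, "Q", 0, "CO2")  # saturation pressure at T
    return P > P_sat  # above the saturation line -> liquid

print(is_liquid_co2(-20.0, 22.0))  # True: typical storage-cylinder condition
print(is_liquid_co2(-20.0, 14.0))  # False: below ~19.7 bar saturation at -20 C
```

At −20 °C the saturation pressure of CO2 is close to 20 bar, so the 22 bar, −20 °C cylinder condition used later in the simulations sits only narrowly inside the liquid region, which is why small temperature or pressure excursions in the pipe can trigger gasification.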
Only when the temperature and pressure are controlled within the shaded area of Figure 1 does CO2 remain in the liquid phase. During transportation, the temperature and pressure change dynamically, and CO2 is prone to gasification. Gasification of liquid CO2 causes severe vibration of the pipeline and rapidly absorbs a large amount of heat, which affects the mechanical properties of the pipeline and can easily form dry ice particles inside the pipe that disrupt normal transportation. At the same time, high-pressure bubbles in two-phase flow can easily damage pumps and other equipment, leading to paralysis of the transportation system. Zhang et al. [23] found that gasification occurs easily in the transportation of liquid or supercritical CO2, resulting in a large pressure drop or even blockage. Teh et al. [29] studied the effects of burial and ambient temperature on liquid and supercritical CO2 pipelines and found that, in order to avoid phase transformation during transportation, pumps/compressors are needed in the transportation pipeline system. For coal mines, the mining depth is generally 200-1000 m. To build a liquid CO2 pipeline transportation system from the ground to the underground, a large height difference must be taken into account. Therefore, it is necessary to study the temperature and pressure changes of liquid CO2 in a pipeline transportation system with a large height difference, and the measures to prevent gasification in the process. It is difficult to adjust the temperature and pressure of CO2 in the vertical roadway of a coal mine by installing equipment. Only by optimizing the pipe diameter, the inlet parameters (temperature and pressure of liquid CO2 at the inlet of the pipe), the insulation method and other measures can the temperature and pressure of liquid CO2 in a pipeline with a large height difference be kept within the shaded area of Figure 1. Many scholars have studied methods for determining the optimum diameter of a CO2 pipeline transportation system. For example, Mohitpour et al. [30] constructed a calculation model for the optimal diameter based on the energy balance equation of flowing CO2. Zhang et al. [23,31] derived a formula for calculating the optimal pipe diameter, based on Peters, from an economic point of view, involving factors such as pipeline cost, pump/compressor power, heat preservation and maintenance. For the determination of inlet parameters, some studies have determined the optimum values by analyzing the temperature and pressure changes along the pipeline for different inlet parameters in combination with economic factors. For example, Zhang et al. [23] studied, at different inlet temperatures and pressures, the pressure drop along the pipeline from the power plant to the injection point in a CO2-ECBM project and the number of booster stations that needed to be installed along the pipeline, and on this basis determined the optimum inlet parameters. Witkowski et al. [32] studied the variation of pressure drop and density at different inlet temperatures under isothermal and adiabatic conditions, and determined the influence of inlet temperature on the maximum safe transportation distance.
These studies clarify the influence of pipe diameter and inlet parameters, and methods for determining their optimum values, from the perspectives of economy and of temperature and pressure along the pipeline, which has important guiding significance for the construction of a liquid CO2 pipeline transportation system from the coal mine surface to the gob. However, data on the influence of pipe diameter and inlet parameters on the temperature and pressure changes along a liquid CO2 transportation pipeline with a large height difference are still scarce, and such data are the basic premise for constructing a liquid CO2 pipeline transportation system for a coal mine. Therefore, the main purpose of this study was to analyze the temperature and pressure variation of liquid CO2 along large height difference pipelines with different pipe diameters and inlet parameters, so as to determine how to ensure, by controlling pipe diameter and inlet parameters, that the CO2 remains liquid in a pipeline with a large height difference. Firstly, using Aspen HYSYS® V7.2 commercial simulation software (Aspen Technology, Inc., Bedford, MA, USA) and based on whether a phase change occurs in vertical pipelines of different diameters when transporting liquid CO2 at different flow rates, the limit flow range for each pipe diameter was studied, and the optimum diameter was determined in combination with the liquid CO2 flow required for underground fire prevention and extinguishing. Then, using the optimal diameter obtained in the first step, the effect of inlet temperature and pressure on the transport of liquid CO2 in the vertical pipeline was studied and the optimal inlet parameters were determined. This study provides a theoretical basis for the construction of a liquid CO2 pipeline transportation system from the coal mine surface to underground.

The Liquid CO2 Pipeline Transportation System from Coal Mine Ground to Underground

The liquid CO2 pipeline transportation system from the coal mine ground to underground is shown in Figure 2 and mainly includes the liquid CO2 cylinders, pipeline, valves and so on. The liquid CO2 used in coal mines is mainly transported from nearby chemical plants, power plants and the like, so liquid CO2 storage cylinders are set up on the ground at coal mines [12]. The pipeline comprises three parts: the ground pipeline, the vertical pipeline and the underground pipeline. The vertical pipes are laid through vertical intake or return air shafts in coal mines. The temperature and pressure variation of liquid CO2 along the vertical pipeline is the main subject of this study.

Method of Studying the Variation of Temperature and Pressure along the Pipeline

Aspen HYSYS® V7.2 commercial simulation software (Aspen Technology, Inc., Bedford, MA, USA) was used to study the effect of pipe diameter and of inlet temperature and pressure on the variation of liquid CO2 temperature and pressure in a pipeline with a large height difference. Many scholars have used this software for related research, which supports its accuracy and reliability for studying temperature and pressure variation along pipelines transporting CO2. For instance, Teh et al. [29] used the software to study the pipeline transport of CO2. Luo et al. [33] used Aspen HYSYS® to study the gas pipeline network in the Humber area, UK; their results were compared with those of PIPE-FLO® and found to be in good agreement. Firstly, the steady-state model was used for simulation.
The molecular weight, boiling point, ideal fluid density, critical point of CO2, etc. were input. The Peng-Robinson (PR) equation of state was used for the physical property calculations [34], as it can rigorously handle any single-, two- or three-phase system and is efficient, reliable and widely applicable. Li et al. [35] concluded that the PR equation has high accuracy for pure CO2 or for CO2 containing hydrogen sulfide or methane impurities. Luo and Teh et al. [29,33] also used the PR equation to simulate the pipeline transport of CO2, which showed that the calculation results are reliable. Mohitpour et al. compared the values predicted by different equations of state with the critical temperature and pressure measured by Dusckek et al. [36,37], and found that the predictions of the PR equation of state were very close to the experimental data. All the PR equations used in this study are in the standard form in HYSYS, without any modifications. Then, the composition of the fluid (pure CO2), the pipeline parameters (material, pipe diameter, length, etc.), the thermodynamic parameters (insulation layer, environmental medium, ambient temperature, etc.) and other parameters were input. Considering the costs of the pipeline, pipe installation and maintenance, a smaller pipe diameter is more suitable for liquid CO2 transportation. According to the pipe diameter standard stipulated by the General Administration of Quality Supervision, Inspection and Quarantine of China, inner diameters of 15 mm, 20 mm, 25 mm, 32 mm, 40 mm and 50 mm were selected in order to study the influence of pipe diameter [38]. The pipeline material was set as mild steel with a roughness of 4.572 × 10⁻⁵ m [26,29,39,40], ignoring the influence of local features such as valves and elbows in the simulation. Because deep seam mining seldom exceeds 1000 m, the length and the height difference of the pipeline section between inlet and outlet were both 1000 m in the simulation (i.e., a 1000 m vertical pipeline). The overall ambient temperature was set to 20 °C (the ambient temperature in a vertical mine roadway is generally about 20 °C), the environmental medium was air, and there was no insulation outside the pipeline. For the calculation of heat loss, the standard method from HYSYS was used. When studying the flow range of pipes with different diameters, the inlet pressure was set to 22 bar and the inlet temperature to −20 °C. This is because storage cylinders are usually used to store liquid CO2 on the surface of coal mines, and the pressure in storage cylinders is generally in the range of 14-24 bar, while the storage temperature is generally −20 °C or lower [41-44]. In order to determine the influence of pipe diameter and flow rate on liquid CO2 transportation, it was analyzed whether a phase transition occurred in pipelines of different diameters when transporting CO2 at different mass flow rates. The specific procedure was as follows: (1) Set the pipe diameter, and sequentially set the predetermined flow rates (i.e., 500, 1000, 2000, 5000, 10,000, 20,000, 50,000, 100,000 kg/h) to obtain phase state data at different positions along the transport path. (2) When the phase transition does not occur at a flow rate m1 but occurs at the next larger tested flow rate m2, narrow the interval between m1 and m2 step by step by the dichotomy; once the interval is sufficiently small, take m1 as the maximum limit flow. For determining the minimum limit flow, the same method was used.
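The stepwise narrowing in step (2) is an ordinary bisection and is easy to express in code. A sketch, with the HYSYS pipeline run abstracted behind a hypothetical black-box predicate `phase_transition_occurs` (not a real API; one call stands for one steady-state simulation at a given mass flow):

```python
# Sketch of the dichotomy (bisection) used to pin down the maximum limit flow.
# `phase_transition_occurs` is a hypothetical stand-in for one steady-state
# HYSYS run; it is assumed to return True when two-phase flow appears
# anywhere along the 1000 m vertical pipe.

def max_limit_flow(phase_transition_occurs, lo=500.0, hi=100_000.0, tol=1.0):
    """Largest flow (kg/h) in [lo, hi] with no phase transition.

    Precondition: no transition at `lo`, transition at `hi`, as established
    by the coarse sweep over 500 ... 100,000 kg/h described in the text.
    """
    m1, m2 = lo, hi  # m1: safe flow, m2: flow with phase transition
    while m2 - m1 > tol:
        mid = 0.5 * (m1 + m2)
        if phase_transition_occurs(mid):
            m2 = mid
        else:
            m1 = mid
    return m1  # take m1 as the maximum limit flow


# Toy usage with a fabricated threshold in place of the simulator:
print(max_limit_flow(lambda m: m > 15_000.0))  # converges to ~15,000 kg/h
```

The minimum limit flow is found with the same routine, with the roles of the two bracket values swapped (transition at the low end, no transition at the high end).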
In this way, accurate safe transportation flow ranges for pipes of different diameters can be obtained, and the optimal pipe diameter can be determined by combining these with the CO2 flow demand for fire prevention and extinguishing in typical mines. When studying the influence of inlet parameters on CO2 transport, the inlet pressure of the 1000 m vertical pipeline was set to 14 bar, 16 bar, 18 bar, 20 bar, 22 bar and 24 bar, respectively. Based on whether two-phase flow occurs during transportation, the maximum allowable inlet temperature corresponding to each inlet pressure was determined step by step by the dichotomy, similarly to the method used to determine the limit flow. The other boundary conditions were set as in the analysis of the influence of pipe diameter and flow.

Results and Discussion

The limit transporting flow corresponding to each diameter was determined by studying the variation of temperature and pressure of liquid CO2 along pipelines with different diameters and a large height difference, based on whether two-phase flow occurs.

Maximum Limit Transporting Flow of Pipelines with Different Diameters

In determining the maximum safe transporting flow, it was found that when the flow rate is below the maximum limit flow rate, a small change in flow rate does not lead to a significant change in temperature and pressure, and the differences between flow rates cannot be clearly distinguished on temperature and pressure versus distance diagrams. When the flow rate is above the maximum limit flow rate, the opposite holds. In order to show clearly the temperature and pressure changes along the pipeline at different flow rates, the results for flow rates of 0.4, 0.7, 1, 1.01, 1.02 and 1.03 times the maximum limit flow rate are shown in Figures 3 and 4. When the flow rate is 0.4 or 0.7 times the maximum limit flow rate, no phase transition occurs along the pipeline. It can be seen that with increasing pipeline depth, the temperature of the liquid CO2 rises gradually. This is the result of heat exchange between the pipeline and its external environment [32]. Under the same pipeline parameters, a higher flow rate means a higher CO2 flow velocity and a shorter heat-absorption time for a given mass flowing through the whole pipeline, so the larger the flow rate, the smaller the temperature rise. For instance, at 0.4, 0.7 and 1 times the maximum limit flow rate in the 15 mm diameter pipeline, the liquid CO2 temperature at the end of the pipeline reaches 20.3 °C, 13.3 °C and 6.2 °C, respectively (the inlet liquid CO2 temperature is set to −20 °C). When the flow rate exceeds the maximum limit flow rate, a phase transition occurs in the CO2 flow, and the higher the flow rate, the earlier the phase transition occurs. Taking the 15 mm pipe diameter as a representative example, at flow rates of 1.01, 1.02 and 1.03 times the maximum limit flow rate, CO2 changes from liquid to gas very quickly at 940 m, 800 m and 680 m along the pipeline, respectively. This process absorbs a large amount of heat instantaneously, resulting in a sudden drop in the temperature of the CO2 at the initial gasification point, which appears as a turn of nearly 90° in the temperature curve. Figure 4 shows the pressure curves of liquid CO2 along the pipeline at different flow rates.
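The pressure behaviour in Figure 4 is easier to read with the decomposition used in the next paragraph in mind: a static (hydrostatic) gradient plus a friction gradient. A numerical sketch, not the paper's calculation — HYSYS evaluates these gradients internally, and the density, friction factor and flow values below are assumed, illustrative numbers:

```python
# Sketch: the two contributions to the pressure gradient in a vertical
# downward pipe, assuming the standard hydrostatic term and a
# Darcy-Weisbach friction term. Illustrative values only.
import math

def pressure_gradients(mass_flow_kg_h, rho=1030.0, D=0.015, f=0.02, g=9.81):
    """Return (static, friction) gradients in Pa per metre of depth.

    rho : liquid CO2 density, kg/m^3 (illustrative value near -20 C)
    D   : inner pipe diameter, m
    f   : Darcy friction factor (assumed constant here)
    """
    A = math.pi * D**2 / 4.0
    v = (mass_flow_kg_h / 3600.0) / (rho * A)  # mean velocity, m/s
    static = rho * g                           # pressure gain with depth
    friction = -f * rho * v**2 / (2.0 * D)     # pressure loss to friction
    return static, friction

# Higher flow -> much larger friction gradient, as described in the text:
for m in (2000, 5000, 10000):
    s, fr = pressure_gradients(m)
    print(f"{m:>6} kg/h: static {s/1000:.1f} kPa/m, friction {fr/1000:.2f} kPa/m")
```

For a downward pipe the static term raises the pressure with depth while friction lowers it, which is why the higher-flow cases in Figure 4 show a much smaller net pressure rise.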
When the flow rate is 0.4 or 0.7 times the maximum limit flow rate, the pressure of the liquid CO2 increases gradually along the pipeline, but the higher the flow rate, the smaller the pressure rise. When the diameter of the pipeline is 15 mm, the pressures at the end of the pipeline reach 100.8 and 80.0 bar, respectively. In addition, it can be seen that the pressure rise is very small at the maximum limit flow rate. Pressure along the pipeline is mainly determined by two factors, namely the static gradient and the friction gradient. The static gradient is mainly determined by mining depth and fluid density. The friction gradient is determined by the friction strength and is directly related to the pressure drop [32]. When the sum of the friction gradient and the static gradient is positive, the higher the sum, the faster the pressure drops; when the sum is negative, the smaller it is, the faster the pressure rises. Figure 5 shows the variation of friction gradient and static pressure gradient with pipeline depth in the 15 mm diameter pipeline at flow rates of 0.4, 0.7, 1, 1.01, 1.02 and 1.03 times the maximum limit flow rate. For the conditions without two-phase flow (i.e., 0.4, 0.7 and 1 times the maximum limit flow rate), the influence of pipe depth on the friction gradient is very small, but the static gradient increases with the transportation depth. Additionally, the flow rate has a great influence on the friction gradient: the higher the flow rate, the greater the friction gradient. At the end of the pipeline, the friction gradient at the maximum limit flow rate is 8.3 times that at 0.4 times the maximum limit flow rate. The static gradient decreases with increasing flow rate, but only slightly (within 2 kPa/m). When the flow rate is large, the pressure drop caused by the large friction gradient exceeds the gain from the static pressure gradient, and can even cause the pressure at the end of the pipeline to be lower than the inlet pressure. When the flow rate is 1.01, 1.02 or 1.03 times the maximum limit flow rate, the pressure drops sharply after the phase transition occurs. This is due to the rapid increase of the volume flow rate in the pipeline caused by the appearance of gas after the phase transition (the gas volume of a given mass of CO2 is hundreds of times its liquid volume), which leads to a rapid increase in friction resistance.

Minimum Limit Transporting Flow of Pipelines with Different Diameters

The results presented in Section 3.1.1 show that the smaller the flow rate, the greater the heat exchange between the CO2 and the surrounding environment, resulting in a faster temperature rise along the pipeline. Therefore, it can be inferred that when the flow rate of liquid CO2 is very small, a phase transition will also occur during transport, and there is a minimum limit transporting flow below which two-phase flow cannot be avoided. When the flow rate of liquid CO2 is small, the changes of liquid holdup, temperature and pressure along the vertical pipeline are shown in Figures 6 and 7. Figure 6 shows that the minimum limit flow rates are 336, 389, 417, 461, 504 and 555 kg/h for pipe diameters of 15, 20, 25, 32, 40 and 50 mm, respectively. Gasification of liquid CO2 occurs during transportation as the flow rate continues to decrease.
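The reason low flow rates gasify — longer residence time, hence more heat pickup per unit mass — can be illustrated with a single-phase heat balance. This is a back-of-the-envelope sketch, not the paper's model: the overall heat transfer coefficient U and the liquid heat capacity are assumed round values, and gasification itself is ignored.

```python
# Sketch: steady-state warming of liquid CO2 along a pipe of length L,
# assuming a constant overall heat transfer coefficient U (illustrative;
# the paper lets HYSYS handle the heat-loss calculation).
import math

def outlet_temperature(m_kg_h, T_in=-20.0, T_amb=20.0, L=1000.0, D=0.015,
                       U=10.0, cp=2500.0):
    """Outlet temperature (deg C), single-phase liquid assumed throughout.

    T_out = T_amb - (T_amb - T_in) * exp(-U*pi*D*L / (m*cp)),
    with U in W/(m^2 K) and cp in J/(kg K) as assumed round numbers.
    """
    m = m_kg_h / 3600.0  # kg/s
    ntu = U * math.pi * D * L / (m * cp)  # number of transfer units
    return T_amb - (T_amb - T_in) * math.exp(-ntu)

for flow in (300, 1000, 5000):
    print(f"{flow:>5} kg/h -> outlet ~ {outlet_temperature(flow):.1f} deg C")
# The smaller the flow, the closer the outlet approaches ambient temperature,
# pushing the CO2 toward the saturation line and hence toward gasification.
```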
According to the variation of liquid holdup along the pipeline (Figure 6), there are three kinds of phase behaviour of CO2 in the 1000 m vertical pipeline: liquid CO2 throughout; liquid CO2 followed by gas-liquid two-phase CO2 after the phase transition occurs; and liquid CO2, then gas-liquid two-phase CO2 after the phase transition, and then gas-phase CO2 after complete gasification. For a pipeline with a diameter of 15 mm, when the inlet flow rate is 336 kg/h, the liquid holdup over the whole pipeline is 1, meaning that the CO2 remains in the liquid phase; but when the inlet flow rate is reduced to 316 kg/h, the liquid holdup decreases continuously from the 40 m position to the end of the pipeline (where it is 0.0603), indicating that liquid CO2 is gasified gradually over this section; that is, liquid CO2 is transported before 40 m and gas-liquid two-phase CO2 thereafter. When the inlet flow rate decreases further to 296 kg/h, gasification begins at 40 m and the liquid holdup drops to 0 at 900 m, indicating that liquid CO2 is transported before 40 m, gas-liquid two-phase CO2 between 40 m and 900 m, and gas-phase CO2 thereafter. The change of liquid holdup is consistent with that of temperature and pressure. For instance, when liquid CO2 is transported in a pipeline of 15 mm diameter at 296 kg/h, it remains liquid until 40 m. Over this section the temperature rises rapidly, because the temperature difference between the CO2 and the surrounding environment is large in the initial stage and the low flow rate allows ample heat transfer between the CO2 and the surroundings. The pressure rises almost linearly over this section because the transport medium is pure liquid CO2, whose density is little affected by the changes in temperature and pressure: the stable density keeps the static pressure rise rate steady and keeps the CO2 velocity along the pipeline, and hence the frictional pressure drop, steady; at the same time, the very low flow rate makes the frictional pressure drop especially small. These two aspects together make the pressure rise almost linear. From 40 m to 900 m, the liquid CO2 continues to gasify, and the temperature rises slowly or even decreases. The main reason is that the heat absorbed by CO2 gasification offsets the temperature rise caused by heat transfer from the surrounding environment. Because the specific heat capacity of gaseous CO2 is much lower than that of liquid CO2, the temperature is even lower in the later stage when the liquid holdup is low. Meanwhile, the density of the fluid in the pipeline decreases due to the gradual gasification of the liquid CO2, which partly impedes the increase of static pressure, while the pressure drop increases due to the increased friction between the flow and the pipe wall. The rate of pressure rise therefore decreases dramatically, and the pressure may even fail to increase with decreasing altitude; that is to say, the pressure decreases.
After 900 m, the temperature of the CO2 rises rapidly because of the lower specific heat capacity of fully gasified CO2: convective heat transfer with the surroundings makes the temperature of the CO2 rise faster, and the temperature rise slows near ambient temperature owing to the smaller temperature difference between the CO2 and the ambient environment. The density of completely gasified CO2 is very low, which makes the rate of static pressure rise very low, while the frictional pressure drop of CO2 after gasification is much higher than that of the liquid state, so the pressure falls. Comparing Figures 3 and 4 with Figures 6 and 7, it can be seen that a phase transition occurs when liquid CO2 is transported at very high or very low flow rates, but with obvious differences between the two cases. At high flow rate, the phase transition is abrupt, and the rate at which CO2 is converted into gas in the pipe is very fast. Conversely, at low flow rate the phase transition of liquid CO2 is a slow process. The results of Section 3.1.1 show that the pressure drop of CO2 before two-phase flow occurs is much larger at high flow rate than at low flow rate, which is one of the reasons why the phase change rate at high flow rate is much higher. The volume flow rate of CO2 produced by gasification at high mass flow rate is also much higher than at low mass flow rate, which further increases the pressure drop of the flow in the pipeline. These two factors lead to a rapid increase of the pressure drop after gasification at high flow rate, which drives the rapid phase transition. At low flow rate, by contrast, the pressure drop is very small before two-phase flow occurs, and after gasification begins the pressure drop does not rise rapidly; under this condition the two-phase flow is mainly caused by ample heat transfer with the surrounding environment. The endothermic effect of gasification inhibits the heating caused by this heat transfer and slows the rate of the phase transition, which is why the phase transformation is fast at high flow rate and slow at low flow rate. In order to further improve safety, the minimum limit flow rate and the maximum limit flow rate determined from whether two-phase flow is generated were multiplied by 110% and 90%, respectively, to give the limit values for actual transport flow. All the results are listed in Table 1. In order to ensure effective fire prevention and extinguishing in coal mines, the liquid CO2 transport system is required to have a capacity of 1-10 t/h [41,42]. Table 1 shows that pipe diameters of 32, 40 and 50 mm can meet this requirement. Considering the economic cost, a 32 mm pipe diameter was determined to be the optimal diameter.

Optimum Inlet Temperature and Pressure

The temperature and pressure of the liquid CO2 at the entrance of the pipeline section affect the temperature and pressure changes along the pipeline [32]. Only by obtaining the optimum inlet parameters that avoid a phase change along the pipeline can one determine whether it is necessary to install a temperature controller or a booster pump upstream of the pipeline entrance.
According to the results of Section 3.1, a pipe with an inner diameter of 32 mm and a flow rate of 5 t/h was used to study the optimum inlet temperature and pressure. Figures 8 and 9 show the change of temperature and pressure along the pipeline at different inlet temperatures and inlet pressures. When the inlet pressure is set to 14 bar and the inlet temperature is −30 °C, −32.5 °C or −35 °C, the temperature rise curve is very smooth and nearly linear, and it can be judged that no phase transition occurs during transport. The three lines gradually approach one another as the depth increases, and the lower the inlet temperature, the higher the rate of temperature rise. This is because the ambient temperature is constant and a lower inlet fluid temperature produces a larger temperature difference, so the heat transfer rate is faster over the whole depth range. However, when the inlet temperature is −25 °C or −27.5 °C, the CO2 is in the gas phase from the starting position onwards, and both temperature and pressure drop rapidly. This is because the volume flow of CO2 in the pipeline is then very high, resulting in a very large pressure drop. At the same time, according to the basic state equation PV = nRT, where P is the pressure, V is the gas volume, n is the amount of substance, T is the absolute temperature and R is the gas constant, when V, n and R are fixed, a decrease in pressure must be accompanied by a decrease in temperature. From the above analysis, when the inlet pressure is 14 bar, the maximum allowable inlet temperature is −30 °C. The maximum allowable inlet temperatures under the other pressure conditions were obtained by the same method. The maximum allowable inlet temperatures corresponding to the six inlet pressures are shown in Figure 10. As shown in Figures 8 and 9, for each inlet pressure the changes of fluid temperature and pressure along the pipeline are basically the same. At the maximum allowable inlet temperature, the endpoint temperature rises with increasing inlet pressure. For example, when the inlet pressure and temperature are 24 bar and −13 °C (the maximum allowable inlet temperature), the endpoint temperature and pressure are 5.8 °C and 104.9 bar, respectively; when they are 14 bar and −30 °C (the maximum allowable inlet temperature), the endpoint temperature and pressure are −4.0 °C and 108.6 bar, respectively. The endpoint temperature and pressure differences between the two conditions are 9.8 °C and 3.7 bar, far below the inlet differences of 17 °C and 10 bar. Moreover, simple calculations based on Figure 10 show that for every 2 bar increase in inlet pressure, the maximum allowable inlet temperature can be raised by about 3-4 °C, indicating that the requirement on the inlet temperature becomes less strict as the inlet pressure rises.
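The "about 3-4 °C per 2 bar" statement can be checked directly against the six reported (inlet pressure, maximum allowable inlet temperature) pairs. A sketch (the data points are from the text; the least-squares fit itself is only an illustration):

```python
# Sketch: least-squares line through the reported (inlet pressure,
# maximum allowable inlet temperature) pairs for the 32 mm, 1000 m pipe.
import numpy as np

P = np.array([14, 16, 18, 20, 22, 24], dtype=float)           # bar
T_max = np.array([-30, -26, -23, -19, -16, -13], dtype=float)  # deg C

slope, intercept = np.polyfit(P, T_max, 1)
print(f"~{slope:.2f} deg C per bar -> ~{2 * slope:.1f} deg C per 2 bar")
# The slope is about 1.7 deg C/bar, i.e. roughly 3-4 deg C per 2 bar,
# consistent with the statement in the text.
```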
Discussion on Safe Pipeline Transportation of Liquid CO2

According to Section 3.1, for pipelines of different diameters there is a flow interval that avoids a phase transition of the liquid CO2. Therefore, when designing a liquid CO2 pipeline transportation system, the pipe diameter and its corresponding maximum and minimum limit flow rates should be determined according to the actual demand for liquid CO2 flow. In actual pipeline transportation, it is easy to keep the flow below the maximum limit flow. However, in many cases it is not possible to guarantee that the flow stays above the minimum limit, for example at the beginning and end of transportation. After transportation ends, a large amount of liquid CO2 remains in the pipe while the flow rate is zero. In these cases, the temperature of the liquid CO2 rises rapidly and gasification occurs. To solve this problem, it is necessary to study the corresponding countermeasures. In Section 3.2 it was found that suitable settings of inlet pressure and temperature can avoid the phase transition of liquid CO2 in a pipeline with a large height difference: for every 2 bar increase in inlet pressure, the maximum allowable inlet temperature can be raised by about 3-4 °C. When designing the system, whether to adopt pressure regulation or temperature control can be decided from the perspective of economic cost. In addition, only three parameters were considered in this study: pipe diameter, flow rate, and inlet temperature and pressure. Many other factors, such as ambient temperature, type and thickness of the insulation layer, pipe inclination and pipe roughness, also affect the temperature and pressure changes of liquid CO2 during transportation. They are important factors in determining the safe transportation of liquid CO2, and their impact requires further in-depth research.

Conclusions

The influence of pipe diameter and of inlet temperature and pressure on the temperature and pressure along a 1000 m vertical pipeline transporting liquid CO2 was studied. Based on whether the liquid CO2 undergoes a phase change during transportation, the transportation flow range of a 1000 m vertical pipeline was determined for different diameters. According to the flow rate required for underground fire prevention and extinguishing and the economic cost, the optimum pipe diameter is 32 mm, with a transporting flow range of 507-13,826 kg/h. When the transporting flow is larger than the maximum safe transporting flow, the liquid CO2 undergoes a dramatic phase change, which leads to a sudden drop in temperature and pressure and affects the mechanical performance of the pipeline. When the transporting flow rate is lower than the minimum safe transporting flow rate, the liquid CO2 undergoes a slow phase change along the pipeline. In addition, the maximum allowable inlet temperature corresponding to each of the different set inlet pressures was determined. When the inlet pressure is constant, a too-high inlet temperature causes a dramatic phase change of the liquid CO2 at the entrance of the pipeline. For every 2 bar increase in inlet pressure, the maximum allowable inlet temperature can be raised by about 3-4 °C.
2019-10-24T09:12:05.859Z
2019-10-16T00:00:00.000
{ "year": 2019, "sha1": "03d8154a9d63943527df00782dc9616eba480e4b", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2227-9717/7/10/756/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "3ea61b567395ae1ce2a818ba96bce07a7e06388d", "s2fieldsofstudy": [ "Environmental Science", "Engineering" ], "extfieldsofstudy": [ "Environmental Science" ] }
204953072
pes2o/s2orc
v3-fos-license
The goat β-casein/CMV chimeric promoter drives the expression of hLF in transgenic goats produced by cell transgene microinjection

There is growing interest in the application of lactoferrin (LF) as a drug or food additive for animals and humans. The objective of this study was to produce transgenic cloned goats that would serve as living bioreactors, expressing high levels of recombinant human LF (rhLF) in their milk. We designed a pCL25 expression vector containing the goat β-casein/CMV chimeric promoter in order to facilitate rhLF expression. This pCL25-rhLF-Neo vector was microinjected into goat fetal fibroblasts. G418 selection and PCR analysis were used to identify transgenic donor cells suitable for somatic cell nuclear transfer (SCNT). After SCNT and embryo transplantation, goats harboring the hLF gene were produced, as confirmed via PCR and Southern blotting. The average rhLF concentration in milk from this transgenic goat was 3.89 mg/ml as determined via ELISA. We also used an optimized buffer to effectively elute high-purity (95.8%) rhLF from a cation-exchange column, with the recovered rhLF exhibiting high biological activity. The findings of this study demonstrate that it is possible to generate a transgenic goat harboring the hLF transgene driven by the goat β-casein/CMV chimeric promoter. This represents an initial step towards the production of rhLF, potentially allowing for industrialized purification in the future.

Introduction

There is an increasing need for the production of recombinant therapeutic proteins generated via a range of transgenic techniques, the optimal approach being to produce these recombinant proteins in bioreactors such as bacteria, yeast, plants, mammalian cells or transgenic animals (1-7). Among these bioreactors, mammary gland bioreactors in transgenic animals offer the advantage of being fully compatible with humans and of having been approved by the FDA (8). Such mammary gland-derived recombinant proteins have already been implemented for clinical use (9). For example, recombinant human antithrombin III (ATryn®) is produced from the milk of transgenic goats (10). Mammary gland bioreactors are highly advantageous for the production of proteins that require post-translational modifications in order to mediate their stability or activity (11). Producing recombinant proteins in mammary glands therefore represents a more profitable approach to the production of recombinant human proteins. Pantano et al suggested that relatively few mammary cells in transgenic animals ultimately express recombinant proteins (12), underscoring the urgent need to determine how to bolster the in vivo expression of these recombinant proteins using optimized expression vector systems. Achieving high-level production of recombinant proteins in the milk of transgenic animals depends upon ensuring high-level transcription of the introduced cDNA. This makes it essential to select appropriate cis-acting elements, including promoters and enhancers, for the introduced genes. Large quantities of β-casein protein are produced by goats during lactation in response to hormonal stimulation, with β-casein concentrations being 43% higher in goat milk than in bovine milk (13,14). A binding site for STAT5a is thought to lie in the −300 bp region of the goat β-casein promoter, and this binding site mediates responses to lactogenic hormone stimulation (15).
This lays the theoretical foundation for the selection of a goat β-casein promoter, allowing for the efficient expression of proteins in mammary glands. Although the goat β-casein promoter has been widely used to drive the transcription of many recombinant proteins in transgenic goats, the expression of these proteins has not been sufficiently high for commercial applications. A variety of approaches have been employed in an effort to boost the mammary expression of these recombinant proteins, including the use of distal regulatory elements/large genomic DNA fragments (16), insulators (17), matrix-attached regions (18), and targeted site integrations (19). The cytomegalovirus (CMV) promoter is a high-efficiency promoter/enhancer widely used for transgene expression in cells. Zarrin et al found this promoter to be more efficient than alternatives such as the SV40, Rous sarcoma virus (RSV) and Vλ1 promoters in certain B-cell lines (20). There are few reports, however, on the use of a goat β-casein/CMV chimeric promoter to facilitate protein production in the mammary glands of medium and large transgenic animals. The properties of a given protein determine its purification strategy. LF is a cationic protein, making it well suited to purification via cation-exchange chromatography (21,22). This approach is widely used for bovine LF purification by bLF-producing companies. Concanavalin A affinity chromatography and metal ion affinity chromatography are also viable strategies for purifying LF owing to its glycosylation and Fe3+-binding activity (23,24). In this study, we generated a transgenic goat harboring the human lactoferrin transgene driven by a chimeric goat β-casein/CMV promoter. This animal was generated using goat fetal fibroblasts microinjected with the pCL25-rhLF-Neo vector as SCNT donor cells, allowing for mammary gland-specific transgene expression (25,26) while retaining the biological characteristics necessary for better efficacy as a drug or food additive. We additionally conducted ELISAs, western blotting and antibacterial activity assays to confirm that human lactoferrin was efficiently expressed in transgenic goat milk while retaining its normal biological activity.

Materials and methods

Ethics statement. Animal experiments and procedures were performed in accordance with the Guide for the Care and Use of Laboratory Animals (Ministry of Science and Technology of the People's Republic of China) and approved by the Animal Care and Use Committee of Yangzhou University, Yangzhou, China [license no. SYXK(Su)2017-0044]. A total of 50 female dairy goats (45-60 kg, 13-18 months old; Jiangsu Academy of Agricultural Sciences, Nanjing, China) used in the current study were raised at room temperature (25±2 °C) with a 12 h day/night cycle and allowed free access to food and water. All animals were anesthetized during surgery using xylazine hydrochloride injection (0.001-0.002 ml/kg) purchased from Huamu Animal Health Products Co., Ltd., with all possible effort made to reduce their pain, distress and suffering.

Lactoferrin expression vector construction. Human lactoferrin (GenBank: KT006756.1) cDNA was synthesized by GenScript (China), containing 5' and 3' terminal XhoI sites.
The sequence encoding the mature lactoferrin peptide was fused to both the goat β-lactoglobulin signal peptide and the Kozak translation initiation sequence. The synthesized lactoferrin gene was cloned into the pCL25 vector (generated in-house) containing the goat β-casein/CMV chimeric promoter and a Neo-selectable cassette in the goat β-casein 3' genomic region near the vector NotI site (Fig. 1). NotI and SalI were used for vector digestion, and a QIAquick Gel Extraction kit (28704; Qiagen, Germany) was used to purify the resultant fragments.

Cell culture and transgene expression. A 30-day-old fetus was surgically removed from a Saanen dairy goat and used to generate fibroblasts. Briefly, fetal tissue was cut into small fragments following removal of the internal organs, head and limbs, and these fragments underwent 0.05% trypsin-EDTA-mediated digestion. Fibroblasts were then isolated from the supernatant portion of this digestion and grown in DMEM/F12 (SH30023.01; HyClone) containing 10% FBS (SH30406.01; HyClone) and 1% penicillin-streptomycin (SV30010; HyClone) at 37 °C in a 5% CO2 humidified incubator. Cells underwent passaging at 80% confluency, and after the second passage cells were aliquoted and frozen in freezing medium containing 10% DMSO (D2650; Sigma) and 20% FBS. Aliquots of cells were frozen, and once they grew to 80% confluency they were microinjected with 5 ng/µl of the purified pCL25-rhLF-Neo DNA fragment using an Eppendorf InjectMan (NI2; Eppendorf) and then cultured as above. After 24 h, the cells were grown in selective medium containing 800 ng/µl G418 (SV3006801; HyClone) for approximately 10 days. A cloning ring was used to isolate and expand healthy colonies following selection, and these clones were subcultured as above. Some of these subcultures were frozen for long-term storage, while the rest were screened for expression of the transgene via polymerase chain reaction (PCR).

Generation of a transgenic goat via SCNT. Somatic cell nuclear transfer (SCNT) was conducted after identifying transgene-positive clones. Enucleated oocytes served as recipients for transgenic cell nuclei, and a super electro cell fusion generator (EGFE21; Nepa Gene) was used for the SCNT procedure. Next, 5 µmol/l ionomycin (I0634; Sigma) and 7.5 µg/ml cytochalasin B (C6762; Sigma) in M16 medium (M7292; Sigma) were used to activate the reconstructed embryos for 5 min, after which the embryos were treated with M16 containing 2 mmol/l 6-dimethylaminopurine (D2629; Sigma) and 7.5 µg/ml cytochalasin B for 5 h. After activation, the embryos were implanted into recipient goats, and after 1 month these animals were assessed via ultrasound to confirm pregnancy. Approximately 150 days later, kids were delivered naturally. For all kids, a small portion of the ear was taken as a biopsy sample, from which DNA was isolated and used to assess transgene incorporation by PCR and Southern blotting. The DL2000 DNA marker (3427A; Takara) was purchased from Takara Biotech (Dalian) Co., Ltd.

Confirmation of transgene integration in cloned goats. Genomic DNA from the transgenic donor cells and from ear tissues of cloned goats was prepared with an EasyPure Genomic DNA kit (EE101-1; TransGen). A pair of primers specific for human LF was used to determine which donor cells had incorporated the transgene. The primer sequences used were: CMV-crhLF-1: ATG GGC GTG GAT AGC GGT TTG AC and CMV-crhLF-2: CCA CCA TCA AGG GTC ACA GCA TCG.
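As a quick design sanity check, the melting temperatures of these screening primers can be estimated with Biopython (a sketch; the sequences are taken from the text, but the Tm model and its default parameters are an assumption — the authors do not report how their primers were designed). The same check applies to the genotyping primer pair listed next.

```python
# Sketch: estimate melting temperatures of the CMV-crhLF screening primers
# with Biopython's nearest-neighbour Tm model (default parameters assumed;
# illustrative only).
from Bio.SeqUtils import MeltingTemp as mt

primers = {
    "CMV-crhLF-1": "ATGGGCGTGGATAGCGGTTTGAC",
    "CMV-crhLF-2": "CCACCATCAAGGGTCACAGCATCG",
}
for name, seq in primers.items():
    print(f"{name}: {len(seq)} nt, Tm ~ {mt.Tm_NN(seq):.1f} C")
```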
To identify transgenic goats, the following primers were instead used: CMV-grhLF-1: ATA GTA ACG CCA ATA GGG A and CMV-grhLF-2: GGT CGC AGT TTG TAG GG. The following conditions were used for all PCR reactions: 94 °C for 5 min; 33 cycles of 94 °C for 1 min, 56 °C for 1 min and 72 °C for 38 sec; and a final hold at 72 °C for 10 min. Product sizes for the two primer pairs were 450 and 775 bp, respectively. Sequencing analysis was performed by Sangon Biotech (Shanghai) Co., Ltd. Southern blotting was next employed to confirm specific transgene DNA integration in goats. Ear biopsy-derived DNA from transgenic and wild-type (WT) goats underwent overnight BamHI digestion, with the pCL25-rhLF-Neo plasmid serving as a positive control. A digoxigenin-labeled probe was amplified by PCR with the CMV-grhLF-1 and CMV-grhLF-2 primer pair. Samples underwent 4 h agarose gel electrophoresis, after which the DNA was transferred to a nylon membrane (11417240001; Roche) for blotting. This membrane next underwent probe hybridization for 18 h, followed by incubation with biotin-labeled mouse anti-digoxin for 30 min. A positive band was expected to be approximately 9.1 kb in size. Southern blotting reagents were purchased from Boster Co. (Wuhan).

ELISAs. Milk samples collected from lactating transgenic and WT goats were centrifuged at 10,000 × g for 30 min at 4 °C for whey isolation. The samples were diluted 1:10 with PBS and used for ELISA reactions with a rabbit anti-lactoferrin polyclonal primary antibody (dilution 1:2,000 in 4% FBS/PBS; Sangon; D121815-0025). After incubation at 37 °C for 1 h and three washes with PBS-T (PBS containing 0.05% Tween-20), wells were probed with an HRP-conjugated goat anti-rabbit secondary antibody (dilution 1:1,000 in 4% FBS/PBS; Sangon) at 37 °C for 1 h. The samples then underwent a colorimetric reaction upon adding TMB substrate to each well, after which absorbance at 450 nm was measured with a microplate reader (Rayto). Protein standards (SRP6519; Sigma) were used for standard curve generation, and sample rhLF concentrations were determined from this standard curve.
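The standard-curve step of an ELISA like this is commonly done with a four-parameter logistic (4PL) fit; the following is a sketch with scipy, in which both the model choice and the numbers are illustrative assumptions — the paper reports using a standard curve but not which curve model was fitted.

```python
# Sketch: four-parameter logistic (4PL) fit of an ELISA standard curve and
# back-calculation of a sample concentration from its A450 reading.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # a: response at zero dose, d: response at saturating dose,
    # c: inflection point (EC50), b: slope factor
    return d + (a - d) / (1.0 + (x / c) ** b)

std_conc = np.array([0.031, 0.125, 0.5, 2.0, 8.0])   # standards, ug/ml (made up)
std_a450 = np.array([0.08, 0.22, 0.70, 1.55, 2.10])  # absorbances (made up)

popt, _ = curve_fit(four_pl, std_conc, std_a450,
                    p0=[0.05, 1.0, 1.0, 2.3], maxfev=10000)

def conc_from_a450(y, a, b, c, d):
    # invert the 4PL: x = c * ((a - d)/(y - d) - 1) ** (1/b)
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

sample = conc_from_a450(1.2, *popt)
print(f"sample ~ {sample:.2f} ug/ml, before applying the 1:10 dilution factor")
```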
Purification of rhLF from the transgenic cloned goat. Fat and other undissolved substances were removed from the milk via centrifugation at 12,000 × g for 30 min at 4 °C, after which the pH was reduced to 4.0 in order to facilitate casein precipitation. The milk was then centrifuged at 4 °C at 100,000 × g for 1 h. The supernatant pH was adjusted to 6.0 using acetic acid, after which the samples were centrifuged again as in the previous step. A protein purification system (ÄKTAprime Plus; GE Healthcare) was used for all purification steps. First, after equilibration of the column in Buffer A (0.07 mol/l HAc, pH 3.1), samples were loaded onto a HiTrap Capto S cation exchange column (1 ml; GE Healthcare) and the bound proteins were eluted via a step gradient of 30 and 100% Buffer B (0.5 mol/l NaCl, 0.07 mol/l Tris-HAc, pH 7.5). The eluate at 100% Buffer B was collected and desalted on a Bestdex G-25 column (1.6 × 2.5 cm; BestChrom) for use in downstream experiments. SDS-PAGE analysis was then used to assess protein purity.

Western blotting. Whey was isolated as above and boiled in SDS loading buffer for 10 min, after which samples were electrophoretically separated on 12% polyacrylamide Tris-glycine gels. The gels were stained using Coomassie Brilliant Blue G-250, and sample purity and concentrations were determined using Tanon GIS software (Bio-Tanon). For western blotting, separated proteins were transferred onto PVDF membranes (F019531; Sangon). The membranes were blocked using 5% BSA/TBST overnight at 4 °C and then probed with a polyclonal rabbit anti-LF antibody (1:2,000 in 10% FBS/TBST; Sangon) at 37 °C for 1.5 h. Next, an HRP-conjugated goat anti-rabbit IgG secondary antibody (1:1,000 in 10% FBS/TBST; Sangon) was used to probe the blots at 37 °C for 1 h. The blots were then washed three times in TBST (20 mM Tris base, 137 mM NaCl, 0.05% Tween-20), and protein bands were detected with an ECL substrate solution (Millipore Corporation) according to the provided directions.

Bacteriostatic activity assessment. Lactoferrin has been shown to inhibit the growth of both Gram-positive and Gram-negative bacteria, including important pathogenic species such as Helicobacter pylori, Staphylococcus aureus, Shigella flexneri, enteropathogenic Escherichia coli (EPEC) and Salmonella enterica serovar Typhimurium (2,27-31). We therefore selected E. coli K88 grown on LB plates as a model strain to test the bacteriostatic activity of transgenic goat milk. A single E. coli K88 colony was transferred into 15 ml LB culture medium and shaken overnight at 37 °C. The resultant bacteria were then streaked evenly across an LB agar plate using a cotton swab. After a 4 h incubation at 37 °C, the size of the growth inhibition zone surrounding a given sample was used to assess bacteriostatic activity.

Results

Lactoferrin expression vector construction. We successfully inserted the LF cDNA fragment into the pCL25 vector, producing a pCL25-rhLF-Neo recombinant vector that was found to be of the appropriate size based on restriction enzyme digestion and sequencing. Sequencing confirmed that the rhLF coding region was fused in-frame upstream of pCL25.

Fetal goat fibroblast transfection. Goat fetal fibroblasts were microinjected with 5 ng/µl of the purified pCL25-rhLF-Neo DNA fragment and selected using G418. A total of 16 G418-resistant transgenic cell clones were obtained by single-cell amplification. Of these, 9 were determined to express the hLF transgene via PCR using the CMV-crhLF-1 and CMV-crhLF-2 primers (Fig. 2). In total, 56.25% (9/16) of the cell clones had confirmed pCL25 integration. Clone no. 4 cells served as SCNT donors, as they exhibited the best viability and quality.

SCNT-mediated transgenic cloned goat generation. SCNT was used to produce transgenic goats as previously described (27). We transferred 65 reconstructed embryos into 5 recipient goats, leading to the birth of a single female kid that was found by PCR to harbor the pCL25-rhLF-Neo transgene (Table I). The female kid was designated LF-1 (Fig. 3) and, upon reaching sexual maturity, was mated with a WT buck.

Confirmation of transgene integration in cloned goats. PCR and Southern blotting were used to confirm that the transgenic goat had integrated the rhLF transgene. Human LF-specific primers (CMV-grhLF-1 and CMV-grhLF-2) were used to identify the cloned goats by PCR, while digoxigenin-labeled versions of these primers were used as probes for Southern blotting. Following PCR, we were able to amplify a 775 bp product, confirming successful rhLF transgene integration in this cloned goat (Fig. 4A). Southern blotting further confirmed this finding (Fig. 4B).

Assessment of milk rhLF expression in transgenic goats. Expression of rhLF in WT and transgenic goat milk samples was next assessed via ELISA. Milk was collected during days 1-30 of lactation following delivery.
We found that the rhLF concentration reached a peak of 4.7 mg/ml on day 4, with an average concentration of 3.89±0.82 mg/ml over days 1-30 of lactation. To assess possible ectopic expression of rhLF in this transgenic goat, rhLF levels in the serum and saliva of the lactating goat were measured via ELISA. There was no indication of rhLF expression in the serum or saliva of this transgenic goat (data not shown).

Purification of rhLF from the transgenic cloned goat. Cation exchange chromatography can be used to separate lactoferrin from milk, as lactoferrin carries a net positive charge. In order to explore the optimal elution conditions for separation and purification of rhLF via cation exchange chromatography, we first assessed the optimal solution conductivity for rhLF. Stepwise elution was used to achieve one-step elution and separation of the target protein. Two elution peaks were obtained from the HiTrap Capto S cation exchange column eluted with a step gradient of 30 and 100% Buffer B. SDS-PAGE and western blotting revealed that high-purity rhLF was successfully collected in peak P3 of the eluent (Fig. 5), with a size of 80 kDa. The concentration of the purified rhLF was found to be 1.25 mg/ml by spectrophotometry (One drop 1000+; Onedrop Technologies, Inc.). The purity was determined to be 95.8% based on densitometric scanning of the SDS-PAGE gel. Western blotting confirmed that the samples from the transgenic goat were identical to native hLF control samples, with a size of approximately 80 kDa (Fig. 6). Bands were absent in the WT control goat sample, as expected.

Assessment of transgenic goat milk bacteriostatic activity. The bacteriostatic activity of the rhLF in the transgenic goat milk was assessed via an agar disc diffusion method, allowing observation of bacteriostatic activity in vitro. Sterile filter paper discs were placed onto agar plates seeded with E. coli K88, and bacteriostatic activity was estimated from the sizes of the inhibition zones surrounding the discs following a 4 h incubation at 37 °C. The results revealed that rhLF from transgenic milk exhibited bacteriostatic activity comparable to that of hLF (inhibition zone diameters of 17 and 19 mm, respectively). WT goat milk served as a negative control, with no inhibition zone evident. We also found that rhLF purified by cation-exchange chromatography exhibited similar bacteriostatic activity (an inhibition zone diameter of 13 mm) (Fig. 7).

Discussion

In this study, we successfully used SCNT to generate a transgenic goat producing rhLF in its mammary cells, using transgenic goat fetal fibroblast cells as donor cells. To date, there have been no previous reports of using fetal fibroblasts microinjected with the rhLF gene as donor cells for SCNT. We detected no abnormalities in the founder transgenic goat or its offspring, indicating no effect of the vector on goat biology. To determine whether the rhLF transgene could be stably transmitted to offspring, the female founder transgenic goat was mated with a wild-type buck, and a single male kid was born. A subsequent PCR assay demonstrated that it was transgenic for rhLF (data not shown), indicating that the rhLF transgene can be inherited by offspring. There are many reports of high-level rhLF expression in transgenic mice, rabbits and cows (32-34). Mice and rabbits, however, are not suitable for large-scale commercial rhLF production due to their limited milk production and short life spans (2-3 years for mice, 8-10 years for rabbits).
Cows are also not appropriate for producing rhLF, because bovine milk contains more allergenic protein than goat milk (35,36). Goats are therefore more suitable as biological mammary reactors for the large-scale production of rhLF, given that goat milk has been reported to contain smaller fat globules and a distinct casein composition relative to bovine milk, making it less allergenic (37,38). At present, many transgenic animals are produced via SCNT or pronuclear microinjection, including sheep (39), goats (40,41), cows (42), mice (43) and rabbits (44). The success rate of SCNT remains low and varies based upon factors such as the vector used, the source of recipient and donor cells, the exact SCNT protocol employed, and the influence of exogenous genes on embryonic development (45-47). The quality of the donor cells is critical for producing transgenic animals via SCNT. Preparation of transgenic animals using electroporation-mediated transfection requires optimization of transfection conditions and is often associated with a high rate of cell death. Cell microinjection, however, avoids these challenges, offering a high integration rate while remaining suitable for genetic engineering and the establishment of transgenic animals. In this study, we improved the process of preparing transgenic goats by using goat fetal fibroblasts microinjected with the rhLF gene as donor cells for SCNT. In previous reports, we constructed various mammary gland-specific vectors containing a CMV enhancer and a chimeric promoter [goat β-casein, bovine αs1-casein, and goat β-lactoglobulin (BLG)] based on milk protein promoter sequences. These vectors allowed for hLF levels of 1.17-8.10 mg/ml in transgenic murine milk, roughly 100,000-fold higher than the levels produced from control promoters (7-40 ng/ml). We also found that the inclusion of the CMV enhancer significantly increased hLF expression in these mice. Use of hLF cDNA did not achieve expression levels as high as those from hLF genomic DNA in these mice (25). Many factors can influence recombinant milk protein expression levels, including copy number, site of chromosomal insertion, and species-specific differences in expression patterns (48,49). rhLF expression levels in the transgenic goat reached 4.7 mg/ml, markedly higher than the levels observed in transgenic goats without a CMV enhancer (50). [Figure 7 legend, displaced here in the original text: whey of human colostrum (×20 concentrated), inhibition circle diameter = 19 mm; 3: whey of rhLF transgenic milk (×20 concentrated), circle diameter = 17 mm; 4: 1 mg purified rhLF, circle diameter = 13 mm; 5: whey of wild-type goat milk (×20 concentrated), circle diameter = 0 mm.] There were no indications of rhLF expression in the serum or saliva of the transgenic goat, as the goat β-casein promoter is expressed only in lactating mammary tissue and not at ectopic sites. This means that there is no potential risk of transgenic animals expressing heterologous proteins outside the mammary gland when using this goat β-casein/CMV chimeric promoter. Using western blotting, we further confirmed that the size of the rhLF expressed in the transgenic goat was roughly 80 kDa, comparable to the size of hLF. The secretion of lactoferrin in milk is directly related to the nutritional status and environmental conditions of the mother, and as such can be improved by improving maternal housing conditions and other factors.
However, for transgenic animals, in addition to these growth conditions and environmental factors, improving the inheritance and stability of foreign genes remains a major challenge. In this study, our aim was to produce transgenic cloned goats as living mammary bioreactors that exhibit a high level of rhLF expression in their milk. An optimized construct is essential in order to achieve high-level expression of recombinant proteins. We used ELISA to confirm the expression of rhLF in transgenic goat milk on days 1-30 of lactation following delivery, revealing that rhLF was continuously expressed in goat milk during this 30-day period. There were no clear decreases in rhLF expression during the lactation period. These results thus clearly show that a transgenic goat carrying the pCL25-rhLF-Neo mammary gland-specific expression vector encoding the goat β-casein/CMV chimeric promoter can express rhLF stably in the mammary gland. At present, phosphate buffer (PB) is widely used as an eluent when extracting lactoferrin via cation-exchange chromatography (23). However, PB readily forms precipitates with common Ca²⁺ ions, Mg²⁺ ions and heavy metal ions, and it can also inhibit certain biochemical processes as well as the activity of most enzymes. PB is thus not an ideal eluent choice when purifying lactoferrin by cation-exchange chromatography. In order to achieve superior purified rhLF activity, we therefore used a commercially available HiTrap Capto S cation exchange column for its effective purification from the milk of a transgenic goat, using the Tris-HAc buffer as an eluent. Similarly to the rhLF purified in other previous reports (24,51), the purity of the rhLF obtained from the transgenic goats was high (≥95.8%). When we assessed the bacteriostatic activity of this rhLF, we found it to be comparable to that of natural hLF, penicillin, and streptomycin, which suggests that rhLF may be an effective antibiotic for future use. In conclusion, we have successfully used SCNT to produce a transgenic goat, with goat fetal fibroblast cells serving as donor cells microinjected with the expression vector pCL25. Our results conclusively demonstrate that the pCL25 vector, which contains the goat β-casein/CMV chimeric promoter, can drive transgenic goats to stably express a biologically active form of rhLF. This study offers an initial strategy for rhLF production for incorporation into drugs or food products, thereby facilitating future studies of this protein.
Protocol for a systematic review of the effects of interventions for vaccine stock management

Background: Inadequate vaccine stock management in health facilities leads to vaccine stock-outs. The latter threatens the success of immunisation programmes. Countries have used various approaches to reduce stock-outs and improve vaccine availability, but we are not aware of a systematic review of these interventions. This protocol describes the methods we will use to assess the effects of existing approaches for improving vaccine stock management.

Methods: We include randomised and non-randomised studies identified through a comprehensive search of peer-reviewed and grey literature databases. We will search PubMed, Cochrane Central Register of Controlled Trials, Embase, Web of Science, PDQ-Evidence and Scopus. We will also search websites of the World Health Organisation (WHO), Global Alliance for Vaccine and Immunisation, PATH Vaccine Resources Library and United Nations Children's Fund. In addition, we will search the WHO International Clinical Trials Registry Platform and reference lists of included studies and relevant reviews. Finally, we plan to do a citation search for included studies. We will use Cochrane recommended methods to screen search outputs, assess study eligibility and risk of bias, and extract and analyse study results. We will use the Grading of Recommendations Assessment, Development and Evaluation (GRADE) tool to assess the certainty of the evidence on the effects of the interventions.

Discussion: We believe that the findings of this review will serve as valuable information for policy makers on ways to improve vaccine stock management and vaccine availability. When vaccine availability is improved, those who need vaccines, especially children, will be adequately protected from vaccine-preventable diseases.

Systematic review registration: PROSPERO CRD42018092215

Electronic supplementary material: The online version of this article (10.1186/s13643-018-0922-3) contains supplementary material, which is available to authorized users.

Background

The success of immunisation programmes depends on a well-functioning supply chain that ensures the constant availability of quality vaccines to the target population [1][2][3]. Effective vaccine stock management is one of the criteria for an effective vaccine supply chain [1,4]. Vaccine stock management at health facility level involves the checking and monitoring of vaccines on arrival at a storage point, during storage and when they are administered to the users [1,2]. Adequate vaccine stock management helps to maintain the quality of vaccines [1,2] and prevent vaccine stock-outs. Vaccine stock-outs refer to the absence of vaccine(s) at the point of service delivery to the patient [3,5]. An analysis of global data on effective vaccine management assessments between 2009 and 2014 showed that most low- and middle-income countries performed below the minimum standard for adequate vaccine stock management [6]. Recent data reported by countries in World Health Organisation (WHO) and United Nations Children's Fund (UNICEF) joint reporting show that each year at least one-third of countries experience one or more vaccine stock-outs lasting for at least 1 month [7]. The most vulnerable groups who suffer the effects of vaccine stock-outs in resource-constrained settings are the urban poor and rural communities who depend on public facilities for health services.
When vaccines are not available, these recipients of public health services are obliged to make repeated and costly trips to health facilities. Ultimately, immunisation targets are not met, universal health coverage remains an elusive dream and lives are lost [8]. Due to the upward trend in the rates of vaccine stock-outs, countries are currently developing approaches to improve vaccine stock management [7,9]. The approaches for improving vaccine availability may include the use of digital systems, such as dashboards, to monitor vaccine stock levels in real time [10][11][12]. These dashboards measure performance and make it visible for managers to make informed decisions [13]. Another vaccine stock management approach involves the crowdsourcing of reports of stock-outs from patients and community volunteers. These reports are then sent to relevant health system structures to elicit system changes for improving vaccine availability [14]. However, we are not aware of a systematic review of these and other potential interventions for improving vaccine stock management.

Objective

We aim to assess the effects of approaches used for vaccine stock management at facility level.

Methods

This systematic review protocol has been prepared according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocol (PRISMA-P) 2015 guideline (Additional file 1).

Registration of the review

We registered the systematic review in the International Prospective Register of Systematic Reviews (PROSPERO) [15].

Eligibility criteria for studies

We will include individually randomised trials, cluster randomised trials, controlled before-after studies, interrupted time series studies and repeated cross-sectional studies. Eligible participants include the healthcare systems which deliver vaccines, healthcare facilities where vaccines are administered, healthcare workers involved in providing immunisation services and recipients of immunisation services. We plan to include interventions targeting recipients or providers of immunisation services. Recipient-oriented interventions may include the involvement of end-users in monitoring vaccine availability at facilities, e.g. using mobile phone services or hotline platforms. Examples of interventions directed at providers of immunisation services include education or training, audit and feedback, prompts or reminders and supportive supervision. We will also include interventions targeting the health system offering immunisation services, e.g. action plans, re-designing (components of) the supply chain and integration with other services. Other interventions intended to ensure vaccine availability, including multi-component interventions, are also eligible for inclusion. We will consider the following as eligible comparisons: standard vaccine stock management practices in the study setting, alternative interventions and similar interventions implemented with different degrees of intensity. Our primary outcomes are vaccine availability and vaccine stock-outs. We will measure vaccine availability as the proportion of vaccination days on which vaccines were available and no one eligible for vaccination was turned back for lack of vaccines, but we will also consider other measures of vaccine availability used by the authors of included studies.
Vaccine stock-out rates in the review will be measured as the percentage of facilities that experienced a stock-out of a specific vaccine that the site is expected to provide, at any point, within a defined period; or other definitions as used in included studies. Our secondary outcomes include acceptability, adverse events and cost of the intervention, as well as other outcomes as reported by included studies.

Data sources

We will develop a comprehensive search strategy for both peer-reviewed and grey literature. We will search the following databases: PubMed, Cochrane Central Register of Controlled Trials (CENTRAL), Embase, WHO Library Information System (WHOLIS), Web of Science, PDQ (Pretty Darn Quick)-Evidence and Scopus. We will also search the websites of WHO, Global Alliance for Vaccine and Immunisation, PATH Vaccine Resources Library and UNICEF. In addition, we will search the WHO International Clinical Trials Registry Platform, reference lists of included studies and related systematic reviews, and citations of included studies. A preliminary search strategy developed for PubMed is found in the Appendix.

Data collection and analyses

Two authors will independently screen the titles and summaries of records retrieved from the search for potentially eligible studies. We will obtain full-texts for all the potentially eligible studies. Two authors will assess these full-text publications for eligibility. Any disagreements between the two authors regarding study eligibility will be resolved by discussion and consensus. A third author will arbitrate any unresolved disagreements. We will provide a table with the characteristics of the included studies, and another of excluded studies with reasons for their exclusion. We will seek additional information for studies with missing information to assist us in our decision-making process. For each included study, two authors will independently extract information using a piloted data extraction form. Extracted data will include study design, participant, intervention and outcome characteristics as well as outcome data. Any differences will be resolved through discussion and consensus. A third author will be consulted to arbitrate if disagreements persist between the two authors. If there are missing data, we will contact study investigators to obtain the missing information. Two authors will independently assess risk of bias in included studies using the appropriate tool for randomised trials [16] and non-randomised studies [17]. Differences in judgement will be resolved by discussion and consensus, with arbitration by a third author. We will present study results as risk ratios for dichotomous data (e.g. frequency of vaccine stock-outs) and as mean differences for continuous data (e.g. duration of vaccine stock-outs). We will combine data from clinically homogenous studies (in terms of designs, participants, interventions and outcomes) using random-effects meta-analysis. However, if we find substantial variation between studies, the findings will be summarised in a narrative format. We will analyse results of interrupted time series studies using regression analysis with time trends before and after the interventions [18,19]. We will assess the certainty of the evidence of effects of interventions using the Grading of Recommendations, Assessment, Development and Evaluation (GRADE) approach [16]. We will look out for and correct any errors made in the analysis of included studies.
For example, if clustering is not addressed in an eligible cluster randomised trial, we will re-analyse the data if sufficient information is available. Otherwise, we will request the necessary data from the authors or attempt to adjust the data for clustering by inflating the standard errors, multiplying them by the square root of the design effect [16]. The adjusted effect will then be added to the meta-analysis. We will assess statistical heterogeneity among study results using the I² statistic. We will consider heterogeneity as substantial if the I² is 50% or more. We will investigate the causes of substantial statistical heterogeneity using subgroup analyses. We will define subgroups based on participant and study design characteristics. We will use the chi-squared test for subgroup differences to assess for subgroup interactions. We will carry out sensitivity analyses, if applicable, on aspects that could potentially affect the meta-analysis results, such as study designs and overall risk of bias. We may also conduct a sensitivity analysis to explore the effects of fixed- versus random-effects analyses for outcomes with statistical heterogeneity [16].

Discussion

This systematic review will examine the effectiveness of existing approaches for managing vaccine stock levels at health facilities, in order to prevent vaccine stock-outs. Study findings will serve as valuable information for policy makers on ways to improve vaccine stock management and vaccine availability. When vaccine availability is improved, target populations will be adequately protected from vaccine-preventable diseases. Furthermore, there will be a reduction in the number of repeated visits that patients have to make in a bid to get vaccinated. This will increase their trust in the health systems. Ultimately, there will be a reduction in deaths caused by vaccine-preventable diseases as well as an improvement in other health outcomes.

Additional file

Additional file 1: PRISMA-P checklist for the protocol. (DOCX 21 kb)
Hybrid Parallelization of Euler-Lagrange Simulations Based on MPI-3 Shared Memory

The use of Euler-Lagrange methods on unstructured grids extends their application area to more versatile setups. However, the lack of a regular topology limits the scalability of distributed parallel methods, especially for routines that perform a physical search in space. One of the most prominent slowdowns is the search for halo elements in physical space for the purpose of runtime communication avoidance. In this work, we present a new communication-free halo element search algorithm utilizing the MPI-3 shared memory model. This novel method eliminates the severe performance bottleneck of many-to-many communication during initialization compared to the distributed parallelization approach and extends the possible applications beyond those achievable with the previous approach. Building on these data structures, we then present methods for efficient particle emission, scalable deposition schemes for particle-field coupling, and latency hiding approaches. The scaling performance of the proposed algorithms is validated through plasma dynamics simulations of an open-source framework on a massively parallel system, demonstrating an efficiency of up to 80% on 131 000 cores.

Introduction

The initialization time, i.e., the time from the beginning of the code execution until the first computation step, plays a critical role in Euler-Lagrangian solvers in a high-performance computing context as it is closely linked with adequate load balancing. Ideally, each processor should receive an equal load to achieve maximum overall simulation efficiency. Accurate load estimation is already challenging in the case of a pure Euler solver as various cell sizes, time steps, local models and boundary conditions must be considered. Nonetheless, there are effective techniques to determine the local load and thus the grid distribution a priori, also known as static load balancing [1,2,3]. For these cases, a prolonged initialization period is acceptable if it results in improved runtime performance. The presence of the Lagrangian phase adds substantial complexity as discrete particles introduce additional load, which is only weakly correlated with the local element sizes. Furthermore, particle concentrations may shift during the simulation, with high fluid and particle loads often occurring at the same mesh location, especially in fluid simulations. Load balancing approaches in Euler-Lagrangian solvers must adapt to these changes during runtime, which is referred to as dynamic load balancing. Over time, various load distribution strategies have evolved, which can generally be classified into two categories: 1) task parallelization and 2) domain partitioning. Task parallelization splits the work along the phase interface, distributing the fluid work and the particle work to different processors. The advantage is that, due to the same nature of work within a phase, both groups of processors can internally subdivide the overall task in an optimal way. Implementations of this approach have been presented e.g., in Refs. [4,5,6]. The downside of task parallelization is the loss of any local connectivity, resulting in large communication effort, great memory requirements or both, rendering it inadequate for massively parallel computation where memory and interconnect bandwidth are scant [7].
Domain decomposition keeps the locality between the two phases intact but requires the load distribution to be performed on the combined work, with examples of this approach published in Refs. [8,9,10,11]. Additionally, communication patterns become unpredictable as a processor may receive elements of the discrete phase but not necessarily transmit them and vice versa [12]. Nevertheless, most massively parallel codes use domain decomposition, an approach we also follow. As the focus of this work is on the discrete phase, an efficient solver for the continuous phase is presumed. Modern CFD solutions require high scalar performance and preferably minimal communication as storage and communication resources are unable to keep up with the steady increase of available computing power [13,14]. High-order codes based on the Discontinuous Galerkin Spectral Element Method (DGSEM) have emerged as a well-suited approach as the fluid phase requires only the exchange of flux information at an element face, leading to a highly efficient numerical scheme while at the same time having dense, local operations [15]. Implementation as a solver for unstructured grids with possibly curved elements facilitates the creation of body-fitted domains even for complex geometries while retaining the high-order accuracy [16]. The presence of the Lagrangian phase necessitates dynamic load balancing, which should be regularly performed as particle loads can heavily shift during the simulation. Hereby, the time spent on the load balancing step has to be kept minimal in order not to counteract benefits in application performance [17]. However, this task is non-trivial. While the DGSEM leads to a highly local scheme for the continuous phase, following the distributed memory approach means that each processor contains only local information on the solution, associated quantities and the mesh, without ready access to adjacent grid information in the case of distributed I/O. The absence of this mesh information prevents full tracking of a particle in the event of it crossing a partition boundary if not remedied. One method that allows the tracking of particles across different partitions is based on the idea of a shared layer of elements surrounding each partition, an idea akin to the ghost cell approach in e.g., finite volume methods. These halo regions contain the geometric information of neighboring cells within a given distance, here referred to as the halo distance, from the local domain and enable the completion of particle tracking on the initial processor, thus delaying the need for communication and requiring only the exchange of particles after having crossed the domain boundary [18]. While the process of halo element identification is straightforward for structured grids, it becomes significantly more complex for unstructured approaches as given within the present framework. Here, the search must be performed in physical space as the grid cells within a spatial region are not trivially mapped to locations in the mesh file [19]. Performing the search exclusively on processor-local mesh information, i.e., an inward search from the processor MPI border, leads to a severe performance bottleneck as the grid information for each cell residing on a single processor needs to be sent to a multitude of other processors within the halo distance, requiring many-to-many or, in the worst case, all-to-all point-to-point communication.
This congests the interconnect infrastructure with the potential to stall code execution for minutes or even hours. As the information which needs to be exchanged grows both with grid size and the number of processors involved in the simulation, the issue only becomes more urgent on modern massively parallel architectures where memory and communication constraints are all the more apparent. Moreover, as runtime load balancing is generally desirable to counter the shifting of particle loads during the simulation, the identification of the halo region must be performed multiple times during a given simulation, thus prompting the need for an efficient scheme which avoids detrimental effects on overall simulation performance. While this constitutes a new challenge for unstructured approaches that is absent in structured grids, it is easily outweighed by the advantages of such unstructured approaches for body-fitted grids in domains of practical relevance. Towards this goal, we present in this work a novel approach to unstructured Euler-Lagrangian simulations based on MPI-3 shared memory. Within this approach, we store information with compute node granularity and perform a multi-step communication-free parallel search on the compute node to identify the elements in the halo region, thereby retaining excellent scaling properties on today's massively parallel supercomputing architectures. Building on these data structures, we present methods for efficient particle emission, scalable deposition schemes for particle-field coupling, and latency hiding approaches. These efforts then give us the ability to conduct high-order simulations of rarefied gas flows at an industrial scale beyond generic test cases. To the best of our knowledge, this is the first unstructured framework enabling massively-parallel Euler-Lagrangian simulations at this problem size. The implementation considered in the present work is open-source and available on GitHub (https://github.com/flexi-framework/flexi and https://github.com/piclas-framework/piclas). The outline of this paper is as follows: The governing equations for non-equilibrium gas flows followed by the DGSEM scheme as well as the theory for particle motion and tracking are given in section 2. A high-level overview of the parallelization strategy is given in section 3. In section 4, we present the shared-memory approach for halo region determination and distribution, thereby shifting the communication load from the processor to the compute node level and alleviating the aforementioned scaling restrictions. The remainder of this section is designated to methods for emission, deposition and latency hiding based on the same shared-memory approach. The test case of an adiabatic box representing the optimum for the parallelization concept as well as near-application cases of a supersonic flow and a gyrotron resonator are discussed in section 5, and the scaling results are presented in section 6. We conclude with a brief summary and give an outlook on further developments in section 7.

Theory

The approaches presented here are applicable to any unstructured Euler-Lagrange code and are already implemented in the two high-order open-source frameworks FLEXI (https://www.flexi-project.org) [20,21] and PICLas [18,22]. Both are actively developed at the University of Stuttgart and share a common code basis for the DGSEM solver, with FLEXI solving a continuous fluid phase prescribed by the compressible Navier-Stokes-Fourier equations while PICLas uses Maxwell's equations for the electromagnetic fields.
FLEXI currently focuses on inertial particles in turbomachinery applications [23], with initial performance for the current methods presented in [24]. The present work focuses on solutions to non-equilibrium gas and plasma flows within the PICLas framework using the Particle-In-Cell (PIC) approach [25,26] as well as a particle-based Bhatnagar-Gross-Krook (BGK) solver [27,28].

Non-equilibrium gas flows

Non-equilibrium gas and plasma flows are generally characterized by possibly charged particles that interact with an electromagnetic field, where the statistical distribution is described by Boltzmann's equation [29,25]

∂f/∂t + v · ∇_x f + (F/m) · ∇_v f = (∂f/∂t)_coll    (1)

Here, f = f(x, v, t) represents the probability distribution function, i.e., the expected particle density at position x with velocity v. In the methods used here, the distribution function is approximated by particles. These can move freely and represent the phase space from a Lagrangian point of view through their location and velocity. The particles interact with each other through the right-hand side described by a collision operator, whereas the interaction for charged particles occurs through the Lorentz force F = F_L. The Lorentz forces are calculated from electromagnetic fields, which are solved on a fixed grid in an Eulerian fashion. Typically, the Boltzmann collision integral is used as collision operator, which effectively gives the change of the particle probability density function caused by binary particle-particle collisions. However, since the Boltzmann collision integral is numerically difficult and time-consuming for various reasons [29], it is often approximated, e.g., by the Fokker-Planck (FP) solution algorithm [30,31] or the BGK approximation [27,28,32], see section 2.3.2. Beyond particle-particle collisions, charged particles experience the Lorentz force

F_L = q (E + v × B)    (2)

Here, q is the electric charge of a given particle, whereas the electric field E and magnetic field B obey Maxwell's equations [33]

∇ × H = ∂D/∂t + j,   ∇ × E = −∂B/∂t,   ∇ · D = ρ,   ∇ · B = 0    (3)-(6)

with D being the electric displacement field, H the magnetic field strength, whereas ρ and j represent the charge and current density, respectively. The field equations from Eq. (6) are thereby solved on a fixed grid, i.e. the Eulerian view. The coupling between the Eulerian view and the Lagrangian view arises on the one hand through the Lorentz force Eq. (2). Here, the fields are interpolated from the solution on the Euler grid to the Lagrangian particles in order to calculate the forces acting on these particles. On the other hand, the charge densities and current densities as source terms of the Maxwell equations on the Euler grid correspond to the zeroth and first moment of the distribution function, i.e., they are obtained by interpolating the particle data to the fixed Euler grid.

Discontinuous Galerkin Spectral Element Method (DGSEM)

In order to enforce charge conservation, Maxwell's equations are cast into the purely hyperbolic Maxwell (PHM) form [34], which can then be solved using the high-order Discontinuous Galerkin Spectral Element Method (DGSEM) [35,36]. DG methods operate on a weak formulation of the conservation equations following the method of lines approach by projecting them onto a space of polynomial test functions in reference space. Collocation of interpolation and integration points yields a highly efficient scheme which is advanced by an explicit Runge-Kutta scheme in time.
PICLas is designed as a solver for unstructured grids, thereby allowing the straightforward creation of body-fitted grids with curved boundaries even for complex geometries. The approximation of the solution by a high-order polynomial in each element ensures that high-order accuracy is retained [16].

Particle Behavior

Particles contribute to the electrical and magnetic field (deposition, see section 4.5) but are simultaneously influenced by the field through the Lorentz force and optionally through particle-particle collisions.

Equation of Motion

Based on eq. (2), the change in position and velocity of each particle is given by the relativistic equation of motion

dx/dt = v    (7)
d(γ m v)/dt = F_L    (8)

with γ being the Lorentz factor given as

γ = 1 / √(1 − |v|²/c²)    (9)

Here, x and v represent the position and velocity of a given particle in physical space, q and m its charge and mass, and c the speed of light.

Bhatnagar-Gross-Krook (BGK)

The BGK operator approximates the collision term in Eq. (1) by a simple relaxation form where the distribution function relaxes towards a target distribution function f_t with a certain relaxation frequency ν:

(∂f/∂t)_coll = ν (f_t − f)    (10)

The original BGK model assumes that the target velocity distribution function is the Maxwellian velocity distribution with the particle density n, particle mass m, temperature T and the thermal particle velocity c = v − u from the particle velocity v and the average flow velocity u [27]. In order to obtain the correct Prandtl number in the flow, more complex target distribution functions must be used. A very frequently used target distribution function that is also applied here is the ellipsoidal statistical BGK target function, see details in Holway Jr [37]. In PICLas, the solution of the BGK equation is performed in a purely Lagrangian and stochastic manner with particles using the stochastic particle Bhatnagar-Gross-Krook (SP-BGK) method as described in [38,39]. It offers an efficient approach to simulate non-equilibrium flows in smaller Knudsen number regimes. All particles within a cell interact with each other through the relaxation process described in Eq. (10). This is similar to the collision process in the well-known Direct Simulation Monte Carlo (DSMC) method [29], which also happens only within a cell. Subsequently, the convection due to the free movement of the particles is modeled together with boundary conditions in the computational domain in order to perform a relaxation process again.

Localization and Tracking

Particles are tracked in physical space for all cases considered within this work. The first step within this approach is the solution of eqs. (7) and (8) to obtain the new particle position. In order to identify a particle crossing an element boundary and its recipient, all faces of the previous element are checked for intersections with the particle path, see fig. 1. This procedure is performed iteratively until no more intersections are found and thus the final element is determined. If an element face represents a boundary face, the corresponding boundary conditions can intuitively be incorporated by adjusting the remaining particle path. For more details and an alternative tracking approach based on localization in the reference space, see [19].

Parallelization Strategy

High-performance clusters are almost exclusively constructed as distributed systems, connecting separate nodes through an interconnect. In general, any interconnect imposes bandwidth starvation compared to local memory while simultaneously incurring latency costs.
Thus, efficient parallelization approaches need to employ two strategies: 1) communication avoidance and 2) latency hiding. By reducing the amount of transferred data, congestion on the interconnect can be alleviated. Performing the communication in a non-blocking manner allows local work to continue, thereby obfuscating the additional latency of the interconnect.

Continuous Phase

Relying on the DGSEM allows PICLas to make extensive use of both strategies. By enforcing a basis with local support, the volume integral becomes a purely local operation and only the surface flux information has to be exchanged on the cell boundaries. To ensure fast initialization times, the unstructured fluid elements are pre-sorted along a space-filling curve (SFC) during mesh generation. SFCs have shown their suitability for the efficient calculation of new distributions during runtime for load balancing purposes in PIC simulations [40,41]. Furthermore, the SFC allows for highly parallel, non-overlapping disk storage access with an arbitrary number of processors [42]. Details on the implementation are given in [21].

Halo Region

Halo regions bring these two strategies from the continuous Eulerian to the discrete Lagrangian phase. Since particle tracking is performed in unstructured physical space, geometric information along the considered particle path must be available at the time of tracking. By enriching the local DG domain with geometric information up to a given physical distance from the domain boundaries, each processor can complete tracking to the final particle position. This halo distance is chosen as the maximum possible distance any particle can travel within a simulation time step, thereby ensuring a processor has all eligible elements accessible while simultaneously generating the minimal number of halo elements. Using this approach, communication is delayed until a particle is found to have left the processor domain after accounting for boundary conditions, and only the minimum required information, i.e., the particle properties including the new particle position, must be communicated. However, this shifts some work from the simulation time-stepping to the routines where geometric and neighboring information must be established or updated. This corresponds to the initialization and any load balancing step, which always also includes an update of this information. PICLas follows the commonly used restart-based load-balancing approach where the simulation is saved to disk and reloaded using an improved load distribution. As a result, this procedure relies on fast initialization times and is aided by the approaches outlined in the previous section. However, given the requirement to work on unstructured meshes, the required halo elements for the particulate phase can only be determined through a search in physical space, even with the mesh elements already pre-sorted along the SFC. As was previously outlined, performing the search and subsequent communication of mesh information using only processor-local information incurs severe performance penalties stemming from the differences in processor work from load distribution and the required many-to-many communication. The latter case in particular is exacerbated by modern many-core architectures, which consequently limits the scaling of the approach and necessitates the novel approach outlined in section 4.
Load Balancing

An immediate benefit of the improved performance for the restart-based load balancing is the ability to increase the number of load evaluation and, if necessary, balancing steps. Following the classification by Watts and Taylor [43], the load evaluation can be based on the application, i.e., a priori using information on the algorithms involved; on the system, i.e., at runtime using timing information; or on a combination of both. While the application-based approach is simple to implement and successfully used for single-phase simulations with constant computation and communication time per degree of freedom [21], the determination of the correct weights becomes challenging for multi-phase flows. Hence, a hybrid approach is commonly considered more robust. The approach utilized by PICLas relies on runtime measurements of the field solver and particle solver. Load evaluation steps are performed by comparing the runtime per rank at user-defined intervals. For this, the total time spent in the field and particle solver is recorded prior to the load balancing step. Particles are assumed to remain in their element sufficiently long to assign their load to the element they are currently residing in. Thus, the load of a given element is estimated by the combination of the total time spent in the field and particle solver, divided by the local number of elements and the element's contribution to the total tracking steps, respectively. The resulting time for a respective element is then given as

t_elem,tot = t_field + δt_particle · n_particles,elem

If an imbalance exceeding an acceptable threshold is detected, each rank gets assigned a new range of elements such that the deviation of the accumulated load Σ_{i=i_start}^{i_end} t_elem,tot,i per rank from the mean load becomes minimal, where i_start and i_end correspond to the respective element indices along the space-filling curve. More details on this implementation can be found in [44].

Implementation

This section describes the parallel implementation of the halo region search, emission and runtime deposition mechanisms. Following the MPI-3 shared memory paradigm, we store information with compute node granularity and perform subsequent routines in a communication-free way on the shared memory region. This chapter serves to illustrate the allocation of the shared memory window, the mesh distribution and the aforementioned routines in section 4.1, section 4.2 and sections 4.3 to 4.5, respectively. Where applicable, we print the Fortran source code rather than pseudo-code to facilitate implementation in other scientific frameworks. Further information on how the stored mesh information is subsequently used for particle tracking is given in [19].

Shared Memory Allocation

Classical shared memory programming involves OpenMP. However, this approach is limited to single-node cases as OpenMP cannot handle distributed memory. Message Passing Interface (MPI)-3 introduces the concept of shared memory with the MPI Shared Memory (SHM) model. The resulting coding approach is also called "hybrid parallel programming" as it combines the shared memory approach of OpenMP with the distributed memory view of MPI. Memory regions allocated with MPI-3 SHM can be distributed arbitrarily between the processors while being accessible by all processors on the compute node. In our implementation for cache-coherent systems (MPI_WIN_UNIFIED), the shared memory window is allocated only on the compute node root to avoid offset calculations. The shared window is contiguous in memory and thus can be directly read by each process.
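As a minimal sketch of this allocation pattern (not the original listing; program and variable names are hypothetical), the shared window is created on a node-local communicator by the node root alone and subsequently queried by every rank:

PROGRAM SharedWindowSketch
  USE MPI
  USE, INTRINSIC :: ISO_C_BINDING
  IMPLICIT NONE
  INTEGER :: comm_shared, rank_shared, win, disp_unit, ierr
  INTEGER(KIND=MPI_ADDRESS_KIND) :: win_size
  TYPE(C_PTR) :: baseptr
  REAL(KIND=8), POINTER :: elem_data(:)          ! node-shared mesh array
  INTEGER, PARAMETER :: n_elems = 1000

  CALL MPI_INIT(ierr)
  ! Split the global communicator into node-local communicators
  CALL MPI_COMM_SPLIT_TYPE(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0, &
                           MPI_INFO_NULL, comm_shared, ierr)
  CALL MPI_COMM_RANK(comm_shared, rank_shared, ierr)

  ! Only the node root requests memory; all other ranks pass size zero,
  ! which keeps the window contiguous and avoids offset calculations
  win_size = 0_MPI_ADDRESS_KIND
  IF (rank_shared == 0) win_size = INT(n_elems, MPI_ADDRESS_KIND) * 8_MPI_ADDRESS_KIND
  CALL MPI_WIN_ALLOCATE_SHARED(win_size, 8, MPI_INFO_NULL, comm_shared, &
                               baseptr, win, ierr)

  ! Every rank retrieves the node root's base address and maps it onto
  ! a Fortran pointer for direct load access
  CALL MPI_WIN_SHARED_QUERY(win, 0, win_size, disp_unit, baseptr, ierr)
  CALL C_F_POINTER(baseptr, elem_data, [n_elems])

  ! ... non-overlapping writes along the SFC, MPI_WIN_SYNC to publish ...

  CALL MPI_WIN_FREE(win, ierr)
  CALL MPI_FINALIZE(ierr)
END PROGRAM SharedWindowSketch

Because only rank 0 of the node communicator requests a non-zero size, the window is guaranteed to be contiguous and every local rank can address it through the node root's base pointer.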
We do not employ RMA routines for store operations but ensure non-overlapping writes through data distribution along the SFC. MPI_WIN_SYNC calls ensure explicit synchronization and immediate availability of the written information on the compute node. The code for this approach is given in listing 1.

Mesh Distribution

High-order meshes are created with the in-house preprocessor HOPR [45]. The mesh elements are ordered along a space-filling curve and saved in binary HDF5 format for highly parallel access [21], together with likewise ordered side connectivity information and the grid coordinates. PICLas initially determines the number of elements per processor, taking available load balancing information into account. Each processor only accesses mesh information for its region along the SFC and stores it in processor-local memory, thereby maximizing file system parallelism. However, each compute node additionally allocates shared memory sufficient to hold the raw mesh information (as stored in the HDF5 file) and saves its compute node information at the correct offset. Once every processor has finished reading the mesh, the compute node root processors perform a non-blocking IALLGATHERV operation on the interconnect, making use of available hardware offload capabilities. A graphical representation of this procedure is shown in fig. 2. In addition to the unstructured computation grid, a Cartesian background mesh is created during runtime in order to reduce the eligible computational elements when performing particle localization procedures. The number of Cartesian elements in each direction is case-dependent and currently determined by the user. During runtime, particle intersection calculations are performed using either the physical element face corner coordinates in the case of a purely linear mesh or clipped Bézier surfaces for a curved grid. Additional particle information includes, e.g., the face normal vectors, the distance of an element to the nearest boundary, the surrounding mesh node indices, and the tolerance of a curved element in reference space. Some of this information is only calculated given a specific tracking method and whether deposition is desired. Since part of this information varies depending on the mesh distribution and storing it would also inflate the size of the mesh file, these particle metrics are computed during the initialization phase.

Halo Element Search Algorithm

In order to minimize both computational effort and memory requirements, a two-step search algorithm to determine eligible halo elements is performed before calculating the particle mesh metrics. This approach not only alleviates computational effort during the initialization phase but also significantly reduces the memory footprint as the derived metrics are only stored for the local and actual halo elements. For the first step, depicted in fig. 3a, a Cartesian bounding box around all mesh elements local to a compute node is calculated. This bounding box is then extended by the halo distance in each direction. Through projection of the bounding box onto the Cartesian background mesh (BGM), the corresponding limits for the required I,J,K indices are obtained. Next, a similar Cartesian bounding box is created for each mesh element. This bounding box is again projected onto the background mesh, thus creating a mapping from each mesh element to the overlapping BGM cells.
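The projection itself reduces to converting bounding-box coordinates into integer index ranges on the BGM. A minimal sketch, with hypothetical names and a uniform Cartesian background mesh assumed:

SUBROUTINE BoundingBoxToBGM(xmin, xmax, bgm_origin, bgm_dx, n_bgm, imin, imax)
  ! Map a Cartesian bounding box onto the overlapping BGM index range
  IMPLICIT NONE
  REAL(KIND=8), INTENT(IN)  :: xmin(3), xmax(3)   ! bounding box corners
  REAL(KIND=8), INTENT(IN)  :: bgm_origin(3)      ! BGM corner coordinates
  REAL(KIND=8), INTENT(IN)  :: bgm_dx(3)          ! BGM cell widths
  INTEGER,      INTENT(IN)  :: n_bgm(3)           ! BGM cells per direction
  INTEGER,      INTENT(OUT) :: imin(3), imax(3)   ! overlapping I,J,K range
  INTEGER :: d
  DO d = 1, 3
    ! Floor division yields the first and last overlapping BGM cell;
    ! clipping guards against round-off at the domain borders
    imin(d) = MAX(1,        FLOOR((xmin(d) - bgm_origin(d)) / bgm_dx(d)) + 1)
    imax(d) = MIN(n_bgm(d), FLOOR((xmax(d) - bgm_origin(d)) / bgm_dx(d)) + 1)
  END DO
END SUBROUTINE BoundingBoxToBGM

Applied once to the extended compute-node box and once per element, overlap testing then becomes a cheap comparison of integer index ranges.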
The BGM mapping of every element not located on the compute node is then compared against the BGM region previously extended by the halo distance. All elements whose BGM bounding boxes overlap with the extended bounding box are flagged as potential halo elements. Since the global mesh information is available in the MPI-3 shared memory array, this calculation is distributed among all compute node processors through slicing of the space-filling curve. At this point, the potential halo elements need to be further reduced as the Cartesian bounding box arbitrarily extends beyond the compute node local mesh elements. However, comparing the distance of every potential halo element against all available compute node local elements would measurably affect the initialization time. Thus, the number of elements to compare against is reduced by only considering the elements having at least one MPI boundary on the compute node circumference, therefore representing the MPI boundary of the compute node local mesh. The second step, shown in fig. 3b, then calculates the radius of the convex hull of the elements on the MPI boundary and the radius of the potential halo elements. Each potential halo element is compared against each MPI-border element by subtracting the sum of the two radii plus the halo distance from the distance between the two barycenters. If the result is negative, the element is within reach and flagged as a confirmed halo element. A positive value indicates that particles cannot reach the element within a given time increment and the element is subsequently discarded. This process is again distributed among all compute node processors through a uniform partitioning of the potential halo elements. Once every compute node processor indicates that there are no more potential halo elements to check, a mapping containing first the compute node local elements followed by the compute node halo elements is built. This mapping allows for efficient looping when building the derived particle mesh metrics on the reduced mesh.

Figure 4: Local, halo and periodic elements within the halo distance for the adiabatic box, see section 5.1.

Emission

Since particles are tracked in physical space, this approach naturally has to extend to the particle emission as well. In order to maintain good scalability, emission is performed in parallel, with each processor calculating the initial particle positions within the complete emission region. Subsequently, the grid element corresponding to each position has to be identified and the particle sent to the respective processor. However, the emission region might extend beyond the compute node-local mesh even when including the halo region, meaning that the emitting processor cannot uniquely identify the target element. Worse yet, without knowledge of the elements outside the halo region, the emitting processor cannot distinguish between a valid position and one outside of the complete mesh. Yet, it is equally undesirable to retain all elements within the emission region. The solution to the problem is again provided through the BGM. As mentioned in the previous paragraph, there exists a mapping from each BGM element to the overlapping grid elements located within the compute node and halo region. During initialization, each processor additionally provides the number of local elements per BGM cell. This number is summed up across all processors and stored with a mapping containing all processors which overlap with a given BGM cell.
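These counts enable the purely local ownership test used during emission, which is described next. As a minimal sketch (hypothetical names; the counter arrays reside in the node-shared window), a sampled position is locally computable if, and only if, all grid elements overlapping its BGM cell reside on the local compute node:

MODULE EmissionLocality
  IMPLICIT NONE
CONTAINS
  LOGICAL FUNCTION PositionIsLocal(pos, bgm_origin, bgm_dx, nElemsLocal, nElemsTotal)
    REAL(KIND=8), INTENT(IN) :: pos(3), bgm_origin(3), bgm_dx(3)
    INTEGER, INTENT(IN) :: nElemsLocal(:,:,:)   ! node-local elements per BGM cell
    INTEGER, INTENT(IN) :: nElemsTotal(:,:,:)   ! global elements per BGM cell
    INTEGER :: idx(3), d
    ! Locate the BGM cell containing the sampled position
    DO d = 1, 3
      idx(d) = FLOOR((pos(d) - bgm_origin(d)) / bgm_dx(d)) + 1
    END DO
    ! Equal counts mean no off-node element overlaps this BGM cell, so
    ! the element search can be completed without communication
    PositionIsLocal = nElemsLocal(idx(1), idx(2), idx(3)) == &
                      nElemsTotal(idx(1), idx(2), idx(3))
  END FUNCTION PositionIsLocal
END MODULE EmissionLocality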
As the local elements are inherently distributed, this process scales automatically by design. During the particle emission, after an emission position is computed, the processor calculates the associated BGM cell. Next, we compare the number of compute node grid elements mapped to this BGM cell with the total number of grid elements connected to this cell. If the numbers match, the processor flags the position as locally computable. The other positions are gathered and sent to all processors associated with the BGM cell. Next, we perform the search algorithm on the locally computable positions, which additionally acts as latency hiding. After identifying all local particle-to-element mappings, the search of the communicated positions is performed on each processor. Since any position can only correspond to one single element, no further communication is required. All other processors silently discard the position.

Deposition with Shape Function

The charged particles are responsible for the source terms of the field equations and are therefore coupled with the underlying grid on which the field equations are solved. The source terms themselves are determined from the respective moments as described by the distribution function. Here, this is achieved by mapping the particle position and velocity to the grid via shape functions that smoothly distribute the charge and current densities of the particles on the grid, which is referred to as deposition. The cut-off radius of the deposition is determined by the physical problem and can range across multiple elements of the grid as shown in fig. 5; hence, deposition may occur in processor-local elements as well as in elements that belong to different processors or even different nodes. Thus, processors require the communication of either the deposited properties or the particle properties, which are in turn deposited by the receiving processor. Since the presented parallelization concept allows elements to be uniquely globally identified, the particles are deposited by the host process of the particle. Source terms that are possibly deposited in elements of other processors are stored in a separate array for communication. Subsequently, a message is created from this array for all processors into whose elements source terms were deposited, and communicated regardless of which node they are located on. In order to communicate only with processors that can potentially exchange source terms, a list of all reachable processors in the halo region is initially created using the shape function radius for each processor. In order to avoid multiple communications between all processors in this initial process, this is done in a two-step communication procedure. In a first step, each processor sends its list of communication partners to the MPI root of its compute node. These node leaders then gather the information from all corresponding processors and store it in a shared array, so that the information about the necessary exchange processors is available to all processors.
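A minimal sketch of this node-local gathering step (hypothetical names; synchronization of the shared window before reading is omitted for brevity):

SUBROUTINE GatherExchangeProcs(partners, n, comm_shared, shared_counts, shared_procs, max_total)
  USE MPI
  IMPLICIT NONE
  INTEGER, INTENT(IN)    :: n, comm_shared, max_total
  INTEGER, INTENT(IN)    :: partners(n)             ! this rank's partner list
  INTEGER, INTENT(INOUT) :: shared_counts(*)        ! node-shared: list length per rank
  INTEGER, INTENT(INOUT) :: shared_procs(max_total) ! node-shared: concatenated lists
  INTEGER :: rank_shared, nprocs_shared, ierr, i
  INTEGER, ALLOCATABLE :: counts(:), displs(:)

  CALL MPI_COMM_RANK(comm_shared, rank_shared, ierr)
  CALL MPI_COMM_SIZE(comm_shared, nprocs_shared, ierr)
  ALLOCATE(counts(nprocs_shared), displs(nprocs_shared))

  ! Step 1: the node root learns how many partners each local rank reports
  CALL MPI_GATHER(n, 1, MPI_INTEGER, counts, 1, MPI_INTEGER, 0, comm_shared, ierr)
  IF (rank_shared == 0) THEN
    displs(1) = 0
    DO i = 2, nprocs_shared
      displs(i) = displs(i-1) + counts(i-1)
    END DO
  END IF

  ! Step 2: the lists themselves land in the node-shared array, where every
  ! local rank can read them after synchronization
  CALL MPI_GATHERV(partners, n, MPI_INTEGER, shared_procs, counts, displs, &
                   MPI_INTEGER, 0, comm_shared, ierr)
  IF (rank_shared == 0) shared_counts(1:nprocs_shared) = counts
END SUBROUTINE GatherExchangeProcs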
Latency Hiding

The main goal of latency hiding is to allow communication and computation to overlap completely, which means that during the time of communication, parts of the algorithm are already being carried out, so that there is no waiting time during communication. There are two basic problems with latency hiding for PICLas. First, PICLas is a modular toolbox: e.g., PIC can be used as a module with or without a collision term, and within PIC there is the distinction whether electromagnetic or electrostatic simulations should be carried out or which interpolation method between particles and grid should be used. On the collision term side, there is also a wide choice of methods such as DSMC, BGK, FP and others. Each possible combination of modules with different types of time integration has different requirements as to which data must be available and when. Obviously, it is therefore not possible to use a latency hiding method that represents the optimum for all possible module combinations. Therefore, in the following we will only concentrate on the methods used in this work and already described in the theory section. The other problem is that many particle methods such as DSMC, BGK or FP have very sequential structures, so that only a few parts can already be calculated if the information from particles from other cores is not yet available. Most of the time, calculations that are carried out in parallel with communication require a large amount of additional memory, because quantities that would otherwise only be needed locally in the cell have to be stored for all cells so that they are available after communication.

Maxwell-PIC with Shape Functions

Maxwell-PIC within this context refers to the electrodynamic PIC method in which the complete set of Maxwell's equations is solved as described in section 2.1. The time integration scheme used in this example is a 5-stage 4th-order low-storage Runge-Kutta (RK) method [46]. For simulations of this type, there are three MPI communications per time step which require latency masking, i.e., the communication of the flux data for the Maxwell-DG solver, the current and charge densities deposited on the grid as source terms for the DG solver by the shape functions, and the particle data that leave the processor after the movement. In general, the compute time of the discrete phase dominates compared to the continuous phase; hence, the focus of latency hiding is to avoid stalling of the particle routines. A flow chart of one RK stage is given in fig. 6. Performing the deposition as outlined in section 4.5 allows hiding the costly exchange of volume data behind the particle operators. Interpolation, calculation of the Lorentz forces as well as particle tracking are purely local operations. At their end, particles are already assigned to their final processor, so the communication of particle data can start. As the continuous phase requires two communication steps, we start by extrapolating the field data to the element faces and communicate the surface data. This corresponds to (N−1)-dimensional information and thus requires considerably less interconnect time than the previous field data. As was already outlined, the DG volume integral is a purely local operation and is thus performed on the first half of the local elements to hide the communication latency. After receiving the surface data, the numerical flux on the MPI sides is calculated and immediately sent back. Local operations on the inner sides as well as the remaining volume integrals can commence. Once these routines return, the numerical flux on the MPI sides should be received and the surface integral on the MPI sides is calculated. The RK stage concludes by incorporating the particle data into the local arrays.
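The underlying overlap structure, posting the non-blocking exchange first and filling the wait time with purely local work, can be condensed into a self-contained example. The following uses a generic MPI ring exchange, not the actual PICLas routines:

PROGRAM LatencyHidingSketch
  USE MPI
  IMPLICIT NONE
  INTEGER, PARAMETER :: n = 1024
  REAL(KIND=8) :: sendbuf(n), recvbuf(n), local(n)
  INTEGER :: rank, nprocs, left, right, req(2), ierr
  INTEGER :: stat(MPI_STATUS_SIZE, 2)

  CALL MPI_INIT(ierr)
  CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  CALL MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
  left  = MOD(rank - 1 + nprocs, nprocs)
  right = MOD(rank + 1, nprocs)
  sendbuf = REAL(rank, 8)
  local   = 1.0d0

  ! 1) Post the exchange first ...
  CALL MPI_IRECV(recvbuf, n, MPI_DOUBLE_PRECISION, left,  0, MPI_COMM_WORLD, req(1), ierr)
  CALL MPI_ISEND(sendbuf, n, MPI_DOUBLE_PRECISION, right, 0, MPI_COMM_WORLD, req(2), ierr)

  ! 2) ... then perform purely local work while the messages are in flight
  local = local * 2.0d0      ! stands in for the inner volume integrals

  ! 3) Complete the exchange and process the received data
  CALL MPI_WAITALL(2, req, stat, ierr)
  local = local + recvbuf    ! stands in for the MPI-side surface integral

  CALL MPI_FINALIZE(ierr)
END PROGRAM LatencyHidingSketch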
SP-BGK

The SP-BGK method requires the allocation of the particles to the cells in order to calculate the moments of the distribution function per cell. However, this is a purely discrete phase method, so there is no field solver part or deposition available to hide the communication time of the particle exchange as in the PIC method. Therefore, latency hiding is implemented here in two different approaches. In the first approach, the elements of each processor are divided into exchange elements, in which particles from other processors can potentially move within a time step, and purely local elements, in which this is not possible. For this purpose, the halo region is extended from the MPI boundaries into each processor's own computational domain and the elements are flagged accordingly, as shown in fig. 7. With this information, the BGK operator can already be applied to the purely local elements during particle communication, as this operator is cell-local. After all particles have been received, the BGK operator is then applied to the exchange elements. This type of latency hiding is only effective if each processor has enough elements so that a minimum number of purely local elements exists. However, this is often not the case for very high numbers of processors. Therefore, the assignment of the particles to the elements is additionally divided. All particles that do not leave the processor's slice of the computational domain are already assigned to the elements during the particle communication. Within the elements, an adaptive octree is used to create subcells in order to better capture any gradients [47,18]. As far as possible, parts of this assignment are also carried out during the communication of the particles. Subsequently, after the particle data is received, the element assignment is done for the received particles. This second step of latency hiding has the advantage that it can always be performed regardless of the number of purely local elements.

Test Cases

Three test cases were selected to investigate the efficiency of the proposed methods. The first test case consists of a generic setup representing a weak scaling case of an adiabatic periodic Cartesian box to assess specific performance metrics in a regular and adaptable setup. The second and third simulation showcase the strong scaling of practical application setups with fixed sizes on unstructured grids. This section presents the test cases themselves, with the scaling results given in section 6. All simulations are performed on an HPE system with 128 cores per node.

Adiabatic Box

The first test is an adiabatic box which represents the optimum for the parallelization. The domain itself is a fully periodic 3D box into which methane with a particle density of n = 1·10²³ 1/m³ is homogeneously inserted. The start temperature is T∞ = 2000 K, resulting in a required time step of Δt = 3·10⁻¹⁰ s [28]. The simulation is carried out using the stochastic particle Bhatnagar-Gross-Krook (SP-BGK) method as described in section 2.3.2, with the periodic structure of the domain ensuring that the number of particles remains homogeneous in the entire computational domain and thus the computational load for all processors is equally distributed over the entire computation time. Nevertheless, the particles of course move at each time step and the collision operator is also executed.
Supersonic Flow around a 70° Sphere-Cone

The second test case represents a near-application setup of a supersonic flow. The geometry, dimensions and the test case itself are adapted from the paper by Hollis et al. [49]. The test vehicle is a 70° sphere-cone blunt body in a high-enthalpy carbon-dioxide flow. The inflow at an angle of attack has a temperature of T∞ = 126 K, a velocity of u∞ = 2030 m/s and a density of ρ∞ = 5.9·10^−3 kg/m^3, resulting in a Mach number of M∞ = 11.4. The flow is again simulated with the SP-BGK method. The 3D grid with a total of 1 638 637 hexahedral cells is shown in fig. 8. The particle number in the simulation is 1.25·10^8, where the time step to resolve the stiff BGK relaxation term (see Pfeiffer [28]) is chosen as ∆t = 3·10^−10 s. The average number of communication partners for each core ranges between 15 and 21, increasing with the number of cores, and a total of 1000 time steps is carried out during the test.

140 GHz Gyrotron Resonator

The third test case is a gyrotron resonator operating at 140 GHz. The details of the setup are found in [50], an adaptation of the original setup found in [51]. The geometry resembles a tapered hollow cylinder, the diameter of which increases along the symmetry axis as depicted in fig. 10. The charge density within the domain that is created by the electron hollow beam is also shown in fig. 10. Only the elements that show a charge density contain simulation particles; the majority of the simulation domain is therefore empty, leading to a strong workload imbalance between empty elements and those containing simulation particles. This imbalance is addressed by the timer-based dynamic load balancing described in section 3.3, which assigns weights to each element and partitions the complete domain into segments such that each segment has the same computational load. The average number of communication partners for each core ranges between 19 and 80, increasing with the number of cores, and a total of 522 time steps is carried out during the test.

Results

In the following, the results are not interpreted physically; instead, the challenges such flows pose for parallelization are examined. First, the scaling of the initialization phase, i.e., the construction of the halo region and communicators, is examined. This is followed by an evaluation of the calculation phase, considering aspects such as the field/particle operators and load imbalance. The total number of elements and particles as well as the initialization and execution times (without I/O and initialization) are summarized for all test cases in table 1.

Initialization

Within the context of this paper, the initialization comprises the complete code startup, including the initial insertion of the particles or the corresponding restart routines in the case of a continued simulation. The initialization times over the core counts are depicted in the corresponding figure; the weak-scaling particle counts range from 50·10^6 to 12 800·10^6. One exception is the run on a single node for both setups: here, the initialization time of the gyrotron is shorter than with two nodes. The advantageous effect is that in the case of a single node, all grid cells are automatically compute-node-local elements, so no halo region needs to be constructed, which saves initialization time. Another point is visible for the supersonic test case. As the node count exceeds 16, a peculiarity of the test system becomes apparent.
Since 16 nodes are directly connected to a switch, each doubling of the node count beyond that introduces an additional hop, leading to an increase in the initialization time as communication calls now have reduced bandwidth available. This yields an increased variation in initialization and calculation times; therefore, the minimum and maximum as well as the average values are depicted.

Simulation Performance

Weak Scaling. The parallel efficiency η_N for the weak scaling was determined by

η_N = t_128 / t_N,

where t_128 and t_N are the computational times using 128 and N cores, respectively. Since the problem size increases linearly with the number of cores used, the parallel efficiency should ideally remain around one. Already at low core counts, the weak scaling of the adiabatic box in fig. 12 shows a decreasing efficiency with an increasing number of cores. As the setup is inherently ideally load balanced, the increase in computing time is assumed to be related to an increasing time for particle communication between the nodes. However, the parallel efficiency with 32 nodes (4096 cores) is still around 0.9, which is satisfactory. At 64 nodes, the influence of the reduced bandwidth becomes visible due to the higher particle density in the domain. For even higher node numbers, the efficiency increases again, as the weak scaling results benefit from communication locality, thereby compensating for the interconnect penalty.

Strong Scaling. The strong scaling was calculated on the basis of one node, corresponding to 128 cores, as the speedup

S_N = t_128 / t_N,

with the respective parallel efficiency η_N determined by

η_N = (128 · t_128) / (N · t_N),

where t_128 and t_N are the computational times using 128 and N cores, respectively. The strong scaling results of the supersonic flow and gyrotron test cases are depicted in figs. 13a and 14a, with the parallel efficiency shown in figs. 13b and 14b, respectively. Due to the load balancing described in section 3.3, whereby the grid cells are weighted by the number of particles they currently contain when they are divided among the processors, and the latency hiding described in section 4.6.2, the parallel efficiency of the supersonic flow test case remains around one up to 16 nodes, which corresponds to 2048 processors. For larger processor numbers, the parallel efficiency drops due to the large discrepancies in computational load between different regions of the computational domain. Some elements contain so many particles that the load balancing can no longer distribute them better, so the latency hiding also ceases to work properly. In this case, it would be necessary to increase the number of mesh elements in the regions of high particle density, thereby allowing the computational load to be distributed better. Nevertheless, even with 64 nodes, corresponding to 8192 processors, a parallel efficiency of 0.77 is still achieved for this case with a very nonuniform load distribution. This setup thus retains better scaling properties than the adiabatic box, presumably because the lower particle number poses fewer requirements on the interconnect bandwidth. Nonetheless, it is still sensitive to a further increase in latency, as is seen when 64 nodes are exceeded. Compared to the strong scaling of the supersonic flow case, the scaling behavior is distinctly worse in the gyrotron case. As visible in fig. 14b, the parallel efficiency drops significantly faster as the number of processors increases, so that with 64 nodes (8192 cores) a parallel efficiency of only 0.37 is achieved.
The main reason for this is most likely the more complex load balancing in this case. On the one hand, the presence of the Maxwell solver introduces differing loads per cell between the field solver and the particle solver. On the other hand, the number of particles per element differs by orders of magnitude, as shown in fig. 10b. Numerous cells contain no particles at all, while only very few carry the particle beam shaped as a hollow cylinder. As a consequence, not only is an ideal load distribution increasingly difficult to achieve as the number of processors increases, but efficient latency hiding also becomes progressively unattainable. At any given time, there exists a considerable fraction of processors that partake in the particle communication without ever receiving particles in their corresponding cells. These processors thus stall the particle communication of the remaining cores, a phenomenon which cannot be hidden. Ultimately, in this test case, more and more cores have to wait for communication, resulting in suboptimal scaling.

Conclusions

As computers become increasingly parallel, code areas with previously negligible performance impact, such as initialization, become increasingly relevant due to their influence on load balancing. In the work presented here, our aim was to contribute to this challenge by presenting a massively parallel, communication-free approach to building the halo region required for Euler-Lagrange simulations. The use of the MPI-3 shared memory model enabled us to utilize the ever increasing core count per socket without introducing additional interconnect load. Based on this programming model, we developed new methods for emission, deposition and latency hiding, which were implemented in the open-source plasma dynamics framework PICLas. This framework was applied to a generic test setup as well as two practical application cases. In all setups, we were able to show respectable initialization times as long as no interconnect congestion occurred in other parts of the startup phase. Furthermore, we were able to retain good efficiency for both the weak and the strong scaling case of the BGK setup, while also highlighting challenges inherent to the Euler-Lagrange setup, including strong variations in particle density in the PIC setup. In the future, we will extend our research to communication-minimizing decomposition approaches for Euler-Lagrange codes. These pose additional challenges, since the continuous and the dispersed phase entail separate communication regions with machine-dependent costs, resulting in a multi-point optimization problem.
Tomato Maturity Estimation Using Deep Neural Network

In this study, we propose a tomato maturity estimation approach based on a deep neural network. Tomato images were obtained using an RGB camera installed on a monitoring robot, and samples were cropped to generate a dataset with which to train the classification model. The classification model is trained using cross-entropy loss and mean-variance loss, which can implicitly provide label distribution knowledge. For continuous maturity estimation in the test stage, the output probability distribution over the four maturity classes is reduced to an expected (normalized) value. Our results demonstrate that the F1 score was approximately 0.91 on average, with a range of 0.85-0.97. Furthermore, comparison with the hue value, which is correlated with tomato growth, showed no significant differences between estimated maturity and hue values, except in the pink stage. Overall, we found that our approach can not only classify the discrete maturation stages of tomatoes but can also estimate their maturity continuously. Furthermore, it is expected that with higher-accuracy data labeling, more precise classification may be achieved.

Introduction

The continuous shortage of agricultural labor requires solutions to ensure stable agricultural production. Robotic farming for the realization of unmanned agriculture is emerging as a potential technological alternative, where unmanned agriculture is a technology-intensive farming method that automatically or autonomously performs various agricultural tasks based on intelligent approaches [1]. This issue has received more attention recently due to the acceleration of global population growth, with the global population expected to reach 10 billion by 2050. Various robot systems are being developed to automate agricultural operations such as harvesting, monitoring and planting. In particular, the use of harvesting robots in horticultural facilities has potential for practical application in the near future, as the mechanism associated with automatic fruit harvesting is similar to the general gripping system of industrial robots.
Harvesting robots are developed through the integration of various subsystems, such as vision, manipulator, gripper, and mobile systems, where vision is a priority requirement for robotic harvesting, being primarily used to implement object (e.g., fruit) recognition. Recent research has shown the capacity for high-level performance when using advanced data-driven approaches such as deep neural networks (DNNs). A convolutional neural network (CNN) is a representative DNN structure for image-based learning, which can extract object features effectively through data learning without human intervention. CNNs have become widely used with the improvement of computing speed, and CNN-based approaches for fruit detection have been demonstrated for various crops. Tomato is one of the main target crops for fruit detection in robotic harvesting, as it is an economically significant horticultural crop worldwide with steadily increasing production. For fruit detection in robotic harvesting, it is necessary to determine not only the position of the target fruit but also whether it is ripe. The maturity of a tomato is visually expressed in terms of its fruit size and color as it grows. It is easier to qualitatively determine how mature a fruit is from the color change of its surface, which gradually changes from green to red [2]. For this reason, tomato maturity is commonly classified into 4-6 stages based on red color occupancy; thus, tomato maturity can be estimated or classified based on color features, which various studies have implemented simultaneously with fruit detection or segmentation. Most studies have conducted detection using supervised learning methods, in which the model was trained using a labeled dataset. There are several limitations associated with learning tomato maturity from manually labeled images, due to the inaccuracy caused by ambiguity. In the case of tomato maturity, it is difficult to determine the specific class and maturity level for the classification task, due to the continuous change in the color of the tomato skin. There is a color occupancy-based guide for determining tomato maturity level; however, it is hard to obtain consistent results, as they vary depending on labeling quality [3]. Therefore, rather than being limited to a fixed set of classes through classification learning, a method that can evaluate maturity as a continuous value is expected to be effective for use in various environments.

The purpose of this study was to estimate tomato maturity continuously using a deep neural network. The maturity distribution between samples within a specific class was learned intrinsically using a label distribution learning method and mean-variance loss, which has been demonstrated [4] as being capable of estimating the age of humans from facial images. A CNN structure was used to implement the classification model, and the model was trained using images collected in a greenhouse. Tomato maturity was evaluated using the expected value of the probability distribution of the model output. The novelty of this study is that the proposed approach can be used to estimate the continuous maturity of tomatoes from images, although the model was trained in a supervised manner using specific maturity level classes. We validated the continuous maturity results by comparing the distribution with color information by class. This can contribute to providing information regarding the precise growth stage, and the method's consistency will further increase as the dataset is expanded.
Literature Review

A convolutional neural network (CNN), the representative structure of deep learning, can detect objects effectively via hierarchical feature extraction [5]. CNN-based object detection has shown rapid progress in various fields, with visual recognition being the most spotlighted research area due to its similarity to human visual perception [6]. The architecture was extended to object-level detection [7] and pixel-level segmentation [8], and has recently been made capable of real-time performance (YOLO) [9]. CNNs have been modified continuously to implement intelligent functions beyond object detection, and some researchers have achieved remarkable expansion into various fields, such as image generation [10,11] and context interpretation [12]. CNNs have been applied in the field of agriculture for fruit detection in robotic harvesting. Rong et al. [13] detected tomato fruit along with its peduncle in order to determine the exact cutting point. They used a YOLO model, which presented a detection accuracy of approximately 93% and a localization performance of 73.1 mAP (mean average precision). Padilha et al. [14] also detected tomato fruit based on ripeness using popular models such as YOLO and SSD. They reported that the YOLO-v4-based detection model had high precision (at 91%). CNN-based fruit detection has also presented performances high enough for practical use with sweet pepper [15], apple [16], and lychee [17]. In robotic harvesting, the detected fruit must be harvested by estimating its maturity. Zu et al. [18] segmented mature green tomatoes using Mask R-CNN, and the results showed that the F1 score reached 92%. They reported that their method could detect mature tomatoes robustly under various conditions, such as occlusion by other objects and similarly colored backgrounds, and that it could be used in a real environment. Afonso et al. [19] carried out the detection of ripe tomatoes in images captured from a real greenhouse. Their results show that ripe tomatoes were detected successfully, even when tested with a challenging experimental setup in which simple, inexpensive cameras were used. They stated that their method could be used practically in automatic harvesting. Seo et al. [20] focused, in particular, on the classification of the level of tomato maturity. Color space analysis was performed to determine the various harvest times of mature tomato fruits and showed an accuracy of more than 90% in the classification of six stages of maturity (i.e., green, breaker, turning, pink, light red, and red). The studies mentioned above classified maturity into specific stages; however, a method that can evaluate maturity as a continuous value is still required. This problem, referred to as label ambiguity, is similar to that of age estimation from facial images, and distribution learning has been proposed to address it [4]. Mean-variance loss is one approach that allows this problem to be solved by learning the label distribution within a class, with the advantage of easy implementation by adding a mean-variance loss term to the model.

Data Collection

The tomato (Solanum lycopersicum L.)
"Dafnis" variety, was cultivated in a general hydroponic greenhouse in South Korea for this study; Figure 1 shows an interior view of the greenhouse, as well as representative sample images.Tomato fruits had various sizes, with a weight range of 150 g to 250 g, and an almost round shape.Images of tomatoes were captured automatically using a developed monitoring system [20] that can travel remotely along the hot water pipes installed in the greenhouse.The monitoring system can track a straight path following a magnetic line installed between both sides of the pipes and can detect the start and end points of crop beds using proximity sensors.A camera (RealSense D435; Intel, Santa Clara, CA, USA) was installed on the side of the monitoring robot to capture the images of the tomatoes, and the images were saved with 800 × 600 pixel resolution at 30 fps.The environmental conditions were 20.6 • C, 67.7%, and 52.9 w/m 2 for humidity, temperature, and light intensity, respectively. high precision (at 91%).CNN-based fruit detection has also presented performances that are high enough for practical use with sweet pepper [15], apple [16], and lychee [17].In robotic harvesting, it is required that the detected fruit be harvested by estimating the maturity of the fruit.Zu et al. [18] segmented mature green tomatoes using Mask R-CNN, and the results showed that the F1 score reached 92%.They reported that their method could be used to detect mature tomatoes robustly under various conditions, such as occlusion by other objects and similarly colored backgrounds, and that it could be used in a real environment.Afonso et al. [19] carried out the detection of ripe tomatoes in images captured from a real greenhouse.Their results show that the ripe tomatoes were detected successfully, even when tested with a challenging experimental setup in which simple inexpensive cameras were used.They stated that their method could be used practically in automatic harvesting.Seo et al. [20] focused, in particular, on the classification of the level of tomato maturity.Color space analysis was performed to determine the various harvest times of mature tomato fruits and showed an accuracy of more than 90% in the classification of six stages of maturity (i.e., green, breaker, turning, pink, light red, and red).The studies mentioned above classified maturity into specific stages; however, they required a method that could evaluate maturity as a continuous value.This problem, referred to as label ambiguity, is similar to that of age estimation from facial images, and distribution learning has been proposed to address it [4].Meanvariance loss is one of the approaches that allows this problem to be solved by learning the label distribution inner class, with the advantage of easy implementation by adding a mean-variance loss term into the model. Data Collection The tomato (Solanum lycopersicum L.) 
"Dafnis" variety, was cultivated in a general hydroponic greenhouse in South Korea for this study; Figure 1 shows an interior view of the greenhouse, as well as representative sample images.Tomato fruits had various sizes, with a weight range of 150 g to 250 g, and an almost round shape.Images of tomatoes were captured automatically using a developed monitoring system [20] that can travel remotely along the hot water pipes installed in the greenhouse.The monitoring system can track a straight path following a magnetic line installed between both sides of the pipes and can detect the start and end points of crop beds using proximity sensors.A camera (RealSense D435; Intel, Santa Clara, CA, USA) was installed on the side of the monitoring robot to capture the images of the tomatoes, and the images were saved with 800 × 600 pixel resolution at 30 fps.The environmental conditions were 20.6 °C, 67.7%, and 52.9 w/m 2 for humidity, temperature, and light intensity, respectively.Each collected image contains multiple tomatoes at various growth stages, and the tomatoes were annotated for use as samples in training the deep learning model.An annotation tool was developed using Python 3.8 and OpenCV 4.1, which can support the drawing of a polygon shape around each object (tomato).The tomato objects in each image were extracted as rectangles, that is, the smallest bounding box including the polygon of the tomato object.The extracted samples were used with the background removed to represent the maturity features by deep learning.The overall process is depicted in Figure 2. Each collected image contains multiple tomatoes at various growth stages, and the tomatoes were annotated for use as samples in training the deep learning model.An annotation tool was developed using Python 3.8 and OpenCV 4.1, which can support the drawing of a polygon shape around each object (tomato).The tomato objects in each image were extracted as rectangles, that is, the smallest bounding box including the polygon of the tomato object.The extracted samples were used with the background removed to represent the maturity features by deep learning.The overall process is depicted in Figure 2. 
The background-removed samples had different sizes; thus, all samples were resized to 128 × 128 pixels (the input size of the model) for use as training data, after making each sample square with padding in order to maintain the aspect ratio. In addition, each sample was classified into four maturity stages (green, turning, pink, and red) in order to provide the labels for supervised learning. In general, the maturity stages of tomatoes can be divided into six stages (green, breaker, turning, pink, light red, and red), depending on the ratio of the red region to the entire region [21]; however, it is hard to determine the maturity stage using only the visual information in RGB images examined by humans, as the boundary of the red region is ambiguous due to the continuous color change of the tomato skin. The intermediate stages are therefore more difficult to assess. We aimed to evaluate the successive values between maturity stages through classification learning between specific classes; thus, the use of four stages was considered appropriate in this study, as these stages can be easily distinguished by humans. The samples labeled by maturity stage were randomly divided into training and test sets in a 1:1 ratio, and half of the training set was used as a validation set. The numbers of samples were 472, 472, and 944 in the training, validation, and test sets, respectively.

Deep Neural Network Model

For this study, tomato maturity was estimated using a deep neural network (DNN). The maturity was evaluated continuously based on the feature distribution learned implicitly within each specific class during DNN training for the classification of the four maturity stages. The maturity stages of the tomato can be divided into several classes, and the results depend on the data type and quality, as well as the characteristics of the annotator who conducts the image labeling. For this reason, there is some variety within each class, which is called the label distribution [22]. This challenge is also being actively dealt with for human age estimation from facial images [23], due to the great diversity in faces even at the same age. In this case, the label in the classification task is only a clue, and it is necessary to learn where the sample is positioned in the internal distribution of the specific class. Mean-variance loss is an approach for label distribution learning (LDL), which can be implemented easily by adding the mean and variance differences to the probability between the ground truth and estimated values [4]. In this way, the system can learn a label distribution that has a mean value close to the ground truth label.
Figure 3 shows the architecture of the DNN used to estimate tomato maturity in this study. The model has a convolutional neural network (CNN)-based structure for a simple classification task, consisting of four layers to extract the features. With this structure, it is not difficult to classify the four categories, which are likely to depend on color features. The architecture consists of four convolutional layers, one fully connected layer, and a classifier (softmax), with each conv-net (convolutional layer) including max pooling and rectified linear unit (ReLU) functions. The output of the final convolutional layer comprises 256 feature maps, where each feature map has a size of 8 × 8 pixels. The fully connected layer for classification has 256 × 8 × 8 input neurons and four output neurons. The output values are then converted into probabilities with regard to the four maturity stages.

In the training stage, tomato images were used as inputs for the classification model and the features were represented through hierarchical convolutions. The output, expressed as probability distributions between classes (maturity stages), was then compared with one-hot encoded labels. The loss, that is, the numerical value obtained from this comparison, was backpropagated through the DNN in order to update the weights. If tomato maturity were evaluated using only the results of general classification learning, only one of the four stages would be selected, which makes it difficult to reflect the various intermediate maturity stages. In order to implicitly learn the distribution within a class during classification, the weights in the DNN were updated considering not only the cross-entropy loss (softmax loss) but also the mean and variance losses, as shown in Equations (1) and (2), respectively [4]:

L_m = (1 / 2N) Σ_{i=1}^{N} (m_i − y_i)²    (1)

L_v = (1 / N) Σ_{i=1}^{N} v_i    (2)

where L_m is the mean loss, N is the batch size, m_i is the mean of the estimated distribution for the i-th sample, y_i is the label of the i-th sample, L_v is the variance loss, and v_i is the variance of the estimated distribution for the i-th sample. The mean-variance loss has the advantage of being able to learn the probabilistic distribution of the class and can reflect the inherent ambiguity when a class has an inaccurate label due to the ambiguous selection of classes, which may be the case for tomato maturity.
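A minimal PyTorch sketch of the combined objective and the test-stage expectation is given below; the unit weighting of the three loss terms and the exact normalization constants follow the mean-variance loss of [4] and are assumptions here, as the paper does not state them explicitly.

```python
# Sketch of the combined cross-entropy + mean-variance loss described above,
# plus the normalized test-stage expectation; an illustration, not the
# authors' code.
import torch
import torch.nn.functional as F

def mean_variance_loss(logits: torch.Tensor, labels: torch.Tensor):
    """logits: (N, K) raw outputs; labels: (N,) class indices 0..K-1."""
    probs = F.softmax(logits, dim=1)                              # (N, K)
    classes = torch.arange(logits.size(1), dtype=probs.dtype)     # 0..K-1
    mean = (probs * classes).sum(dim=1)                           # m_i
    var = (probs * (classes - mean.unsqueeze(1)) ** 2).sum(dim=1) # v_i
    l_mean = 0.5 * ((mean - labels.float()) ** 2).mean()          # Eq. (1)
    l_var = var.mean()                                            # Eq. (2)
    l_ce = F.cross_entropy(logits, labels)                        # softmax loss
    return l_ce + l_mean + l_var

def expected_maturity(logits: torch.Tensor) -> torch.Tensor:
    """Test-stage normalized expectation in [0, 1]; cf. Equation (3) below."""
    probs = F.softmax(logits, dim=1)
    k = logits.size(1)
    classes = torch.arange(k, dtype=probs.dtype)
    return (probs * classes).sum(dim=1) / (k - 1)

# Usage with random data (kept on CPU for brevity):
logits = torch.randn(8, 4)
labels = torch.randint(0, 4, (8,))
loss = mean_variance_loss(logits, labels)
```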
In the test stage, the model weights were fixed, and the final inference of maturity from the tomato image, expressed as the probability distribution over the four maturity stages, was calculated as an expected value. Here, the ground truth probability distribution has a value of 1 for the target class and 0 for the other classes. Finally, the estimated value was normalized to obtain a value between 0 and 1, as shown in Equation (3):

maturity = (1 / (K − 1)) Σ_{j=0}^{K−1} j · p_j    (3)

where K is the number of classes, j is the class number (starting from zero), and p_j is the probability of class j in the softmax output.

Model Training and Evaluation

The DNN was trained for up to 300 epochs using the training and validation samples. Input samples were augmented every epoch in order to minimize overfitting to the training samples, where the augmentation included vertical, oblique, and horizontal flips, as well as stretching. Only limited data augmentation related to color features was conducted, because it may have affected the maturity estimation performance. Only the brightness was changed, to indirectly train the model to be light-invariant, and other features (e.g., contrast and saturation) were not augmented. In detail, the V (value) channel was scaled with a random ratio after converting RGB to HSV, in order to augment the brightness of the samples. The weights were updated using the Adam optimizer, where the learning rate and weight decay were set to 0.001 and 1 × 10^−5, respectively. The model was trained with a batch size of 64 examples, and the training was terminated before the validation loss reached its minimum, in order to enhance the generalization performance [24]. The model training was implemented in Python 3.7 using PyTorch 1.1, and the CPU and GPU used for training were an i7-8700K and an NVIDIA Titan-V, respectively. The Titan-V, a GPU with 5120 CUDA cores and a 1455 MHz boost clock, was used to implement our approach in an optimal manner.

The classification performance was evaluated using general metrics, including accuracy, precision, recall, and F1 score, as shown in Equations (4)-(7), respectively.
Accuracy = (TP + TN) / (TP + TN + FP + FN)    (4)

Precision = TP / (TP + FP)    (5)

Recall = TP / (TP + FN)    (6)

F1 = 2 · Precision · Recall / (Precision + Recall)    (7)

where TP (true positive) denotes the correct classification of a positive label, TN (true negative) denotes the correct classification of a negative label, FP (false positive) denotes the incorrect classification of a positive label, and FN (false negative) denotes the incorrect classification of a negative label. When considering estimated tomato maturity, it is hard to validate the real values, as they are difficult to measure, especially from captured scenes. We aimed to obtain continuous maturity in relation to the color distribution of tomato skin, which is difficult for a person to determine visually. Thus, the H (hue) channel of the HSV color model, which has a high correlation with tomato growth [20], was used to obtain a reference distribution of the test images for each class. Estimated tomato maturities were statistically compared with the averaged H values of the input images by class (i.e., maturity stage). The H values were also normalized to obtain a value between 0 and 1. We also tested our method on a Jetson board (Xavier NX; NVIDIA, Santa Clara, CA, USA) in order to evaluate its practical use in real time. The test was conducted using consecutive frames recorded in the greenhouse, with a total recording time of approximately 15 s at 30 fps. Our study aimed to estimate the maturity of detected tomatoes, which in each frame were already annotated with a bounding box. The maturities of the pre-determined tomato locations in each frame were estimated, and the inference time was calculated per frame. The inference time refers only to the time taken for maturity evaluation, excluding object detection.

Classification Performance

The DNN-based maturity classifier was trained repeatedly, and Figure 4 shows the loss and classification accuracy curves by epoch. The loss curve expresses the total loss, which is the sum of the softmax, mean, and variance losses. Each graph shows the difference between the training and validation sets during repeated learning, and it can be seen that the curve shapes are similar for both the training and validation data. In terms of the total loss, a rapid decrease can be observed up to 10 epochs, and the loss became saturated at a value of approximately 0.05 at 60 epochs. The training was terminated at 100 epochs, as we did not observe a further significant decrease in the training and validation losses. The model weights were finally selected when the validation loss was minimal. The classification accuracies (CAs) increased to greater than 0.95 after 50 epochs in both training and validation. The losses for the training and validation sets were both saturated, indicating that the model could be trained on the data without overfitting, using a network with appropriately sized parameters.

Figure 5 shows the confusion matrix, computed using the classification results of the test set. Each box includes two numbers: one is the number of corresponding samples, and the other (in parentheses) is the normalized value, obtained by dividing by the total sample number of each class, that is, the sum of all samples of the corresponding row. In the cases of the green and red stages, the classes were composed of images of completely immature or fully mature tomatoes, respectively, and the percentage of correctly classified samples was 97-98% in each class. Meanwhile, the intermediate stages showed lower percentages. Samples in the turning and pink stages have mixed-color skin, with colors ranging from green to red, making it difficult to determine the color boundary visually. For this reason, the classification performance was observed to be relatively low (at the level of 77-83%), due to the inaccuracy of the reserved labels [25].
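The per-class metrics of Equations (4)-(7) and the confusion matrix of Figure 5 can be computed, for example, with scikit-learn; the following sketch assumes arrays of true and predicted stage indices (random placeholders here).

```python
# Sketch of computing the per-class metrics of Equations (4)-(7) with
# scikit-learn, assuming integer stage labels and predictions.
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

stages = ["green", "turning", "pink", "red"]
y_true = np.random.randint(0, 4, size=944)   # placeholder test labels
y_pred = np.random.randint(0, 4, size=944)   # placeholder model predictions

print(confusion_matrix(y_true, y_pred))      # counts as in Figure 5
print(classification_report(y_true, y_pred, target_names=stages, digits=2))
```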
Table 1 provides an analysis of the classification performance by class using the test set. The accuracies were high (greater than 0.95), with an average value of 0.97, a similar level to that obtained in previous research on tomato maturity classification [26]. When considering only the accuracy metric, incorrect predictions can be given for the minority classes (i.e., those with a smaller sample number), even though the model has high accuracy globally; thus, recall, precision, and F1 score were also calculated to evaluate the classification performance. Precision and recall offer class-wise insight and, in particular, the F1 score, which is the harmonic mean of precision and recall, can more accurately represent the performance for a data composition with high occupancy of a specific class (the red stage), that is, an imbalanced composition [27]. Precision was observed in the range of 0.91-0.95, while recall was relatively low (around 0.8) in the intermediate stages. Therefore, the model can estimate the class correctly; however, the sensitivity in detecting the target class was low. The F1 score was approximately 0.91 on average, with a range of 0.85-0.97.

Maturity Estimation

The data distribution in each class was learned implicitly using the mean-variance loss, and the tomato maturity for the input images was evaluated as an expected value between 0 and 1. Figure 6 visualizes the classification results by class, as well as the maturity classification and estimation results. The general classification results were also expressed as generalized maturity between 0 and 1, where the four maturity stages were matched to approximately 0, 0.33, 0.67, and 1.00 for the green, turning, pink, and red stages, respectively. The white circles indicate the correctly classified samples, the orange circles indicate the incorrectly classified samples, and the box with a dashed boundary line indicates the maturity range divided equally into four areas. The classification results provide the discrete maturity levels, where the samples were only mapped onto one of the four classes, although they also have a continuous color range within each class. Furthermore, some of the incorrectly classified samples had maturation statuses close to the boundary between two maturity stages, making it questionable to treat them as misclassifications. However, the right part of the figure shows the continuous maturity estimation performance achieved in this study, and the distributions of the samples in each class were observed (expressed within the same class or maturity stage) according to the overall color and red occupancy of the tomato surface. In the intermediate turning and pink stages, the estimated maturities were distributed widely (over 70% of the entire range), thus contributing to the increase in false-negative (FN) samples, which is related to the recall performance. The maturity stages at both ends (green and red) showed distributions with only a single-sided boundary to other maturity stages; thus, the performance in these stages was relatively high. This distribution within each class and for incorrectly classified samples could be due not only to the continuous maturity characteristics of tomato growth, but also to mislabeled data caused by various factors such as the ambiguity of the image itself, capture conditions, the annotator's proficiency, and the number of classes used in model training. For this reason, it is difficult to verify the maturity stages presented in these results; thus, the validity of the continuous maturity estimation was evaluated using an indirect method, as detailed in the following results.

Figure 7 shows a comparison of the distributions of estimated maturity and hue value for the test samples, where the results are expressed by class. In a previous study, the hue value in the HSV color model was shown to have a high linear correlation with the accumulated temperature, with a coefficient of determination (R²) of 0.96 [20]. The accumulated temperature is the integrated excess or deficiency of temperature relative to a fixed datum, which is usually used in crop growth modeling [28]. It is hard to validate the estimated maturity against an actual value, as such a value is hard to provide due to the ambiguity of maturity; thus, we compared our results with the hue values of the relevant input samples, referring to the above studies. In particular, the hue value was averaged over the tomato area in the images, and the value was normalized to the range of 0-1.
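A minimal sketch of this hue reference is given below; the use of OpenCV's 0-179 hue range and of the black background as the tomato mask are assumptions, as the paper does not state the exact scaling.

```python
# Sketch of the hue reference value: average the H channel over the tomato
# pixels and normalize to [0, 1]. OpenCV stores 8-bit hue in [0, 179].
import cv2
import numpy as np

def normalized_mean_hue(sample_bgr: np.ndarray) -> float:
    hsv = cv2.cvtColor(sample_bgr, cv2.COLOR_BGR2HSV)
    mask = np.any(sample_bgr > 0, axis=2)     # background was set to black
    return float(hsv[..., 0][mask].mean() / 179.0)

# Example with a dummy reddish sample:
img = np.zeros((128, 128, 3), dtype=np.uint8)
img[32:96, 32:96] = (40, 60, 200)             # BGR patch standing in for a tomato
print(normalized_mean_hue(img))
```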
Each graph consists of a probability distribution, expressed as both a histogram and a fitted Gaussian distribution. The green and red bars indicate the relative frequencies of the hue value and the estimated maturity, respectively, while the brown bars indicate the intersection of the two methods. The Gaussian fitted distributions appeared similar between the two methods in the green stage (i.e., immature), when green almost fully occupied the tomato. For the turning stage, there was a slight difference compared with the green stage, but the maximum probability of maturity was similar (around 0.4). The pink and red stages showed greater differences in variance than the green and turning stages, although their mean maturities were similar. In the DNN-based maturity estimation, the pink stage showed a narrower distribution compared with that of the hue value, whereas the opposite was observed in the red stage. It seems that the illuminance may affect the feature representation in DNNs, whereas the hue value is related only to color, with saturation (S) and value (V) having been separated out. This is one reason why tomato images have color variance, which might have caused the high variance in the maturity of the red stage, which was even higher than that in the pink stage.

The comparison between the DNN-based estimated maturity and the hue values was analyzed statistically by maturity class, as detailed in Table 2. Each value is represented as the average and standard deviation of the samples with respect to each maturity class. The means were similar between the two methods, and their differences were in the range of 0.1-0.6, a relatively similar level of about 10% of their means. There were no differences between the two methods at the 0.05 significance level (except in the pink stage), and the similarity between the two groups was highest in the red stage.
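The paper does not name the statistical test used for Table 2; assuming a two-sample t-test per maturity class, the comparison could be sketched as follows (with placeholder data).

```python
# Hypothetical per-class comparison at the 0.05 significance level; the
# actual test used in the paper is not specified, a Welch t-test is assumed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
estimated = rng.normal(0.65, 0.08, size=100)  # placeholder pink-stage maturities
hue_ref = rng.normal(0.62, 0.10, size=100)    # placeholder normalized hue values

t_stat, p_value = stats.ttest_ind(estimated, hue_ref, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")  # p < 0.05 -> significant difference
```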
From these results, it can be concluded that our method can efficiently estimate tomato maturity from images, further enabling assessment in terms of a continuous value, and the performance of our method indicates that the estimated values are significant enough to represent tomato growth when compared with the hue value, which is correlated with tomato growth.

Evaluation of the Estimation Speed

The DNN model was constructed using shallow CNN layers; as such, this architecture has advantages in terms of real-time processing, allowing for its practical use in the target system. The method was tested with a high-end GPU-based hardware configuration, as well as on a low-cost Jetson board (Xavier NX; NVIDIA, Santa Clara, CA, USA). The test was conducted by inputting consecutive frames, and multiple tomatoes in each frame were evaluated for their maturity, where the locations of the tomatoes were pre-determined for every frame. Figure 8 depicts the inference time measured by the software timer per frame, and Table 3 shows the processing speed comparison by GPU. The Volta GPU in the Xavier NX has more than 10 times fewer CUDA cores than the Titan-V; however, the Jetson board showed only an approximately 3-4 times longer processing time. The processing time of the Xavier NX was approximately 0.02 s, which is equivalent to 50 fps, meaning that it can operate well enough to be used in a robotic system without a significant increase in the processing time. This result indicates the practicality of our method, considering its shallow DNN architecture (although pre- or post-processing algorithms were not considered).

Table 3. Processing speed of tomato maturity estimation with two GPUs.
GPU       CUDA cores   Clock (MHz)   Processing time (s) (1)
Titan-V   5120         1200          0.004 ± 0.0003
Volta     384          854           0.018 ± 0.0028
(1) Average ± standard deviation.
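The per-frame software timing described above could be implemented as in the following sketch; the model and the batch of crops are placeholders, and the CUDA synchronization calls are included because asynchronous GPU execution would otherwise distort the timer.

```python
# Sketch of the per-frame inference timing, assuming a PyTorch model `net`
# and a batch of cropped tomato samples per frame.
import time
import torch

def time_inference(net: torch.nn.Module, crops: torch.Tensor) -> float:
    """Return the maturity-evaluation time for one frame in seconds."""
    if torch.cuda.is_available():
        torch.cuda.synchronize()           # make GPU timing meaningful
    start = time.perf_counter()
    with torch.no_grad():
        logits = net(crops)                # maturity evaluation only;
    if torch.cuda.is_available():          # detection is excluded
        torch.cuda.synchronize()
    return time.perf_counter() - start

# Example with a dummy model and a frame containing 5 tomato crops:
net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 128 * 128, 4))
crops = torch.rand(5, 3, 128, 128)
print(f"{time_inference(net, crops):.4f} s")
```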
Discussion

These results indicate that our method achieves performance comparable to other deep learning-based maturity classifiers studied in the field of agriculture, even though a four-layer convolutional neural network, a shallow architecture suitable for practical use, was employed in this study. The results of the maturity classification show that the performances were 0.97, 0.89, 0.93, and 0.91 in terms of classification accuracy, with a higher range than in previous studies [19,20]. In addition, our model had a practical processing time of less than 0.02 s. Although there are differences in the complexity of the problem and the GPUs used in other studies, this finding can still make a significant contribution to practical systems.

In this study, tomato maturity was estimated as a continuous value, and the label distribution was considered in order to reflect the uncertainty within each maturity stage. The results were compared with the hue value, which is highly correlated with tomato growth [20], showing not only a linear relationship but also no difference in distribution for each maturity stage between estimated maturity and hue value. Our method thus shows results similar to previous studies while enabling the continuous prediction of maturity values. However, the maturity of a tomato is affected by the distribution of various color features, not just a single color channel (e.g., the hue value used in this study). The DNN-based tomato maturity estimation approach proposed in this study has the potential to consider the overall color distribution of the object by conducting hierarchical feature extraction. In the comparison results, the predicted means of the two methods were similar, whereas their distributions differed at each maturity stage. This may indicate that the DNN can consider more features than the hue value alone, although it presented errors with respect to previous research. It is expected that our method can be complemented with accurately labeled tomato images and an optimized parameter configuration, thereby guiding further training of the model to enhance the accuracy of maturity stage classification. Furthermore, a fully mature tomato can be selected based on the confidence provided by the mean and variance losses. However, our approach only conducted maturity estimation for pre-determined objects, which means that the performance depends on how the target area is determined, including whether it covers the same tomato object. In addition, the maturity can be over- or underestimated if the target is occluded by another object. Securing the accuracy of segmentation in the detection stage is required in order to address this issue, which is outside the scope of this study.
Conclusions

We aimed to continuously estimate tomato maturity from tomato-specific images, for which a DNN model with mean-variance losses was used to learn the maturity features and label distributions. The model structure consists of four CNN layers to extract the features, and the weights in the model were updated considering three losses: cross-entropy, mean, and variance. For maturity estimation in the test stage, the estimated value based on the output probability distribution of the four maturity classes was calculated as a normalized value between 0 and 1. The results indicate that the F1-score was approximately 0.91 on average, with a range of 0.85-0.97, thus providing performance comparable to that reported in relevant research. The estimated maturity was evaluated by comparing its probability distribution with that of the hue value (which has a high linear correlation with the accumulated temperature index commonly used to model crop growth) in the HSV color model, and the comparison was conducted according to the maturity stage. The comparisons indicated that there were no significant differences between the estimated maturities and hue values, except in the pink stage, and that the similarity between the two groups was highest in the red stage.

Our approach shows that DNN-based distribution learning can be utilized to continuously evaluate tomato maturity and has the advantage of allowing for the evaluation of intermediate classes between specific classes, based on the confidence of the classes (i.e., the four maturity stages considered in this study). The results were verified through comparison with a color information index related to tomato growth. It is expected that higher accuracy of data labeling and more precise classification performance are possible, given that this study was conducted under limited test conditions, such as using sparse maturity stages and uncertain labeled samples determined empirically. Such enhancements may be pursued in future work.

Figure 1. The greenhouse structure (left) and a sample captured image (right). The greenhouse is of a general hydroponic type, and hot water pipes and a magnetic line were installed between the two crop beds.
Figure 2. The tomato object annotation and background elimination process.
Figure 3. Model structure of the DNN for tomato maturity estimation. The model has a shallow 4-CNN architecture with a practical processing speed, and mean and variance losses are added to learn the tomato maturity distribution within a class. The output of the softmax is calculated as the argmax value for the training stage and as the estimated value of the maturity level for the test stage.
Figure 4. Total losses and classification accuracies for training and validation samples by epoch.
Figure 7. Comparisons of histograms and fitted Gaussian distributions between the estimated maturity and hue value from tomato images: (a) green, (b) turning, (c) pink, and (d) red.
Figure 8. Inference times of maturity estimations for detected tomatoes by frame.
Table 1. Classification performance analysis of the tomato maturity estimation model.
Table 2. Analysis of the estimated tomato maturity, comparing the hue value with the maturity class.
Table 3. Processing speed of tomato maturity estimation with two GPUs.
2022-12-31T16:08:45.717Z
2022-12-28T00:00:00.000
{ "year": 2022, "sha1": "25e0af5ac3df850a9589226aba26aea83bf34fc7", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3417/13/1/412/pdf?version=1672832847", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "f5c88e657b5bdd13dd67c0918be4fd2f2ad09d6c", "s2fieldsofstudy": [ "Computer Science", "Agricultural and Food Sciences" ], "extfieldsofstudy": [] }
67832153
pes2o/s2orc
v3-fos-license
Particle scattering and vacuum instability by exponential steps

Particle scattering and vacuum instability in a constant inhomogeneous electric field of a particular peak configuration that consists of two (exponentially increasing and exponentially decreasing) independent parts are studied. This presents a new kind of external field for which exact solutions of the Dirac and Klein-Gordon equations can be found. We obtain and analyze in- and out-solutions of the Dirac and Klein-Gordon equations in this configuration. With their help we calculate probabilities of particle scattering and characteristics of the vacuum instability. In particular, we consider in detail three configurations: a smooth peak, a sharp peak, and a strongly asymmetric peak configuration. We find asymptotic expressions for the total mean numbers of created particles and for the vacuum-to-vacuum transition probability. We discuss a new regularization of the Klein step by the sharp peak and compare this regularization with another one given by the Sauter potential.

I. INTRODUCTION

Particle creation from a vacuum by strong electromagnetic and gravitational fields is a well-known quantum effect [1], which has a number of important applications in laser physics, heavy ion collisions, astrophysics, and condensed matter processes (see Refs. [2][3][4] for a review). Depending on the structure of the strong field, different approaches have been proposed for calculating the effect nonperturbatively. In these approaches the strong fields are considered as external (classical) backgrounds. Initially, the effect of particle creation was studied for time-dependent external electric fields that are switched on and off at initial and final time instants, respectively. We call such external fields t-electric potential steps. Scattering, particle creation, and particle annihilation by t-electric potential steps were first considered in the framework of relativistic quantum mechanics; see, for example, Refs. [5,6]. At present it is well understood that only an adequate quantum field theory (QFT) with a corresponding external background may consistently describe this effect and possible accompanying processes. In the framework of such a theory, particle creation is related to a violation of vacuum stability with time. In quantum electrodynamics (QED), backgrounds that may violate vacuum stability are electriclike electromagnetic fields. A general nonperturbative formulation of QED with t-electric potential steps was developed in Refs. [7][8][9]. The corresponding technique essentially uses special sets of exact solutions of the Dirac equation with the corresponding external backgrounds. The cases when such solutions can be found explicitly (analytically) are called exactly solvable cases. At present, all known exactly solvable cases for t-electric potential steps have been studied in detail; see Ref. [10] for a review. However, there exist many physically interesting situations where external backgrounds are formally presented by time-independent fields (which is obviously some kind of idealization). For example, one can mention time-independent nonuniform electric fields that are concentrated in restricted spatial areas. Such fields represent a kind of spatial step, which we conditionally call x-electric potential steps, for charged particles. The x-electric potential steps can also create particles from the vacuum; the Klein paradox is closely related to this process [11][12][13].
Approaches for treating quantum effects in t-electric potential steps are not applicable to x-electric potential steps. Some heuristic calculations of particle creation by x-electric potential steps in the framework of relativistic quantum mechanics, with a qualitative discussion from the point of view of QFT, were first presented by Nikishov in Refs. [6,14]. In the recent article [15], quantizing the Dirac and the Klein-Gordon (scalar) fields in the presence of x-electric potential steps, Gavrilov and Gitman presented a consistent nonperturbative formulation of QED with x-electric potential steps. Similar to the t-electric potential step case, special sets of exact solutions of the Dirac equation with the corresponding external field are used to form a basis of this formulation. With the help of this approach, particle creation in the Sauter field E(x) = E cosh^{-2}(x/L_S) and in the so-called L-constant electric field (a constant electric field between two capacitor plates) were studied in Refs. [15] and [16], respectively. These two cases are exactly solvable for x-electric potential steps. In the present article, we consider another new exactly solvable case of this kind, which is a constant electric field of a particular peak configuration. The corresponding field is a combination of two exponential parts, one exponentially increasing and the other exponentially decreasing. Different choices of these two parts allow one to imitate different realistic and physically interesting spatial configurations of electric fields. Besides this, a very sharp peak can be considered as the field of a regularized Klein step. We compare this regularization with the one given by the Sauter potential in Ref. [15].

The article is organized as follows. In Sec. II, a general form of the constant electric field of a peak configuration that consists of two (exponentially increasing and exponentially decreasing) independent parts is introduced. We obtain and analyze the corresponding in- and out-solutions of the Dirac and Klein-Gordon equations. With their help we introduce initial and final sets of creation and annihilation operators of electrons and positrons and define initial and final vacua. In Sec. III we discuss scattering and reflection of particles outside of the Klein zone, while possible processes in the Klein zone are studied in Sec. IV. Characteristics of the vacuum instability in the Klein zone are calculated with the help of in- and out-solutions using results of the general theory [15]. Here a particular case of a small-gradient field is discussed as well. In Sec. V we study a strongly asymmetric peak configuration. In Sec. VI we consider a very sharp peak. Mathematical details of the study in the Klein zone are given in Appendix A. In Appendix B some necessary asymptotic expansions of the confluent hypergeometric function are given. We use the system of units where ℏ = c = 1, in which the fine structure constant is α = e².

A. Dirac equation

We consider an external electromagnetic field, placed in (d = D + 1)-dimensional Minkowski space, parametrized by the coordinates X = (X^µ, µ = 0, 1, . . . , D) = (t, r), X^0 = t, r = (x, r_⊥), r_⊥ = (X², . . . , X^D). The potentials of the external electromagnetic field are chosen to correspond to zero magnetic field and to an electric field of the form (2.2). The electric field (2.2) is directed along the x-axis and is inhomogeneous and constant in time in the general case. Backgrounds of this kind represent a kind of spatial x-electric potential step for charged particles.
The main properties common to any x-electric potential step are that A_0(±∞) are some constant quantities and that the derivative of the potential, ∂_x A_0(x), does not change its sign for any x ∈ R. For definiteness, we suppose that ∂_x A_0(x) ≤ 0. The basic Dirac particle is an electron, and the positron is its antiparticle. The electric charge of the electron is q = −e, e > 0. The potential energy of the electron in this field is U(x) = −eA_0(x) (see Fig. 1), and the magnitude of the corresponding x-potential step is U = U_R − U_L, where U_L = U(−∞) and U_R = U(+∞). One can distinguish two types of electric steps: noncritical (U < 2m) and critical (U > 2m). In the case of noncritical steps, the vacuum is stable; see Ref. [15]. We are interested in the critical steps, where there is electron-positron pair production from the vacuum.

The system under consideration consists of a Dirac field ψ(X) interacting with an electric field of a particular exponential configuration. This electric field is composed of two independent parts, for each of which the Dirac equation is exactly solvable. The field in question grows exponentially from minus infinity, x = −∞, reaches its maximal amplitude E at x = 0, and decreases exponentially toward infinity, x = +∞. Its maximum E > 0 occurs at a sharp point, x = 0, such that the limit of the derivative there is not defined. The latter property implies that a peak at x = 0 is present. We label the exponentially increasing interval by I = (−∞, 0] and the exponentially decreasing interval by II = (0, +∞). The field and its corresponding x-electric potential step are

E(x) = E e^{k_1 x}, x ∈ I;  E(x) = E e^{−k_2 x}, x ∈ II,  (2.8)

where E > 0 and k_1, k_2 > 0; see Fig. 2. The potential energies of the electron at x = −∞ and x = +∞ for this particular configuration are

U_L = −eE/k_1,  U_R = eE/k_2.  (2.9)

It should be noted that, for example, the strongly asymmetric peak configuration, given by the potential (2.10), can be considered as a particular case of this step in which k_1 is sufficiently large, k_1 → ∞.

The Dirac equation for the system under consideration is Eq. (2.11). Due to the configuration of the field (2.8), the structure of the Dirac spinor ψ(X) in the directions X^0 and X², . . . , X^D is a simple plane wave, so we consider stationary solutions of the Dirac equation (2.11) in which ψ_n(x) and φ_n(x) are spinors that depend on x alone. These spinors are stationary states with given total energy p_0 and transversal momentum p_⊥ (the index ⊥ stands for the spatial components perpendicular to the electric field). Spin variables are separated by means of a set v_{χ,σ} of constant orthonormalized spinors, with χ = ±1, σ = (σ_1, . . . , σ_{[d/2]−1}), σ_s = ±1, satisfying the standard orthonormality relations. The scalar functions ϕ_n(x) have to obey a second-order differential equation, Eq. (2.16), where π_⊥² = p_⊥² + m².

B. Solutions with special left and right asymptotics

In each interval we introduce new variables η_j, with j = 1 for x ∈ I and j = 2 for x ∈ II, and represent the scalar functions ϕ_n(x) accordingly. Here π_0(L) and π_0(R) are the sums of the particle's asymptotic kinetic and rest energies at x = −∞ and x = +∞, respectively. We call the quantity π_⊥ the transversal energy. A fundamental set of solutions of the corresponding equation is composed of two linearly independent confluent hypergeometric functions, Φ(a_j, c_j; η_j) and η_j^{1−c_j} Φ(a_j − c_j + 1, 2 − c_j; η_j), where

Φ(a, c; η) = 1 + (a/c)(η/1!) + [a(a + 1)/c(c + 1)](η²/2!) + · · · .

The general solution of Eq. (2.16) in the intervals I and II can be expressed as the linear superposition

ϕ_n(x) = A_1^j y_1^j(η_j) + A_2^j y_2^j(η_j),

with the constants A_1^j and A_2^j fixed by boundary conditions.
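The Kummer series above is easy to verify numerically. The sketch below compares a truncated series with mpmath's hyp1f1, which accepts the complex arguments that appear in these solutions; the parameter values are arbitrary illustrative choices, not values derived from the field configuration.

import mpmath as mp

# Partial sum of Phi(a, c; eta) = sum_k (a)_k / (c)_k * eta^k / k!,
# checked against mpmath's built-in confluent hypergeometric function.
def kummer_series(a, c, eta, terms=60):
    total, coef = mp.mpc(0), mp.mpc(1)
    for k in range(terms):
        total += coef
        coef *= (a + k) / ((c + k) * (k + 1)) * eta  # next series term
    return total

a, c, eta = mp.mpc(0.3, -0.7), mp.mpc(1.0), mp.mpc(0, 2.0)  # eta = 2i
print(kummer_series(a, c, eta))
print(mp.hyp1f1(a, c, eta))  # the two outputs should agree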
The complete set of solutions of the Klein-Gordon equation can be formally obtained by setting χ equal to zero in all formulas; in this case n = p. The Wronskian of the functions y_{1,2}^j(η_j) is given by Eq. (2.23). In what follows, we use solutions of the Dirac equation denoted by ζψ_n(X) and ^ζψ_n(X), ζ = ±, with special left and right asymptotics at x → −∞ and x → +∞, respectively, where there is no electric field. Nontrivial solutions ζψ_n(X) exist only for quantum numbers n that obey relation (2.24), whereas nontrivial solutions ^ζψ_n(X) exist only for quantum numbers n that obey relation (2.25). Such solutions have the form (2.13), with the functions ϕ_n(x) denoted by ζϕ_n(x) or ^ζϕ_n(x), respectively. The latter functions satisfy Eq. (2.16) and the asymptotic conditions (2.26). The solutions ζψ_n(X) and ^ζψ_n(X) asymptotically describe particles with given momenta ζp^L and ζp^R, respectively, along the x axis.

We consider our theory in a large spacetime box that has a spatial volume V_⊥ = ∏_{j=2}^{D} K_j and a time dimension T, where all K_j and T are macroscopically large. The integration over the transverse coordinates runs from −K_j/2 to +K_j/2, and over the time t from −T/2 to +T/2. The limits K_j → ∞ and T → ∞ are assumed in final expressions. In this case the electric current of the Dirac field through the hypersurface x = const, Eq. (2.27), is x-independent. Using Eq. (2.27), we subject the solutions ζψ_n(X) and ^ζψ_n(X) to orthonormality conditions and calculate the normalization constants ζN and ^ζN in (2.26); see Ref. [15] for details. By virtue of these properties, electron and positron states can be selected accordingly.

The solutions ζψ_n(X) and ^ζψ_n(X) are connected by the decomposition (2.30) if the conditions (2.24) and (2.25) are simultaneously fulfilled, where η_{L/R} = sgn π_0(L/R) and the coefficients g are defined by the corresponding inner products. These coefficients satisfy unitarity relations. Taking into account the complete set of exact solutions (2.21) and the mutual decompositions (2.30), one can present the functions −ϕ_n(x) and +ϕ_n(x) in the forms (2.33) and (2.34), valid on the whole x axis.

C. In- and out-sets

According to the general theory, in the case of x-electric potential steps, the manifold Ω of all the quantum numbers n can be divided into five ranges in which the corresponding solutions of the Dirac equation have similar forms, so that Ω = Ω_1 ∪ · · · ∪ Ω_5; see Fig. 1. Note that the range Ω_3 exists only if 2π_⊥ < U. We denote the quantum numbers in the zone Ω_i by n_i. The conditions (2.24) and (2.25) are simultaneously fulfilled for Ω_i, i = 1, 3, 5, as given in Eq. (2.35). For a detailed description of the ranges Ω_i and their properties, see Ref. [15].

The exact expressions for the g's can be obtained from Eqs. (2.33) and (2.34) as follows. The functions −ϕ_n(x) and +ϕ_n(x) given by Eqs. (2.33) and (2.34) and their derivatives satisfy the gluing conditions (2.36). Using Eq. (2.36) and the Wronskian (2.23), one can find each coefficient g(ζ|ζ′) and g(^ζ|^ζ′) in Eqs. (2.33) and (2.34). For example, applying these conditions to the set (2.33), one can find the coefficient g(−|+), Eq. (2.37). The same can be done with Eq. (2.34) to obtain Eq. (2.38). One can easily verify that a symmetry under the simultaneous change k_1 ⇆ k_2 and π_0(L) ⇆ −π_0(R) holds, Eq. (2.39). A formal transition to the Klein-Gordon case can be done by setting χ = 0 and η_L = η_R = 1 in Eqs. (2.37) and (2.38).
In this case, the normalization factors ζC and ^ζC are determined in the same way. The coefficient g(−|+) for scalar particles is given by Eq. (2.41), with Δ given by Eq. (2.37). The symmetry under the simultaneous change k_1 ⇆ k_2 and π_0(L) ⇆ −π_0(R) holds in the form (2.42). As follows from Eqs. (2.37), (2.38), and (2.41), if either p^R or p^L tends to zero, one of the limits (2.43) holds true. These properties are essential for the justification of the in- and out-particle interpretation in the general construction [15].

However, it should be noted that quantum field theory deals with physical quantities that are represented by volume integrals on a t-constant hyperplane. The time-independent inner product for any pair of solutions of the Dirac equation, ψ_n(X) and ψ′_{n′}(X), is defined on the t = const hyperplane by Eq. (2.44), where the improper integral over x on the right-hand side of Eq. (2.44) is reduced to its special principal value to provide a certain additional property important for us, and the limits K_{(L/R)} → ∞ are assumed in final expressions. As a result, we see that all wave functions having different quantum numbers n are orthogonal with respect to the inner product (2.44). We can find the linearly independent pairs of ψ_n(X) and ψ′_{n′}(X) for each n and identify initial and final states on the t = const hyperplane as in Eq. (2.45); see Ref. [15] for details.

In the range Ω_2 (−π_⊥ < π_0(R) < π_⊥ and π_0(L) > π_⊥) any solution has a zero right asymptotic, which means that we deal with electron standing waves ψ_{n_2}(X); that is, we deal with total reflection. Similarly, we can treat positron standing waves ψ_{n_4}(X) in the range Ω_4 (−π_⊥ < π_0(L) < π_⊥ and π_0(R) < −π_⊥) and see a total reflection of positrons. It has to be noted that the complete set of in- and out-solutions must include the solutions ψ_{n_2}(X) and ψ_{n_4}(X). Using the identification (2.45), we decompose the Heisenberg field operator Ψ̂(X) in two sets of solutions of the Dirac equation (2.11), each complete on the t = const hyperplane. The operator-valued coefficients in such decompositions are creation and annihilation operators of electrons and positrons, which do not depend on the coordinates. In this way we form the initial and final sets of creation and annihilation operators; in particular,

out-set: −a_{n_1}(out), +a_{n_1}(out); +b_{n_5}(out), −b_{n_5}(out); +b_{n_3}(out), +a_{n_3}(out);

a subsequent QFT analysis of the correctness of such an identification is given in Ref. [15]. We define two vacuum vectors, |0, in⟩ and |0, out⟩, one of which is the zero-vector for all in-annihilation operators and the other the zero-vector for all out-annihilation operators. Besides, both vacua are zero-vectors for the annihilation operators a_{n_2} and b_{n_4}. We know that in the ranges Ω_i, i = 1, 2, 4, 5, the partial vacua |0, in⟩^{(i)} and |0, out⟩^{(i)} are stable, so the vacuum-to-vacuum transition amplitude c_v coincides with the vacuum-to-vacuum transition amplitude c_v^{(3)} corresponding to the Klein zone.

III. SCATTERING AND REFLECTION OF PARTICLES OUTSIDE OF THE KLEIN ZONE

To extract results of one-particle scattering theory, all the constituent quantities, such as reflection and transmission coefficients, have to be represented with the help of the mutual decomposition coefficients g. As an example, in the range Ω_1, one can calculate absolute (R̃, T̃) and relative (R, T) amplitudes of electron reflection and transmission, which can be presented as matrix elements of the form

R_{+,n} = R̃_{+,n} c_v^{−1},  R̃_{+,n} = ⟨0, out| −a_n(out) +a†_n(in) |0, in⟩.

It follows from Eq. (3.1) that the relative reflection |R_{ζ,n}|² and transmission |T_{ζ,n}|² probabilities take the form (3.2).
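As a small numerical illustration of this bookkeeping, the sketch below assumes that the transmission probability is |T_{ζ,n}|² = |g(−|+)|^{−2} and that unitarity enforces |R_{ζ,n}|² + |T_{ζ,n}|² = 1 outside the Klein zone; this is our reading of relations of the type (3.2), not an expression quoted verbatim from the text.

# Reflection/transmission bookkeeping in the ranges Omega_1 and Omega_5,
# assuming |T|^2 = |g(-|+)|^-2 and unitarity |R|^2 + |T|^2 = 1.
# g_abs2 is an arbitrary illustrative value of |g(-|+)|^2 >= 1.
def reflection_transmission(g_abs2):
    if g_abs2 < 1.0:
        raise ValueError("outside the Klein zone, |g(-|+)|^2 >= 1 is expected")
    T = 1.0 / g_abs2
    R = 1.0 - T
    return R, T

R, T = reflection_transmission(2.5)
assert abs(R + T - 1.0) < 1e-12 and 0.0 <= R <= 1.0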
Similar expressions can be derived for positron amplitudes in the range Ω_5; in particular, relation (3.2) holds true literally for the positrons in the range Ω_5. It is clear that |R_{ζ,n}|² ≤ 1. This result may be interpreted as a QFT justification of the rules of time-independent potential scattering theory in the ranges Ω_1 and Ω_5. Amplitudes of Klein-Gordon particle reflection and transmission in the ranges Ω_i, i = 1, 2, 4, 5, have the same form as in the Dirac particle case, with the coefficients g given by the corresponding inner product; the scalar probabilities follow by substituting the corresponding coefficients g into these relations. It is clear that |g(−|+)|^{−2}, and therefore |R_{ζ,n}|² and |T_{ζ,n}|², are functions of the modulus squared of the transversal momentum, p_⊥². It follows from Eq. (2.39) and Eq. (2.42), respectively, that |R_{ζ,n}|² and |T_{ζ,n}|² are invariant under the simultaneous change k_1 ⇆ k_2 and π_0(L) ⇆ −π_0(R), both for fermions and bosons. Then, if k_1 = k_2, |R_{ζ,n}|² and |T_{ζ,n}|² are even functions of p_0. The limits (2.43) imply that in these limiting cases the relative probabilities of reflection |R_{ζ,n}|² tend to unity; i.e., they are continuous functions of the quantum numbers n on the boundaries. It can also be seen that |R_{ζ,n}|² → 0 as p_0 → ±∞.

IV. PROCESSES IN THE KLEIN ZONE

A. General

Here we consider possible processes in the Klein zone, Ω_3, following the general consideration of Ref. [15]. This zone is of special interest due to the vacuum instability. Due to the specific choice of quantum numbers, processes for different modes n are independent: physical quantities factorize with respect to the quantum modes n, and calculations in each mode can be performed separately. In particular, one can represent the introduced vacua, |0, in⟩ and |0, out⟩, as tensor products of the corresponding partial vacua in each mode n and see that the probability P_v for the vacuum to remain a vacuum can be expressed as the product (4.1) of the probabilities p^n_v for a partial vacuum to remain a vacuum in each mode n, where it is taken into account that in the ranges Ω_i, i = 1, 2, 4, 5, the partial vacua are stable.

The differential mean numbers of electrons and positrons from the electron-positron pairs created are equal,

N^a_n(out) = ⟨0, in| +a†_n(out) +a_n(out) |0, in⟩ = |g(−|+)|^{−2},

and they represent the number of pairs created, N^cr_n. It follows from Eqs. (2.37) and (2.41) that N^cr_n = |CΔ|^{−2} for fermions, with the corresponding scalar expression following from Eq. (2.41). It is clear that N^cr_n is a function of the modulus squared of the transversal momentum, p_⊥². It follows from Eq. (2.39) and Eq. (2.42), respectively, that N^cr_n is invariant under the simultaneous change k_1 ⇆ k_2 and π_0(L) ⇆ −π_0(R), both for fermions and bosons. Then, if k_1 = k_2, N^cr_n is an even function of p_0. From properties (2.43), one finds that N^cr_n → 0 if n tends to the boundary with either the range Ω_2 (p^R → 0) or the range Ω_4 (p^L → 0),

N^cr_n ∼ p^R → 0,  N^cr_n ∼ p^L → 0,  for any λ ≠ 0;  (4.4)

in the latter ranges, the vacuum is stable. The absolute values of the asymptotic momenta p^L and p^R are determined by the quantum numbers p_0 and p_⊥; see Eq. (2.18). This fact imposes a certain relation between the two quantities. In particular, one can see that d p^L/d p^R < 0, and at any given p_⊥ these quantities are restricted inside the range Ω_3, Eq. (4.6). We have p^L = k_1|ν_1|, p^R = k_2|ν_2|, and U = eE(k_1^{−1} + k_2^{−1}) for the case under consideration.
Then for any p_0 and p_⊥ the numbers N^cr_n are negligible if the Klein zone is tiny. The total number of pairs N^cr created by the field under consideration can be calculated by summation over all possible quantum numbers in the Klein zone. Calculating this number in the fermionic case, one has to sum the corresponding differential mean numbers N^cr_n over the spin projections and over the transversal momenta p_⊥ and the energy p_0. Since the N^cr_n do not depend on the spin polarization parameters σ, the sum over the spin projections produces only the factor J_(d) = 2^{[d/2]−1}. The sum over the momenta and the energy transforms into an integral in the standard way, Eq. (4.8), where V_⊥ is the spatial volume of the (d − 1)-dimensional hypersurface orthogonal to the electric field direction x, and T is the time duration of the electric field. The total number of bosonic pairs created in all possible states follows from Eq. (4.8) at J_(d) = 1.

Both for fermions and bosons, the relative probabilities of electron reflection, p_n(+|+), of pair creation, p_n(+−|0), and the probability p^n_v for a partial vacuum to remain a vacuum in a mode n can be expressed via the differential mean numbers of created pairs N^cr_n, Eq. (4.9); for example, p_n(+|+) is determined by the matrix element ⟨0, out| +a_n(out) −a†_n(in) |0, in⟩, and P_v is defined by Eq. (4.1). The partial absolute probabilities of electron reflection and of pair creation in a mode n are

P_n(+|+) = p_n(+|+) p^n_v,  P_n(+−|0) = p_n(+−|0) p^n_v,  (4.10)

respectively. The relative probabilities of positron reflection, p_n(−|−), and of pair annihilation, p_n(0|−+), coincide with the probabilities p_n(+|+) and p_n(+−|0), respectively. We recall, as follows from the general consideration [15], that if there exists an in-particle in the Klein zone, it will be subjected to total reflection. For example, this can be illustrated by a result following from Eqs. (4.9) and (4.10): the probability of reflection of a Dirac particle with given quantum numbers n, under the condition that all other partial vacua remain vacua, is P_n(+|+) = 1. In the Dirac case, the presence of an in-particle with a given n ∈ Ω_3 forbids pair creation from the vacuum in this state due to the Pauli principle. For the same reason, if the initial state is the vacuum, there are only two possibilities in a cell of the space with given quantum number n: this partial vacuum remains a vacuum, or, with the probability P_n(+−|0), a pair with the quantum number n is created. This is in agreement with the probability conservation law p^n_v + P_n(+−|0) = 1, which follows from Eqs. (4.9) and (4.10). Of course, pairs of bosons can be created from the vacuum in already-occupied states.
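For fermions, this single-mode bookkeeping can be made concrete in a few lines. The sketch below assumes the standard fermionic mode relations p^n_v = 1 − N^cr_n and p_n(+−|0) = N^cr_n/(1 − N^cr_n), which is our reading of the general theory [15] since the corresponding equations are not reproduced above, and checks the conservation law p^n_v + P_n(+−|0) = 1.

# Fermionic probability bookkeeping in a single Klein-zone mode n;
# N is the differential mean number N^cr_n of created pairs, 0 <= N < 1.
def fermion_mode_probabilities(N):
    p_v = 1.0 - N                 # partial vacuum persists (Pauli principle)
    p_pair_rel = N / (1.0 - N)    # relative pair-creation probability
    P_pair = p_pair_rel * p_v     # absolute probability, cf. Eq. (4.10)
    return p_v, P_pair

p_v, P_pair = fermion_mode_probabilities(0.3)
assert abs(p_v + P_pair - 1.0) < 1e-12  # p_v + P(+-|0) = 1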
The inverse parameters k_1^{−1}, k_2^{−1} represent the scales of growth and decay of the electric field in the intervals I and II, respectively. In particular, we have a small-gradient field at small values of both k_1 and k_2, obeying the condition

min(h_1, h_2) ≫ max(1, m²/eE).  (4.13)

This case can be considered as a two-parameter regularization of a uniform electric field, which is the reason the Klein zone, Ω_3, is of interest under condition (4.13). Let us analyze how the numbers N^cr_n depend on the parameters p_0 and π_⊥. By virtue of the symmetry properties of N^cr_n discussed above, one can consider p_0 either positive or negative. Let us, for example, consider the interval of negative energies, p_0 ≤ 0. In this case, taking into account that both π_0(L) and π_0(R) satisfy the inequalities given by Eq. (2.35) in the range Ω_3, we see that π_0(L) varies greatly while π_0(R) is negative and very large, Eq. (4.14). It can be seen from the asymptotic behavior of the confluent hypergeometric function that N^cr_n is exponentially small, of order exp(−2h_2|π_0(R)|/k_2), if p^R ≪ |π_0(R)| for large |π_0(R)|; in this case, π_⊥ ∼ eE k_2^{−1}. Then the range of fixed π_⊥ is of interest, and in the following we assume that condition (4.15) holds true, where the given number K_⊥ satisfies the inequality (4.16). Using the asymptotic expressions of the confluent hypergeometric functions, we find that the differential mean numbers of created pairs N^cr_n, given by Eq. (4.3), can be approximated by the forms (A6); see details in Appendix A. These forms are exponentially small if π_0(L) ∼ π_⊥. Substantial values of N^cr_n are therefore formed in the range fixed by any given number K, with K_⊥ satisfying the inequalities (4.15) and (4.16). In this range we approximate the distributions (A6) by the formula (4.18), both for bosons and fermions. Considering positive p_0 > 0, we find that N^cr_n can be approximated by the forms (A8); see details in Appendix A. In this case, the substantial values of N^cr_n are formed in the corresponding range and have the form (4.20). Consequently, the quantity N^cr_n is almost constant over a wide range of energies p_0 for any given λ satisfying Eq. (4.15). When h_1, h_2 → ∞, one obtains the result in a constant uniform electric field [6,14].

The analysis presented above reveals that the dominant contributions to particle creation by a slowly varying field occur in the ranges of large kinetic energies, whose differential quantities have the asymptotic forms (4.18) for p_0 < 0 and (4.20) for p_0 > 0. Therefore, one may represent the total number (4.8) in the form (4.21), where n^cr denotes the total number density of pairs created per unit time and unit surface orthogonal to the electric field direction. As shown in Appendix A, the leading term, Eq. (4.22), is proportional to the pair-production rate r^cr, with a function G given by Eq. (A11). The density r^cr is known in the theory of the constant uniform electric field as the pair-production rate (see the d-dimensional case in Refs. [16,20]). The density given by Eq. (4.22) coincides with the number density of pairs created per unit space volume, N^cr/V_{(d−1)}, due to the uniform peak electric field given by a time-dependent potential A_x(t); see Ref. [21]. We see that the dominant contribution to the number density n^cr, given by Eq. (4.22), is proportional to the total energy of a created pair and thus to the magnitude of the potential step, π_0(L) + |π_0(R)| = U. This magnitude is equal to the work done on a charged particle by the electric field under consideration. The same behavior is seen for the number density n^cr of pairs created by the small-gradient potential steps of the other known exactly solvable cases, the Sauter field [15] and the L-constant electric field [16], with the step magnitudes U_S = 2eEL_S and U_L = eEL, respectively. In these cases we have

n^cr = L_S δ r^cr for the Sauter field,  n^cr = L r^cr for the L-constant field,  (4.23)

where L is the length of the applied constant field, δ = √π Ψ(1/2, (2 − d)/2; πm²/eE), and Ψ(a, b; x) is the confluent hypergeometric function [18]. All three cases can be considered as regularizations of a uniform electric field. This fact allows one to compare pair creation effects in such fields.
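The constant-uniform-field limit invoked here is the classic Nikishov result N^cr_n = e^{−πλ} with λ = π_⊥²/eE = (m² + p_⊥²)/eE [6,14]; the identification of λ is our reading of the notation used above. The sketch below integrates this distribution over the transverse momenta for d = 4 (two transverse dimensions) and checks the result against the closed-form Gaussian integral.

import numpy as np

eE, m = 1.0, 0.3                          # illustrative values, hbar = c = 1
p = np.linspace(0.0, 12.0, 4001)          # |p_perp| grid
N = np.exp(-np.pi * (m**2 + p**2) / eE)   # uniform-field N^cr_n
# d^2 p_perp / (2 pi)^2 -> p dp / (2 pi) after the angular integration
density = np.trapz(p * N / (2.0 * np.pi), p)
exact = eE / (4.0 * np.pi**2) * np.exp(-np.pi * m**2 / eE)  # Gaussian integral
print(density, exact)                     # agree to grid accuracy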
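The factor δ just quoted involves the Tricomi (confluent hypergeometric) function Ψ, available in SciPy as hyperu. A minimal evaluation sketch for d = 4 and an illustrative ratio m²/eE (an arbitrary choice, not a value from the text):

import numpy as np
from scipy.special import hyperu

# delta = sqrt(pi) * Psi(1/2, (2 - d)/2; pi m^2 / eE), cf. Eq. (4.23)
def sauter_delta(m2_over_eE, d=4):
    x = np.pi * m2_over_eE
    return float(np.sqrt(np.pi) * hyperu(0.5, (2.0 - d) / 2.0, x))

print(sauter_delta(1.0))  # effective-length ratio L_eff / L_S, Sauter field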
Thus, for a given magnitude of the electric field E one can compare, for example, the pair creation effects in fields with equal step magnitudes, or one can determine the step magnitudes for which the particle creation effects are the same. In the latter case, equating the densities n^cr for the Sauter field and for the peak field to the density n^cr for the L-constant field, we find an effective length of the field in each case; for the Sauter field, L_eff = L_S δ. By definition, L_eff = L for the L-constant field. One can say that the Sauter and peak electric fields with the same L_eff are equivalent to the L-constant field with respect to pair production. Using the above considerations and Eq. (4.9), we perform the summation (integration) in Eq. (4.1) and obtain the vacuum-to-vacuum probability, Eq. (4.25), where N^cr = V_⊥ T n^cr and n^cr is given by Eq. (4.22). Previously, similar results were obtained for the Sauter field [15] and the L-constant field [16] with the corresponding n^cr given by Eq. (4.23), and with ε_l = ε_l^L = 1 for the L-constant field.

V. STRONGLY ASYMMETRIC PEAK

In the examples considered before [15,16] and above, the increasing and decreasing parts of the electric field are nearly symmetric. Here we consider an essentially asymmetric configuration of the step. We suppose that the field grows from zero to its maximum value at the origin x = 0 very rapidly (that is, k_1 is sufficiently large), while the value of the parameter k_2 > 0 remains arbitrary and includes the case of a smooth decay. We assume that the corresponding asymptotic potential energy, U_L, given by Eq. (2.9), defines a finite magnitude of the potential step, ΔU_1 = −U_L, for the increasing part of the field. Note that due to the invariance of the mean numbers N^cr_n under the simultaneous change k_1 ⇆ k_2 and π_0(L) ⇆ −π_0(R), one can easily transform this situation to the case with a large k_2 and arbitrary k_1 > 0. Let us assume that a sufficiently large k_1 satisfies the inequalities (5.1) at given ΔU_1 and π_0(L) = p_0 + ΔU_1. Making use of condition (5.1), we can approximately simplify |Δ|², given by Eq. (2.37), and finally obtain the expression (5.3) for the ranges Ω_1, Ω_3, and Ω_5. In an asymmetric case with k_1 ≫ k_2, we have eE = k_1 ΔU_1 at given ΔU_1, which implies eE/k_2 ≫ ΔU_1. Then one can disregard the term ΔU_1 in the leading-term approximation of |g(−|+)|^{−2} given by Eq. (5.3) for the ranges Ω_1 and Ω_5, that is, outside of the Klein zone. Such an approximation does not depend on the details of the field growth at x < 0. We see that the relative reflection |R_{ζ,n}|² and transmission |T_{ζ,n}|² probabilities in the leading-term approximation are the same as those produced by the exponentially decaying electric field given by the potential (2.10).

Let us consider the most asymmetric case, when Eqs. (5.3) hold and the parameter k_2 is sufficiently small. In this case the exponentially decaying field (2.10) is a small-gradient field, and we are interested in the Klein zone, Ω_3, where N^cr_n = |g(−|+)|^{−2}. Taking into account that both π_0(L) and π_0(R) satisfy the inequalities given by Eq. (2.35) in the range Ω_3, we see that π_0(R) varies greatly, Eq. (5.5). Note that in this range 2π_⊥ < eE/k_2 + ΔU_1. It can be seen from the asymptotic behavior of the confluent hypergeometric function that N^cr_n is exponentially small if both |π_0(R)| and π_⊥ are large, ∼ eE/k_2, with π_⊥/|π_0(R)| ∼ 1, such that p^R ≪ |π_0(R)|.
Then the range of fixed π_⊥ is of interest in the range (5.5), and we assume that the inequality (4.15) holds, in which K_⊥ is any given number satisfying condition (5.6). Using asymptotic expressions of the confluent hypergeometric functions, we find that the differential mean numbers of created pairs N^cr_n, given by Eq. (5.3), can be approximated by distributions that vary from Eq. (A14) to Eq. (A17); see details in Appendix A. Only the distribution (A14) depends on ΔU_1; however, the range of transverse momenta is quite tiny for this distribution. This implies that in the leading-term approximation the total number N^cr of pairs created does not depend on ΔU_1 and therefore does not feel the peculiarities of the field growth at x < 0. We see that the N^cr_n given by Eq. (A17) are exponentially small if |π_0(R)| ∼ π_⊥. Substantial values of N^cr_n are therefore formed in the range (5.7), where K is any given number and K_⊥ satisfies the inequalities (4.15) and (5.6). In this range |π_0(R)| ≫ π_⊥, and the distribution (A17) is approximated by Eq. (A15), both for bosons and fermions. Thus, Eq. (A15) gives the leading-term approximation for the substantial values of N^cr_n over the whole range (5.7). Note that the same distribution takes place in a small-gradient field for p_0 > 0; see Eq. (4.20). The approximation (A15) does not depend on the details of the field growth at x < 0; therefore, it is the same as in the case of the exponentially decaying electric field given by the potential (2.10). Using the above considerations, we can estimate the dominant contribution to the number density n^cr of pairs created by the very asymmetric peak as in Eq. (5.8), where r^cr is the pair-production rate introduced above. Finally, we can see that the vacuum-to-vacuum probability is given by Eq. (5.9), where N^cr = V_⊥ T n^cr, n^cr is given by Eq. (5.8), and µ is given by Eq. (4.25). As was mentioned above, the form of N^cr_n does not depend on the details of the field growth at x < 0 in the range of dominant contributions. Therefore, calculations of total quantities in an exponentially decaying field are quite representative for a large class of exponentially decaying electric fields switched on abruptly.

VI. SHARP PEAK

By choosing certain parameters of the peak field, one can obtain sharp-gradient fields that exist only in a small area in the vicinity of the origin x = 0. The latter fields grow and/or decay rapidly near the point x = 0. Let us consider large parameters k_1, k_2 → ∞ with a fixed ratio k_1/k_2. We assume that the corresponding asymptotic potential energies, U_R and U_L, given by Eq. (2.9), define finite magnitudes of the potential steps, ΔU_1 and ΔU_2, for the increasing and decreasing parts, respectively, and satisfy the corresponding inequalities. In the ranges Ω_1 and Ω_5 the energy |p_0| is not restricted from above; that is why in what follows we consider only the subranges where condition (6.2) holds. In the range Ω_3, for any given π_⊥, the absolute values of p^R and p^L are restricted from above; see (4.6). Therefore, condition (6.2) implies Eq. (6.3). This case corresponds to a very sharp peak of the electric field with a given step magnitude U = ΔU_1 + ΔU_2. At the same time, this configuration imitates a sufficiently high rectangular potential step (the Klein step; see Ref. [15] for details and the resolution of the Klein paradox) and coincides with it as k_1, k_2 → ∞. Thus, this potential step can be considered as a regularization of the Klein step. We have to compare this regularization with another one, presented by the Sauter potential in Ref.
[15]. In the case under consideration, the confluent hypergeometric function can be approximated by the first two terms in Eq. (2.20), that is, by Φ(a, c; η) with c ≈ 1 and a ≈ (1 − χ)/2. Then, in the ranges Ω_1, Ω_3, and Ω_5, the coefficient |g(−|+)|^{−2}, given by Eq. (2.37) for fermions, can be presented in a leading-term approximation that does not depend on k_1 and k_2. Note that in the ranges Ω_1 and Ω_5 the coefficient |g(−|+)|^{−2} determines the relative reflection |R_{ζ,n}|² and transmission |T_{ζ,n}|² probabilities in the form (3.2), while in the range Ω_3 it gives the differential mean number of pairs created, N^cr_n. In the ranges Ω_1 and Ω_5, for example, |g(−|+)|^{−2} ≈ π_⊥²/U² if π_0(R) ≫ π_⊥². For bosons in the ranges Ω_1, Ω_3, and Ω_5, the leading-term approximation of |g(−|+)|^{−2}, given by Eq. (2.41), is Eq. (6.5). Taking into account that p^L − p^R > U in the ranges Ω_1 and Ω_5, we obtain the estimate (6.6). In the range Ω_3 the difference p^L − p^R is restricted from above by Eq. (4.6) and can tend to zero. That is why the differential mean number of boson pairs created, N^cr_n = |g(−|+)|^{−2} given by Eq. (6.5), can be large. It has a maximum, N^cr_n = 4p^L p^R/b² → ∞, at p^L − p^R = 0. This is an indication of a big backreaction effect at p^L − p^R → 0. In contrast with the Fermi case, the k_1, k_2-dependent term b² in Eq. (6.5) can be neglected only in the range (6.7); under the latter condition, one obtains a simplified expression. Thus, we see that the concept of a sharp peak in scalar QED is limited by a condition on min(ΔU_1/k_1, ΔU_2/k_2) for the fields under consideration. We do not see a similar problem in spinor QED.

If k_1 = k_2 (in this case ΔU_2 = ΔU_1 = U/2), we can compare the above results with the regularization of the Klein step by the Sauter potential; see Ref. [15]. We see that both regularizations are in agreement for bosons under condition (6.7). Both regularizations are in agreement for fermions in the range Ω_3 if p^L − p^R ≪ U; for fermions in the ranges Ω_1 and Ω_5, the comparison is given by Eq. (6.9). In the nonrelativistic subrange, |π_0(R/L)| ≫ U, the leading term in Eq. (6.9) has the form given by Eq. (6.6); that is, it is the same for fermions and bosons, and both regularizations are in agreement. To compare our exact results with the results of the nonrelativistic consideration of a noncritical rectangular step, U < 2m (in this case the range Ω_3 does not exist), given in any textbook on one-dimensional quantum motion (e.g., see [22]), one sets p_⊥ = 0; then π_⊥ = m, π_0(L) = p_0 = m + E, and π_0(R) = p_0 − U = m + E − U.

VII. CONCLUDING REMARKS

We have presented new exactly solvable cases available in the nonperturbative QED with x-electric potential steps that was formulated recently in Ref. [15]. In particular, we have considered in detail three new configurations of x-electric potential steps: a smooth peak, a strongly asymmetric peak, and a sharp peak. Thus, together with the two recently presented exactly solvable cases available in QED with such steps (QED with the Sauter field [15] and with a constant electric field between two capacitor plates [16]), the most physically important exactly solvable cases in this kind of QED are now described explicitly. We note that by varying the parameters defining these steps it is possible to imitate, at least qualitatively, a wide class of physically relevant configurations of x-electric potential steps (constant inhomogeneous electromagnetic fields) and to calculate nonperturbatively various quantum vacuum effects in such fields.
APPENDIX A

In these subranges we have the corresponding expressions for τ_j − 1. We see that τ_1 − 1 → 0 and τ_2 − 1 → 0 in the range (a), while |τ_1 − 1| is some finite number in the range (c), and |τ_2 − 1| is some finite number in the ranges (c) and (d). In the range (b) these quantities vary between their values in the ranges (a) and (c). We choose χ = 1 for convenience in the Fermi case. In the range (a) we can use the asymptotic expression of the confluent hypergeometric function given by Eq. (B1) in Appendix B. Using Eqs. (B6) and (B7), we finally find the leading term, the forms (A6) quoted in the main text, for fermions and bosons, where max |Z_1| is of order g_2^{−1}. This expression in the leading order coincides with the one for the case of a uniform constant field [6,14].

In the range (c), the confluent hypergeometric function Φ(1 − a_2, 2 − c_2; −ih_2) is approximated by Eq. (B8), and the function Φ(a_1, c_1; ih_1) is approximated by Eq. (B9), given in Appendix B. Then we find the corresponding leading term, where max |Z_1|^{−1} is of order g_1/h_1 and max |Z_2|^{−1} is of order g_2^{−1}. Using the asymptotic expression (B1) and taking into account Eqs. (A3) and (A5), we can estimate that N^cr_n ∼ e^{−πλ} in the range (b).

Considering positive p_0 > 0 and using the inequalities (2.35) in the range Ω_3, we see that the negative π_0(R) varies greatly while π_0(L) is positive and very large. Taking into account that the exact N^cr_n and its range of formation are invariant under the simultaneous exchange k_1 ⇆ k_2 and π_0(L) ⇆ −π_0(R), we find for p_0 > 0 that the differential mean numbers in the leading-order approximation are given by Eq. (A8).

Let us find the dominant contributions to the number density n^cr given by Eq. (4.21). Using the appropriate variable changes, we represent the quantities I^{(1)}_{p_⊥} and I^{(2)}_{p_⊥} in integral form. Assuming min[(m/k_1)², (m/k_2)²] ≫ K, we see that the leading term in I_{p_⊥} from Eq. (4.21) takes a final form in which Γ(−α, x) is the incomplete gamma function. Neglecting an exponentially small contribution, one can extend the integration limit over p_⊥ in Eq. (4.21) from √λ < K_⊥ to √λ < ∞. Then, calculating the Gaussian integral, we finally obtain the expression (4.22).

In the range (a), if it exists, the confluent hypergeometric function Φ(1 − a_2, 2 − c_2; −η_2) is approximated by Eq. (B8) given in Appendix B. In this range the differential mean numbers in the leading-order approximation are very small, with max |Z_2|^{−1} ∼ (ε√h_2)^{−1}. In the range (c), τ_2 − 1 → 0, and, using Eqs. (B2), (B3), and (B4) from Appendix B, we find the corresponding form. In the range (e) the parameters η_2 and c_2 are large with a_2 fixed, and τ_2 > 1 with arg(2 − c_2) < 0. In this case, using the asymptotic expression of the confluent hypergeometric function given by Eq. (B9) in Appendix B, we find the leading term, both for fermions and bosons, where Z_2 is given by Eq. (B2). We note that the modulus |Z_2|^{−1} varies from
2017-11-11T12:25:31.000Z
2017-09-20T00:00:00.000
{ "year": 2017, "sha1": "0cb5f03d7246e8852e119cab1ae369d184ed55ad", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1709.06997", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "0cb5f03d7246e8852e119cab1ae369d184ed55ad", "s2fieldsofstudy": [ "Physics", "Mathematics" ], "extfieldsofstudy": [ "Physics" ] }
2259656
pes2o/s2orc
v3-fos-license
Encoding strategy affects false recall and recognition: Evidence from categorical study material

The present research investigated memory vulnerability to distortions. Different encoding strategies were used when categorized lists were studied. The authors assumed that an imagery strategy would decrease false memories more than a word-whispering strategy, which is consistent with the model of semantic access and previous research in the Deese-Roediger-McDermott paradigm (the DRM paradigm; Deese, 1959; Roediger & McDermott, 1995). A normative study of category lists and 4 experiments were conducted to verify memory vulnerability under different encoding strategies (imagery, word-whispering, control). Half of the subjects recalled and half recognized previously studied words. The results revealed a marked reduction in false recognition and recall after imagery encoding, relative to word-whispering encoding.

Memory distortions, such as those that result from misleading post-event information (e.g., Davis & Loftus, 2007) and those that arise from associative memory structure (Deese, 1959; Dewhurst & Anderson, 1999; Roediger & McDermott, 1995), have been observed across different paradigms. In the present study, we focus on associative errors and test how different mnemonic strategies lead to different levels of memory accuracy. The ease with which associative false memories may be induced is shown in one of the most heavily investigated procedures for studying memory confusions, the Deese-Roediger-McDermott paradigm (the DRM paradigm; Deese, 1959; Roediger & McDermott, 1995). Deese (1959) presented participants with lists of associated words and then asked them to recall the words. Apart from accurate recall, the participants also falsely recalled words that were strongly semantically related to those on the lists but were not presented (so-called critical lures). Roediger and McDermott (1995) replicated and extended Deese's work and showed that the probability of erroneous recall of critical lures was similar to the probability of correct recall of words from the middle serial positions of the studied lists. The effect of false memory caused by associative memory structure is also visible in the category repetition procedure, which uses words that belong to the same category (Dewhurst, 2001; Dewhurst & Anderson, 1999). Dewhurst and Anderson (1999) showed participants one, four, or eight words from the same semantic category and revealed that the rate of falsely recognized critical lures increased with the number of words studied. Moreover, they compared the quality of memory distortions by asking participants to make remember-know judgments for each item they classified as old. Remember-know judgments reflect the subjective state of awareness that accompanies episodic memory retrieval. This procedure was introduced by Tulving (1985). Participants give a remember response if they can recollect details of the item's study presentation, and a know response if the item feels familiar but they cannot consciously recollect its earlier presentation. It appeared that false remember and know responses to the non-studied category items increased with the number of items from the same category that were presented at encoding (remember judgments occurred at a rate of .09 for eight category members).
To some extent, these findings are consistent with those obtained by Roediger and McDermott (1995), who found that their participants had specific recollections of the critical lures. In their study, remember judgments had a probability between .38 and .58, depending on the study condition. The discrepancy in remember rates toward critical lures between Dewhurst and Anderson's (1999) and Roediger and McDermott's (1995) studies stems from the different materials (category lists vs. DRM lists) and the different procedures they used. However, the increase in remember responses with the number of items presented at encoding clearly shows that generation of words' associates takes place at encoding. Several researchers have tried to describe the nature of associative memory errors by using different manipulations during encoding. Gallo and Roediger (2002) as well as McDermott and Watson (2001) found that, in the DRM paradigm, slower presentation of the list decreased the number of old responses to critical lures. Slower presentation may give subjects time to process items more deeply, leading them to encode more item-specific information. At retrieval, the distinction between real experience and thoughts is then more elaborated, and this additional information results in more accurate source memory (Johnson, Hashtroudi, & Lindsay, 1993). A decrease in false alarms in the DRM paradigm was also obtained when more distinct perceptual information was presented at encoding (Arndt, 2010; Israel & Schacter, 1997) or simply when the modality of the presentation of the lists switched from auditory to visual (Gallo, McDermott, Percer, & Roediger, 2001; Smith & Hunt, 1998), suggesting that critical lures lack the perceptual features that are typical of the studied items. The main explanation for this effect is based on the distinctiveness heuristic, which is defined as a metamemorial process that subjects may use at the time of retrieval to help decide whether a test item has been studied (Israel & Schacter, 1997). More distinctive information (or processing information in a distinctive manner) increases the encoding of item-specific information (Howe, 2006). A relatively modest number of studies also showed that a reduction in false recall and recognition could be obtained by creating mental images of the words presented during study. This strategy was adopted from therapeutic methods of guided imagery and mnemonic techniques (Newstead & Newstead, 1998), and it is argued that it might increase the distinctiveness of the encoded material. In one of these studies, Foley, Wozniak, and Gillum (2006) explicitly asked participants to generate images of presented objects (experimental condition) or implicitly elicited images by asking participants to describe a function of each object (control condition). The explicit instruction had a strong effect on false memory rates. Gunter, Bodner, and Azad (2007) also found that, in general, utilizing imagery reduced false alarms and led to greater accuracy. Foley, Hughes, Librot, and Paysnick (2009) revealed false memory reduction effects for images of individual items as well as integrated subsets. Although most of the studies suggested a beneficial influence of imagery encoding on memory performance, Newstead and Newstead (1998) did not find statistically significant differences between the imagery encoding group and the control group.
The authors argued that, although imagery is an effective mnemonic strategy, it works only when highly bizarre images are created or when it is used systematically as part of a more general strategy (e.g., the so-called pegword technique or the method of loci). Also, Burns, Jenkins, and Dean (2007, Experiment 2) did not show any decrease in false recall or increase in recall of studied items after relational imagery encoding. However, when an item-specific imagery task was applied, recall of the studied items increased and recall of the critical lures decreased compared to the control and relational imagery groups. These findings are clearly at odds with one another. The discrepancy might result, on the one hand, from slight differences in how participants are instructed to create images (see Burns et al., 2007) or, on the other hand, from differences in the ability to generate images. Mental images are internally generated, and their quality will differ between individuals (Richardson, 1977). Although it is still unclear how to elicit the most vivid images, no one disputes that the role of imagery in reducing associative memory errors is crucial. Despite the existing evidence that imagery affects memory, research so far has not focused on methods that suppress mental images. Such a comparison would clearly show the advantage of the imagery strategy itself. Moreover, it may also constitute a new interpretative framework. Therefore, the main goal of the present studies was to verify how the imagery strategy affects memory accuracy, compared either to a condition in which creating images is limited or to a condition in which participants do not receive any particular instruction. For the purpose of these studies, a new word-whispering technique was developed. This technique is based on reading words quickly or repeatedly, without saying them out loud, but only whispering them. We assume that in these cases the central executive (Baddeley & Hitch, 1974) gives priority to the verbal task, and the ability to create images may be constrained. Although there is no explicit evidence that this technique suppresses images entirely, it possibly makes them much more difficult to generate. This hypothesis is in accordance with studies by Brooks (1967), who reported a conflict between reading messages and imagining the scenes described by those messages. He stated that reading interfered with visualization and described this phenomenon as selective interference. This interference arises when the performance of two similar tasks requires engagement of the same modality (e.g., written presentation and visual activity; for a review, see De Beni & Moe, 2003; De Beni, Moe, & Cornoldi, 1997). A number of studies have also investigated the effect of word production on memory (e.g., Conway & Gathercole, 1987; Gathercole & Conway, 1988; MacLeod, Gopie, Hourihan, Neary, & Ozubko, 2010) and found superior retention of words that were said aloud compared to those that were read silently. MacLeod et al. (2010), for example, differentiated between saying words aloud, mouthing, and silent reading, and revealed a benefit of saying words aloud and of mouthing over silent reading. However, the production effect obtained by saying words aloud or mouthing them is limited to within-subject designs (MacLeod et al., 2010).
The authors argued that the advantage of word production was due to enhanced word distinctiveness in mixed-list designs. In the present studies, a between-subjects design is utilized to reduce this beneficial effect of vocalization.

The findings of previous experiments led us to conduct four experiments that tested the influence of encoding strategy on memory performance. Although previous research has employed the DRM procedure, we utilized categorical study lists (Park, Shobe, & Kihlstrom, 2005). We decided to use this stimulus material because the words are not only associated but also uniformly concrete, unlike the DRM lists. Thus, we see an advantage in using this kind of stimuli when imagery instructions are tested, because it is much more probable that, after being asked to create an image, participants will simply "mentally see" the referent of the word, whereas the DRM stimuli are not equally concrete. In the previous studies that used DRM lists and imagery instructions, the role of imagery was less certain. It cannot be said that participants create real images of abstract words (like occupation, awake, etc.). Thus, it is not clear what other processes may be engaged following imagery instructions in the DRM paradigm. We also assume that engaging the phonological loop (Baddeley & Hitch, 1974) by whispering all the words from the studied lists may to some extent interfere with the visualization of these words and, thus, with imagery. Therefore, we hypothesize that imagery encoding will lead to an increase in memory accuracy and will outperform word-whispering encoding and encoding without engaging any specific strategy (control condition). The accuracy of these strategies will be tested both for recognition memory, that is, the ability to correctly decide whether a person has encountered a stimulus previously in a particular context, and for a free recall task, in which participants are presented with a sequence of items and subsequently asked to recall them in any order (Baddeley, Eysenck, & Anderson, 2009).

Experiment 1

Participants

Forty-two undergraduate students participated in the study. Their ages were between 21 and 27 years (M = 21.74, SD = 1.29). There were 25 women and 17 men. They were tested during regular classes and were not rewarded for participation.

Materials

The first phase of the research was to prepare stimulus materials. As the participants were not native English speakers, the English-language word association norms (e.g., Russell & Jenkins, 1954) and their simple translation into Polish were not appropriate because of possible cultural differences. The aim of this normative study was to develop lists containing the most representative words in six given categories (fruits, vegetables, clothes, animals, kitchen equipment, and computer equipment). Moreover, the words should have comparable imageability. Twenty-six undergraduates, who did not participate in further experiments, were asked to choose the prototype word for each category. Then, the words that occurred most often were chosen to create the category lists. The stimulus materials for the studies consisted of words from the six categories prepared in the normative study. Each word list consisted of 12 common and easy-to-visualize instances of the category, excluding the most common instance, which served as a critical lure. Each participant received six lists to study.
The recognition test was similar to that of Roediger and McDermott (1995) and included 42 items: 12 studied and 30 non-studied words. Within the non-studied items there were three types of words: six critical lures (prototypes, e.g., apple), 12 foil words weakly related to the lists (two per list; e.g., orchard), and 12 foil words unrelated to any items in the six categories (e.g., fan). The weakly related words were randomly drawn from those that appeared below the 13th position in the association frequency rankings in the normative study. Each block of tests started with a studied word. The last word was the critical lure. The rest of the items were randomly mixed.

Procedure

The subjects were randomly assigned and tested in small groups. They were seated so that they would not disturb one another during the encoding phase. The participants were told that the experiment was designed to test their memory and that they would be presented with several 12-word lists to remember. Each subject received a booklet with the instructions followed by the categorical study lists. Half of the subjects received imagery instructions; the other half were given word-whispering instructions. In the imagery condition, the subjects were told to read the words carefully only once, to create a vivid image of the referent of each word, and to memorize it. In the word-whispering condition, the subjects were told to quickly read each word once, whisper it quietly, and memorize it. As the subjects were not given any time limit for this task and studied the lists at their own pace, the average time necessary for completing the task was measured. The words appeared in the booklet in 12-point Times New Roman black font. After reading the lists, there was a retention interval lasting 2 min, during which the subjects were asked to solve mathematical tasks using their own sheets and pencils. After completing these, the subjects received sheets with a recognition test and were asked to indicate whether each item was old (it had been seen earlier on one of the lists) or new (it had not appeared on the study lists). All words appeared in the same font as during the encoding phase and were randomly intermixed. There was no time limit for this task.

Results

T-tests were utilized to measure the difference between the imagery and word-whispering groups in their reactions to different types of items. Our primary interest was in comparing the rate of false alarms involving critical lures. The results are listed in Table 1. Critical lures were incorrectly judged as old much less often in the imagery than in the word-whispering encoding condition, t(40) = 3.49, p < .001, d = 1.1. The participants in the imagery group were also more accurate in responding old to studied items than those in the word-whispering group, t(40) = -2.54, p < .05, d = 0.80. The difference between these two groups in the false alarm rates toward related and unrelated foil items was also tested. Both rates in all conditions were close to zero, but the imagery group was significantly less prone to judge related foil words as studied than the word-whispering one, t(40) = 2.42, p < .005, d = 0.79. A similar pattern of results was also obtained for unrelated foil items, t(40) = 2.63, p < .005, d = 0.73. In this experiment, two different mnemonic strategies were provided.
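For readers who want to reproduce this kind of analysis, a minimal sketch of the independent-samples comparison reported above (a t-test plus Cohen's d from the pooled standard deviation) might look as follows; the two arrays of per-participant false alarm rates are hypothetical placeholders, not the data of this study.

# Minimal sketch: independent-samples t-test with Cohen's d, as used to
# compare false alarm rates between encoding groups. The data arrays are
# hypothetical placeholders, not the study's data.
import numpy as np
from scipy import stats

imagery = np.array([0.00, 0.17, 0.33, 0.17, 0.00, 0.17])  # false alarm rates, imagery group
whisper = np.array([0.50, 0.33, 0.67, 0.50, 0.33, 0.67])  # false alarm rates, word-whispering group

t, p = stats.ttest_ind(imagery, whisper)  # two-tailed independent-samples t-test

# Cohen's d from the pooled standard deviation
n1, n2 = len(imagery), len(whisper)
pooled_var = ((n1 - 1) * imagery.var(ddof=1) + (n2 - 1) * whisper.var(ddof=1)) / (n1 + n2 - 2)
d = (whisper.mean() - imagery.mean()) / np.sqrt(pooled_var)

print(f"t({n1 + n2 - 2}) = {t:.2f}, p = {p:.4f}, d = {d:.2f}")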
In general, it is evident that the imagery strategy makes people less prone to memory distortions and consequently makes their recognition more accurate.

Experiment 2

In Experiment 1, we tested (false) recognition. In Experiment 2, we tested whether imagery had a similar influence on (false) recall. Although Deese (1959) showed a reliable effect of false recall, most research showed that subjects were accurate in recall (e.g., Roediger & Payne, 1985).

Participants

Fifty undergraduate students participated in the study during regular classes. Their ages were between 21 and 27 years (M = 21.5, SD = 1.16). There were 31 women and 19 men. They were not rewarded for participation.

Materials and Procedure

This experiment used the same words from six categories as the previous experiment. The same two conditions, imagery and word-whispering, were provided. The instructions and the mathematical ability test were also the same as in the previous experiment. In the second phase of the experiment, participants were asked to recall as many previously memorized words as possible and to write them down. The time for this task was 2 min. As the recall test was conducted after encoding the whole set of lists rather than after each list (see Roediger & McDermott, 1995), the participants were reminded of the names of the categories.

Results

The rate of correctly recalled words was significantly higher in the imagery group than in the word-whispering group. The results confirm the hypothesis that creating a vivid image of an item is a possible way to decrease false recall.

Discussion

The results from Experiments 1 and 2 are consistent with those obtained in previous studies that focus on associative memory illusions (e.g., Roediger & McDermott, 1995); however, the rate of falsely recalled and recognized items was rather low. This was probably because we utilized category lists of words in this research, and the recall test, instead of being provided after studying each list, was provided after the whole set of lists. The significant difference between the word-whispering and imagery conditions, however, implies that memory sensitivity to distortions changes along with the way of encoding, suggesting that the material was elaborated properly.

As stated above, the subjects memorized the lists at their own pace. The average encoding time for the imagery group (approximately 4.5 min) was longer than for the word-whispering group (approximately 3 min). Thus, the time taken for encoding could be the reason for the different results in these groups. The research described above, as well as that by Foley et al. (2006, 2009), did not control the duration of encoding.

Experiment 3

Based on the average time necessary to memorize the lists in Experiments 1 and 2, we determined that a presentation lasting 4 s for each word would be long enough either to create a vivid image or to activate a purely verbal representation of each word. This is also consistent with the procedure applied by McCabe and Smith (2002), who used 2 and 4 s for encoding each word in their experiment. Moreover, in the subsequent experiments we used a control group that was given no instruction on how to encode the lists.

Participants

Seventy-six undergraduates participated in the study and were tested during regular classes. Their ages were between 19 and 27 years (M = 21.31, SD = 1.35). There were 45 women and 31 men.
Fifty-one were assigned to the two experimental groups (25 to the word-whispering condition and 26 to the imagery condition) and 25 to the control group. They were not rewarded for participation.

Materials

The same stimulus materials used for Experiments 1 and 2 were utilized in this study; however, the words were presented on a screen and their duration was controlled (4 s for each word, with a 1-s break between words). The recognition test included 42 items and was identical to that used for Experiment 1.

Procedure

The subjects were randomly divided into three groups: two experimental groups and one control group. They were then tested in small groups and seated so that they could not disturb each other. The subjects were told that the experiment was designed to test their memory and that they would be presented with several 12-word lists to remember. In the imagery condition, the participants were instructed to create a vivid image of the referent of each word and to memorize it. They were also informed that they had 4 s to create each image. Between each word a blank screen appeared for 1 s. In the word-whispering condition, the subjects were asked to repeat words quickly, whispering them quietly until they disappeared (also 4 s). The procedure in the control condition was the same except that the participants were not given any instructions concerning the method of encoding. The study phase lasted 6 min. After having been presented with the lists, the subjects were asked to solve a mathematical problem. The time permitted for this task was 2 min. After completing this, the participants received a recognition test. The test consisted of 42 items, some of which had already been presented during the study phase and some not. Among the non-presented items there were critical lures, items weakly associated with the words from the lists (related foils), and items that were neither presented nor associated (unrelated foils).

Results

Four one-way ANOVAs with the single between-participants variable encoding condition (with levels word-whispering, imagery, and control) and Duncan post hoc analyses were used to test the effect of the encoding strategy on (a) the false alarm rate for critical lures, (b) the false alarm rate for related foil items, (c) the false alarm rate for unrelated items, and (d) the correct recognition rate. First, the false alarm rate for critical lures was compared. As in Experiment 1, the false recognition of critical lures was found to be substantially lower in the imagery group than in the word-whispering group or the control group, F(2, 73) = 3.73, p < .05, η² = .09. Moreover, correct recognition was lower in the word-whispering condition than in the control condition, F(2, 73) = 3.04, p = .05, η² = .08. The results are displayed in Table 2. Neither the false alarm rate for related foil items, F(2, 73) = 0.75, ns, nor that for unrelated foil items, F(2, 73) = 0.18, ns, differed across the conditions, and both were close to zero. The results obtained by the imagery and word-whispering groups are consistent with those from Experiment 1 and confirm our hypothesis that imagining while encoding plays a crucial role in protecting against false memory. However, an unexpected result was obtained in the control group.
The high rate of correctly recognized items (comparable to the imagery group), along with the high rate of erroneous old reactions to critical lures (comparable to the word-whispering group), suggests that the subjects might have used specific encoding strategies of their own.

Experiment 4

Participants

Seventy-four undergraduates participated in the study. Their ages were between 21 and 27 years (M = 21.7, SD = 1.47). There were 50 women and 24 men. Fifty-two subjects were randomly assigned to the experimental groups (28 to the word-whispering condition and 24 to the imagery condition) and 22 subjects to the control group. The subjects were tested during regular classes and were not rewarded for participation.

Materials and Procedure

As in Experiment 3, two conditions (imagery and word-whispering) were utilized between subjects in the experimental groups, and in the control group the participants were not given any particular encoding instruction. The instructions, duration of presentation, and mathematical ability test were the same as in the previous experiment. In the second phase of the experiment, the subjects were asked to recall as many previously memorized words as possible and to write them down. They were again reminded of the names of the categories.

Results

Two one-way ANOVAs with the single between-participants variable encoding condition (with levels word-whispering, imagery, and control) were conducted to test the differences between the study conditions in correct and false recall. The rates of correctly recalled words in the imagery, word-whispering, and control groups were not statistically different, F(2, 71) = 0.75, ns, and amounted to .44 (SD = .1), .47 (SD = .13), and .44 (SD = .12), respectively. As in Experiment 2, all the erroneously recalled words were analyzed and then classified by two trained coders to identify those that were the most associated with the lists. None of the falsely recalled words was unrelated to any of the lists. In the word-whispering condition, the mean proportion of recalled critical lures (M = .03, SD = .03) was significantly higher than in the imagery condition (M = .006, SD = .01) and in the control condition (M = .01, SD = .01), F(2, 71) = 7.35, p < .001, η² = .17. The difference between the imagery group and the control group was not significant. These results partially confirm the hypothesis that vivid mental images may be responsible for fewer errors, which is consistent with previous research. The fact that there was no significant difference between the imagery group and the control group suggests that the spontaneous encoding of concrete words might to some extent evoke their images (Paivio, 1971). However, other specific processes may also be involved in item processing.

General Discussion

In the present studies, we investigated how the imagery strategy affected memory performance and demonstrated its superiority over other methods in which creating images was limited or not explicitly requested. Four experiments differing in retrieval condition (recall and recognition) and encoding duration (self-paced or fixed) were conducted to test these assumptions. The research contributes to the previous findings on reducing associative memory distortions by means of imagery techniques by adding data from categorical lists. The results consistently suggest that the imagery encoding strategy leads to more accurate memory performance than the word-whispering strategy.
This supports previous findings on imagery encoding (e.g., Foley et al., 2006, 2009; Gunter et al., 2007), which identify imagery as a crucial factor in discriminating between studied and non-studied but strongly associated items. Moreover, imagery encoding may be treated as an active task leading to better memory (see Meijer & Van der Lubbe, 2011). This would mean that this kind of encoding engaged more active involvement because, besides phonological and visual sensory codes, participants also generated a pictorial code. However, the hypothesis that quick or repeated whispering of encoded words would suppress the items' visualization did not find full support.

Experiments 1 and 3 revealed that, in terms of hits, the imagery instruction led participants to be, in general, more accurate than the word-whispering group. This difference was, however, statistically significant only in Experiment 1. In Experiment 3, the proportion of hits did not significantly discriminate the participants applying imagery instructions from those in the word-whispering and control groups. This discrepancy between the experiments may stem from the different encoding durations set up in Experiments 1 and 3. In Experiment 1, subjects in both conditions were instructed to read words quickly and only once; thus, the word-whispering group needed less time to encode the items than the participants in the imagery condition, who, after reading each word, were supposed to create a mental image of its referent. The overall time necessary to encode the lists in the word-whispering group of Experiment 1 was therefore shorter than in the imagery group, as well as in Experiments 3 and 4. Hence, in Experiment 1, not only the imagery strategy but also the longer encoding time could be responsible for the higher accuracy. The influence of encoding duration is consistent with research by McDermott and Watson (2001), which showed that longer presentation duration led to a decrease in memory distortion.

Although there is no significant difference in hits between the control and the imagery group in Experiment 3, a tendency to be more accurate is visible in the control group. This result is surprising and requires comment. The participants in the control group were not instructed to apply any specific way of encoding; therefore, it is highly possible that 4 s for encoding each word was long enough for them to use a personally preferred mnemonic strategy with which they were familiar, one oriented towards memorizing the studied items as well as possible but not towards detecting critical lures that are automatically retrieved due to spreading activation (Collins & Loftus, 1975). This is reflected in the high rate of false alarms the control group demonstrated. One possible explanation lies in Paivio's (1971) dual coding theory, which states that mental images of encoded concrete items may be evoked spontaneously. Consequently, these items are processed separately in two different channels, creating separate representations for the information processed in each of them, and then both visual and verbal codes can be used during retrieval. In other words, the participants might have used their own memorizing strategies, which for some of them might have been based on mental image creation and, for others, on other processes. However, more research should be carried out to precisely identify these processes.
There is also no significant difference between the word-whispering group and the imagery group in terms of the rate of hits, although the mean rate in the former group is lower. Therefore, we can notice a gradation, with the word-whispering strategy as the least accurate strategy in terms of hits and the control group as the most accurate. As we stated above, the participants in the latter group might have used personally preferred strategies to improve their memory for studied items. This implies that the imagery strategy might not be the best for everyone in maximizing accuracy. This may be caused by individual differences in participants' visualization abilities, as well as by the reasons listed by Newstead and Newstead (1998), who pointed out that imagery might be efficient only when highly distinctive images are created or when used systematically as part of a more general strategy.

The proportion of correctly recalled items is relatively low (see Roediger & McDermott, 1995). This might be an effect of the procedure utilized in both of the present recall experiments, which provided the test after studying all six lists rather than after each of the lists. The hypothesis that imagery encoding has a beneficial influence on memory accuracy was also confirmed for recall. The results of Experiment 2 revealed a higher rate of correct recall in the imagery group than in the word-whispering group. However, in Experiment 4 the pattern of correct recall changed, showing no differences between groups. Moreover, the rate of correctly recalled items in Experiment 4 was substantially lower in the imagery condition and higher in the word-whispering condition compared with the corresponding conditions in Experiment 2. This discrepancy between Experiments 2 and 4 could again be a consequence of the different encoding durations in the two studies, as mentioned above, and suggests that in both recognition and recall the time taken to encode items is a crucial factor that, along with different encoding strategies, modifies the correctness of responses and the sensitivity to critical lures.

Also in line with the hypothesis and previous studies (Foley et al., 2006, 2009; Gunter et al., 2007), in all experiments the lowest rate of false alarms was obtained in the imagery condition as compared to the word-whispering condition. It is argued that the participants used the distinctiveness heuristic at the time of the retrieval of information (Israel & Schacter, 1997). They encoded words visually and retrieved them in the same modality; thus, they were probably able to use additional information from the image created at encoding. According to the source-monitoring theory (Johnson et al., 1993), if elements are similar and share common details, it is difficult to remember the appropriate source of an item. The visualized items are more detailed; therefore, during recognition, critical lures are rejected more easily. This is because the critical lures do not have these perceptual elements, and it is easier to reject them and treat them as items not presented in the lists. Because in all the experiments we conducted, the word-whispering group and the control group achieved a higher rate of memory errors than the imagery group, this suggests that a vivid mental image is used at retrieval and helps in judging critical lures as new items.
Although the word-whispering strategy of encoding led, in both recognition experiments, to a substantially higher rate of false alarms than imagery encoding, in Experiment 3 the participants in the word-whispering group appeared to be susceptible to false alarms towards critical lures to a similar degree as the control group. This does not seem to be consistent with the hypothesis that whispering words suppresses the creation of mental images and should therefore increase the false alarm rate relative to the control group. Although in recent research (Forrin, MacLeod, & Ozubko, 2012) the word-whispering strategy is considered to be one of the mnemonic strategies that improve memory for words relative to a silent word-reading strategy, little is said about its superiority in reducing associative memory errors. On the other hand, it is possible that whispering words leads to the spontaneous creation of images; however, they may not be as clear as those created following explicit imagery instructions. Thus, whispering words does not preclude critical lures from acquiring some perceptual features. However, the better memory performance in the imagery condition may result from vivid encoding of studied items, which is consequently helpful in monitoring the source of items in the subsequent memory test. In other words, when encountering critical lures at test, participants may be aware that they were not engaged in creating images for these items, even if they experienced a slight image spontaneously. This problem should be addressed in future studies.

A slightly different pattern of results regarding false recall was revealed in Experiment 4. In accordance with the hypothesis, it was shown that the highest rate of falsely recalled items was obtained in the word-whispering condition as compared to both the imagery and control groups. The word-whispering condition appeared to be the most susceptible to semantic intrusions. This probably occurred because this technique, by engaging the phonological loop, on the one hand did not give the subjects a chance to apply specific memory strategies familiar to them and, on the other hand, suppressed additional processing of the items (e.g., creating images). This remains in accordance with Brooks' (1967) selective interference theory. While memorizing, the subjects in the word-whispering condition were not able to process the items more extensively (e.g., by engaging visualization) because reading the words made them unable to convert the words into any non-verbal form.

Future studies should examine other techniques that might suppress spontaneous images. Such results might show more precisely the influence of mental images on semantic memory errors. While previous studies have used the DRM paradigm to show associative memory distortions, the current experiments applied categorical study lists.
Carbazole and tetrahydro-carboline derivatives as dopamine D3 receptor antagonists with multiple antipsychotic-like properties

Dopamine D3 receptor (D3R) is implicated in multiple psychotic symptoms. Increasing the selectivity for D3R over the dopamine D2 receptor (D2R) would facilitate antipsychotic treatment. Herein, novel carbazole and tetrahydro-carboline derivatives are reported as D3R-selective ligands. Through a structure-based virtual screen, ZLG-25 (D3R Ki = 685 nmol/L; D2R Ki > 10,000 nmol/L) was identified as a novel D3R-selective bitopic ligand with a carbazole scaffold. Scaffold hopping led to the discovery of novel D3R-selective analogs with a tetrahydro-β-carboline or tetrahydro-γ-carboline core. Further functional studies showed that most derivatives acted as hD3R-selective antagonists. Several lead compounds could dose-dependently inhibit MK-801-induced hyperactivity. Additional investigation revealed that 23j and 36b could decrease apomorphine-induced climbing without a cataleptic reaction. Furthermore, 36b demonstrated unusual antidepressant-like activity in the forced swimming tests and the tail suspension tests, and alleviated the MK-801-induced disruption of novel object recognition in mice. Additionally, preliminary studies confirmed favorable PK/PD profiles, no weight gain, and limited serum prolactin levels in mice. These results reveal that 36b provides potential opportunities for new antipsychotic drugs with multiple antipsychotic-like properties.

Introduction

G protein-coupled receptors (GPCRs) are the most prominent family of membrane proteins and comprise more than 800 members, of which the majority are implicated in the pathophysiology of neurodegenerative disorders 1–3. Dopamine receptors belong to the class A GPCR family and mediate the physiological functions of dopamine, a neurotransmitter and hormone with a catecholamine structure. Based on their coupling to either Gαs/olf proteins or Gαi/o proteins to stimulate or inhibit the production of the second messenger cAMP, respectively, dopamine receptors are categorized into two subfamilies: D1-like (D1R and D5R) and D2-like (D2R, D3R, and D4R) 4. D3R belongs to the same D2-like receptor family as D2R, sharing a high level of homology with D2R. Since it was cloned and characterized, the D3R has been a target of therapeutic interest due to its relatively localized expression within mesolimbic neurocircuitry, including the nucleus accumbens, islands of Calleja, and ventral striatum 5–7. The Ser9Gly polymorphism in the first exon of the D3R gene is associated with schizophrenia 8. The level of D3R in the striatum increases in patients with schizophrenia and decreases in those treated with antipsychotic drugs 9.

A number of studies have employed a strategy for schizophrenia drug development that avoids the EPS elicited by many antipsychotics: targeting D3R while avoiding D2R effects 10. As shown in Supporting Information Table S1, diverse bitopic ligands with high D3R selectivity have been explored to develop novel D3R compounds with clinical ambitions, exploiting unique interactions at the D3R binding pocket 11–22. Novel therapeutic applications of these known D3R-selective molecules have been disclosed, which encouraged the continued discovery of novel series of D3R-selective ligands 23. D3R-selective ligands in the clinic and in clinical trials are shown in Fig. 1A.
S33138 was identified as a preferential dopamine D3 versus D2 receptor antagonist that preserves and enhances cognitive function and increases frontocortical cholinergic transmission 24,25. ABT-925 is a selective D3R antagonist with an approximately 100-fold higher in vitro affinity for dopamine D3 versus D2 receptors; it was examined in a phase I trial for acute exacerbation of schizophrenia but failed in phase II. The reason for the failure might be insufficient occupancy of D3 receptors at the doses used in this trial 26,27. GSK598809, a D3R antagonist with >100-fold selectivity for D3 over D2 receptors, belongs to a novel series of 1,2,4-triazol-3-yl-azabicyclo[3.1.0]hexanes with high affinity and selectivity for the D3 receptor and excellent pharmacokinetic profiles 28,29. The treatment of substance addiction by GSK598809 has been examined in obese individuals and severe chronic smokers (i.e., NCT1039454 and NCT01188967 on ClinicalTrials.gov). However, the potential cardiovascular liability of GSK598809 at high doses raised the hurdle for clinical trials 30. Buspirone was developed as a D3-preferring 5-HT1A antagonist studied in alcoholics and found to reduce anxiety, in particular during withdrawal (i.e., NCT00360191 and NCT00875836 on ClinicalTrials.gov). Interestingly, selective D3R engagement of buspirone was only observed after oral dosing, while intramuscular administration also engaged the D2R, indicating that buspirone metabolites generated by first passage through the liver could be the actual active moiety 31. RGH-188 (cariprazine, CRP) was characterized as a D3R-preferential binding ligand with a 10-fold D3R/D2R selectivity. Notably, CRP exerted a cognitive-enhancing effect in in vivo tests, which was thought to be related to its high affinity and preference for D3 versus D2 receptors 32. F17464 was discovered as a unique >80-fold D3 over D2 receptor preferential antagonist, which demonstrated selective D3 receptor occupancy in a human positron emission tomography (PET) imaging study 33. In a phase II acute exacerbation study in schizophrenia, clinical antipsychotic efficacy was observed with no weight gain and no extrapyramidal disorder 34. Furthermore, F17464 could rescue valproate-induced impairment in a rat social interaction model of autism, probably by increasing dopamine release in the prefrontal cortex and lateral forebrain–dorsal striatum 35. Although no highly selective D3 ligands have been approved by the US Food and Drug Administration (FDA), selective D3 ligands have demonstrated many advantages in clinical trials for mood regulation, cognitive protection, and low side effects.
However, the development of selective D3R ligands has been challenging due to the high sequence identity and homology. The D2R and D3R share 74% identity between their transmembrane domains (TMs) and 94% identity between their putative orthosteric binding sites (OBS), where the endogenous agonist dopamine binds 36. To improve selectivity, most D3R ligands share several well-known scaffolds comprising a primary pharmacophore that binds to the OBS and a secondary pharmacophore that binds to the second binding pocket (SBP) or an allosteric binding site (ABS) 37. GPCR ligands with such a scaffold have been described as bitopic binders, which can confer high subtype affinity and selectivity 38,39.

In this study, we report the discovery of a novel series of D3R-selective antagonists with desirable antipsychotic potency in vivo. From the initial hit compounds containing the carbazole scaffold, over 60 bitopic structural analogs were synthesized and evaluated for a comprehensive investigation of structure–activity relationships (SAR), which led to the identification of compound 36b as a potent and blood–brain barrier (BBB)-penetrable D3R-selective antagonist suitable for in vivo characterization. Furthermore, compound 36b exhibited potential antipsychotic activities in vivo with moderate safety profiles regarding psychotic side effects and hERG channel inhibition. The in vivo pharmacokinetic (PK) profile and exquisite target selectivity of compound 36b were also satisfactory. Taken together, compound 36b was characterized as an excellent lead compound for further optimization of novel D3R-selective antagonists with distinct antipsychotic-like effects in vivo.

Structure-based virtual screening

Our virtual screening workflow is summarized in the protocol in Fig. 1B. A 3D shape-based similarity screen connected to a docking-based virtual screening approach was used in our study 40–42. Three compounds with high affinity and selectivity for the D3R, (R)-PG468, RGH-188, and GSK598809, were selected as the benchmark compounds for their representative secondary pharmacophore structures, which were expected to benefit high subtype selectivity. After preparing the co-crystal structure (PDB ID: 3pbl, Supporting Information Table S2), these three compounds were docked into the D3R using the induced-fit docking module of Maestro, obtaining three binding conformations for each compound (Supporting Information Fig. S1). Aiming to find more potent ligands, 3D shape-based similarity virtual screening was employed to search more than 600,000 compounds from the SPECS chemical library, the ChemDiv chemical library, and the Chinese National Compound Library of Peking University (PKU-CNCL) databases with the ROCS 3.1.2 program (OpenEye Scientific Software, Inc.). As a result, the top 3000 compounds were retained based on the 'Shape Tanimoto' score. Next, the Glide molecular docking module was used to screen the compounds selected from the 3D similarity search, which were sorted by docking score, and all compounds within the top one percent were retained. Because the residue Asp110 is essential for binding affinity, compounds forming an ionic interaction with Asp110 were isolated using the PoseFilter module in the Maestro platform. Based on the docking results and physicochemical properties, 65 compounds were purchased for further biological activity tests (Supporting Information Fig. S2). All active compounds passed the pan-assay interference compounds (PAINS) test by PAINS-remover 43.
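To make the screening funnel concrete, its staged filtering logic can be sketched as below. This is an illustrative outline only: shape_tanimoto(), glide_dock(), and forms_salt_bridge() are hypothetical stand-ins for the ROCS, Glide, and PoseFilter steps, not real vendor APIs, and the random stubs exist solely to keep the sketch self-contained and runnable.

# Illustrative sketch of the three-stage virtual-screening funnel described above.
# The three helpers are hypothetical placeholders for ROCS (shape similarity),
# Glide (docking), and PoseFilter (interaction filtering); a real pipeline would
# call the corresponding vendor tools instead.
import random

def shape_tanimoto(mol, ref):
    return random.random()            # placeholder shape-similarity score in [0, 1]

def glide_dock(mol, pdb_id):
    return -10.0 * random.random()    # placeholder docking score (more negative = better)

def forms_salt_bridge(mol, residue):
    return random.random() < 0.2      # placeholder geometric filter on the docked pose

def screen_library(library, references, n_shape=3000, top_frac=0.01):
    # Stage 1: 3D shape similarity against the benchmark D3R ligands; keep the top 3000.
    scored = [(max(shape_tanimoto(mol, ref) for ref in references), mol) for mol in library]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    shape_hits = [mol for _, mol in scored[:n_shape]]

    # Stage 2: dock into the D3R structure; keep the top 1% by docking score.
    docked = sorted(shape_hits, key=lambda mol: glide_dock(mol, "3pbl"))
    dock_hits = docked[:max(1, int(top_frac * len(docked)))]

    # Stage 3: keep only poses forming the key ionic contact with Asp110.
    return [mol for mol in dock_hits if forms_salt_bridge(mol, residue="Asp110")]

hits = screen_library([f"mol_{i}" for i in range(600000)], ["(R)-PG468", "RGH-188", "GSK598809"])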
For the primary screen, a single high concentration (10 µmol/L) of each test compound was evaluated by radioligand binding assays on CHO cells stably expressing hD2R and hD3R (Table S2), and those compounds that exhibited >75% inhibition were rescreened in full concentration–response format to estimate their affinity (Ki) values. Risperidone (RPD) and CRP were used as the positive drugs. From the 65 selected compounds, ZLG-25, ZLG-61, and ZLG-62 showed >90% inhibition of the D3R at 10 µmol/L (Table S3). Further tests assessed the affinity values of ZLG-25 (Ki = 685 ± 4.5 nmol/L), ZLG-61 (Ki = 641.5 ± 171.8 nmol/L), and ZLG-62 (Ki = 3067.0 ± 11.0 nmol/L) for D3R (Fig. 1C). The D2R Ki values of these compounds were also tested and found to be 10–20-fold higher than those for D3R. Moreover, representative molecular properties of all the tested compounds were predicted by the Maestro 11.5 QikProp module. The CNS drug-like properties of the selected compounds were identified as favorable, defined as logP < 3, polar surface area (PSA) < 90 Å², and BBB permeability with ADMET_BBB_Level < 3 44. Among the selected compounds, ZLG-25 showed superior selectivity, which provided a potential starting scaffold for structural optimization to improve biological activities and physicochemical properties (Supporting Information Table S4).

Molecular modeling of ZLG-25 and design of bitopic ligands

To explore the binding properties of ZLG-25 at D3R, induced-fit docking (Fig. 2A-B) and molecular dynamics (MD) simulation (Supporting Information Fig. S3) were performed in Schrödinger 2018.1. The MD results were well consistent with the docking results, depicting a bitopic binding mode. The carbazole moiety of ZLG-25 was found located at the OBS, forming π-π interactions with Phe345 and Phe346. The protonated nitrogen atom in the linker chain formed a salt bridge with the carboxylate of Asp110, offering a key and strong interaction. The imidazole moiety was oriented toward the SBP, constructed by extracellular loops 1 and 2 and helices I, II, and VII, forming an ionic interaction with Glu90. Based on the scaffold of ZLG-25, a series of more typical bitopic molecules were expected to be obtained as D3R-selective ligands by attaching a D3R-preferring SBP-binding motif to the amine group (Fig. 2C). A carboxamide-linked aromatic group replaced the imidazole as the SBP-binding moiety, as it was thought to occupy more space in the SBP and provide more interactions than the imidazole moiety. A scaffold hopping, or structural simplification, strategy was further used to improve the rigid carbazole structure by moving the nitrogen atom in the linker chain to ring B of the carbazole moiety 45. In 'hopping I', the secondary amine was converted to a tertiary amine at position 4, giving the tetrahydro-β-carboline scaffold. Furthermore, 'hopping II' was carried out by moving the tertiary amine to position 3 to obtain the tetrahydro-γ-carboline scaffold (Fig. 2C). Therefore, starting from ZLG-25, a series of carbazole and tetrahydro-carboline derivatives were designed to discover novel D3R-selective ligands.
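As background for the Ki values quoted here and throughout, competition-binding Ki values are conventionally derived from the measured IC50 with the Cheng–Prusoff correction. The radioligand concentration [L] and its dissociation constant Kd are assay parameters not stated in this text, so the expression below is the standard relation rather than the authors' documented protocol:

\[
K_i = \frac{\mathrm{IC}_{50}}{1 + [L]/K_d}
\]

For example, when the radioligand concentration is chosen close to its Kd, the correction simply halves the IC50.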
Chemistry

In Scheme 1A, carbazoles with different alkyl (methyl, ethyl, and isopropyl) groups were used as the starting materials. The formyl group at position 4 was incorporated in a solution of DMF and POCl3 in 80%–95% yield. The subsequent incorporation of the linker moiety was achieved by reductive amination, affording derivatives 3a–c, with NaBH(OAc)3 added in multiple portions. However, benzaldehyde 7 was not obtained by the same procedures as 2a–c, but rather through a Fischer indole synthesis and oxidative dehydrogenation in one pot, giving intermediate 6 (Scheme 1B). It was found that if the reaction was carried out in the absence of an acidic environment at 100 °C, the phenylhydrazine had to be used as its hydrochloride for the formation of NH4Cl. Aromatization by oxidation subsequently occurred in one pot under O2. Intermediates 6 were oxidized to the aldehyde, affording intermediates 7. Derivatives 8 were obtained in a similar fashion to 3a–c.

As shown in Scheme 1C, cyclohexanone (4) and methyl 4-hydrazineylbenzoate (9) were refluxed in AcOH to synthesize intermediate 10 in high yield. Two further steps converted the methyl ester intermediate into the formyl intermediate 12. A similar process was performed to obtain derivatives 13. To further understand the importance and effect of the SBP-binding moiety, derivatives 17a–f were synthesized by replacement of the imidazole moiety with various aromatic rings (Scheme 1D). Compound 2b was reacted with 3-aminopropanol or 4-amino-1-butanol and reduced to obtain intermediates 14a/b. The secondary amines were subsequently protected with a Boc group and sulfonylated with a benzenesulfonyl group, affording intermediates 16a/b. Subsequent substitution reactions and N-Boc deprotection gave the desired products 17a–f.

Derivatives 22a–c were accessed through a three-step synthesis outlined in Scheme 2A. tert-Butyl (3-aminopropyl)carbamate (18) was used to prepare intermediate 20a in combination with dimethylcarbamic chloride and dimethylsulfamoyl chloride. Compounds 20b/c were obtained through amide coupling in the presence of EDCI and DMAP, followed by reductive amination to furnish the final products 22b/c. Additionally, a longer chain linker moiety was explored with the starting material 19 through the same three-step synthesis route to afford the desired final derivatives 23a–m (Scheme 2A). A cyclohexyl ring as linker was also investigated. As detailed in Scheme 2B, tert-butyl ((1R,4R)-4-aminocyclohexyl)carbamate (24) and 1H-indole-2-carboxylic acid (25) were utilized to prepare intermediate 26 and gave the final compound 27 through a similar two-step, one-pot method. Two intermediates, 28a/b, were obtained to synthesize 29a/b with tert-butyl (4-bromobutyl)carbamate in the presence of potassium carbonate. Treatment of 29a/b with TFA afforded amines 30a/b. To furnish the desired targets 31a–e in high yield, amide coupling catalyzed by EDCI and DMAP was used (Scheme 2C).

Tetrahydro-γ-carboline derivatives 36a–k were prepared as shown in Scheme 3A, from the starting materials phenylhydrazines 32a/b and piperidin-4-one. Compounds 34a/b served as the critical intermediates for the synthesis, following the same protocol as for intermediate 10.
Intermediates 35a/b were prepared with commercially available tert-butyl (4-bromobutyl)carbamate. Then, treatment of compounds 35a/b with CF3COOH in CH2Cl2 afforded the primary amine intermediates, from which the target compounds 36a–k were obtained by the same amide coupling reaction. Tetrahydro-β-carboline derivatives 42a–f were synthesized from the starting material 2,3,4,9-tetrahydro-1H-pyrido[3,4-b]indole (37) (Scheme 3B). The piperidine NH was selectively protected with a Boc group, and then the indole NH was alkylated with CH3I or CH3CH2I to give intermediates 39a/b. After treatment with CF3COOH in CH2Cl2 afforded the secondary amine intermediates 40a/b, compounds 41a/b were obtained by reaction with tert-butyl (4-bromobutyl)carbamate. The final compounds 42a–f were prepared by the same process as 36a–k.

Structure–activity relationships study

To increase the binding affinity of the hit compound ZLG-25 at the D3R, a SAR study and scaffold hopping of the carbazole scaffold were carried out. More than 60 derivatives were synthesized, and their binding affinities and selectivity profiles at D2R and D3R were determined. To explore the key role of the carbazole moiety in the high selectivity for D3R over D2R, different alkyl substitutions on the 1-position nitrogen of the carbazole moiety and the impact of aromaticity of the carbazole nucleus were first examined. The linker and imidazole moiety were then replaced with other entities to increase the binding affinity at D3R, as described above.

Radioligand competition binding assays were performed by measuring the ligands' ability to compete with [3H]spiperone in CHO cells stably expressing hD2R and hD3R 9,39,46–49. The results are expressed as inhibition values at 10 µmol/L for all the derivatives presented in Tables 1–5. Ki values were determined for any compound with an inhibition above 75% at D2R or D3R. Following the discovery of novel structures with high selectivity for D3R, a structural analysis to understand the basis for this was initiated from modification of the carbazole moiety. As shown in Table 1, alkylation at the N position of the carbazole was explored, and the comparison of D3R affinities showed that the ethyl (3b) and isopropyl (3c) groups had improved potency compared to the methyl group (3a) and hydrogen (8). The binding mode of ZLG-25 (3b) displayed strong π-π interactions between the carbazole and the aromatic residues (Phe345/Phe346/His349). To verify the importance of aromaticity, 2,3,4,9-tetrahydro-1H-carbazole was introduced into the analogs (13), which showed decreased potency as expected. A linker chain composed of four carbon atoms (17a) showed comparable potency to ZLG-25 (Table 1). Subsequently, a survey of heterocyclic replacements for the imidazole moiety was performed, such as 1,2,3-triazole (17b), 1,3,4-triazole (17c), benzimidazole (17d), purine (17e), and 6-Cl-purine (17f), among which only 17b exhibited an improved binding affinity compared to ZLG-25.
As shown in Table 2, when the linker length was kept at three carbon units, replacement of the imidazole group by dimethylurea (22a) failed to give improved binding profiles. The aromatic carboxamide moieties (22b/c) were also explored but were found to be less potent than ZLG-25. When the propyl linker was replaced with a butyl (23a) or cyclohexyl (27) linker, both compounds showed a significant improvement in D3R binding affinity (Ki = 62.6 nmol/L for 23a and Ki = 52.1 nmol/L for 27). Intending to improve the D3R binding affinity, we reasoned that the butyl linker would likely be suitable for exploring the second binding region of bitopic compounds. Next, we investigated replacement of the indole-2-carboxamide moiety of 23a with different aromatic carboxamide moieties (23b–m) to fit the SBP. Compared with compound 23a, the 1H-benzo[d]imidazoles (23b/23d) exhibited decreased binding affinity at D3R (Ki = 577.6 and 125.1 nmol/L, respectively). The pyrazolo[1,5-a]pyridine (23c) showed a D3R binding affinity (Ki = 45.7 nmol/L) comparable to 23a. Moreover, 1,3-dihydro-2H-benzo[d]imidazol-2-one (23e) and benzo[d]oxazol-2(3H)-one (23f) were both used, as they have been reported in numerous D3R-selective compounds, showing Ki values of 99.3 and 95.1 nmol/L, respectively. Further replacements with larger moieties were also explored, such as the 1,1′-biphenyl-4-carboxamide moiety (23g), which led to a decreased D3R affinity, while the 4′-acetyl-[1,1′-biphenyl]-4-carboxamide moiety (23h) showed an increased D3R affinity (Ki = 24.2 nmol/L). When the substituent groups at the para-position of the benzamide were converted into pyridyl groups (23i/j), the binding affinity was found to have an approximately 10-fold improvement for D3R versus the corresponding biphenyl analogs.

To investigate the potential interactions of N-4 as well as improved physicochemical properties, an ethyl or propyl group was introduced onto the N atom of the linker of the bitopic ligands. However, compared to the original compounds 23a/23c/23j, the alkylated derivatives were found to have lower D3R binding affinity. For 23a, alkylation of the benzylic N atom unexpectedly increased the D2R binding affinity (D2R Ki = 332.6 nmol/L for 31a and D2R Ki = 300.8 nmol/L for 31c) (Table 3). It is speculated that alkylation of the benzylic N atom of 23a changed the bitopic binding mode by turning the 1H-indole-2-carboxamide moiety into an OBS-binding part.

Binding studies continued with the tetrahydro-carboline core structure to determine whether the scaffold hopping preserved D3R affinity and selectivity. Regarding superior binding structures for the SBP, aromatic carboxamide moieties were also explored in the tetrahydro-carboline scaffold to obtain bitopic ligands, and the binding affinities are shown in Tables 4 and 5.
With the SBP moiety replaced by 1H-indole-2-carboxamide, two tetrahydro-γ-carboline analogs (36a/36k) were found to have improved binding affinities (Ki = 10.6 and 15.5 nmol/L) and excellent D3R selectivity profiles, with no distinct difference between the N-methyl and N-ethyl substituents at the tetrahydro-carboline. The same SAR was found for the tetrahydro-β-carboline analogs (42a/b). Ligand 36c demonstrated a drastic decrease in D3R binding affinity. Interestingly, owing to the excellent D3R binding properties observed with compound 23l, quinoline-3-carboxamide (36d) was also explored; however, it showed dramatically reduced D3R affinity. Almost all other aromatic carboxamide replacements, as present in compounds 36f–j, led to improved D3R binding affinity, especially 36f and 36i with approximate Ki values of 4 nmol/L. Ligand 36g (Ki = 29.0 nmol/L) was found to have a binding affinity comparable to 23j. However, for the tetrahydro-β-carboline analogs (42c/g/h), D3R affinities dropped dramatically. Compared to the carbazole scaffold, the tetrahydro-carboline scaffold linked to the 9H-carbazole-3-carboxamide moiety (36h, Ki = 19.6 nmol/L, and 42d, Ki = 36.7 nmol/L) was more potent, affording an increase in D3R binding. Compound 36j (Ki = 14.2 nmol/L), with a tetrahydro-γ-carboline scaffold linked to the 1H-benzo[d]imidazole moiety, showed better binding affinity to D3R than the compounds with a tetrahydro-β-carboline scaffold (42e, Ki = 203.0 nmol/L, and 42f, Ki = 300.0 nmol/L). The same SAR was obtained from the tetrahydro-γ-carboline derivative 36i (Ki = 3.7 nmol/L) and the tetrahydro-β-carboline derivatives (42i, Ki = 112.0 nmol/L, and 42j, Ki = 149.0 nmol/L).

Functional study by measuring p-ERK1/2-mediated D3 receptor signaling

To assess the functional activities of these novel D3R-selective ligands, p-ERK1/2, a key molecule in signal transduction, was measured. To ensure the reliability of the ERK1/2 phosphorylation assays for evaluating function at D2/3R, several typical D2/3R modulators were used as positive controls. As shown in Supporting Information Fig. S4, dopamine significantly increased the level of pERK1/2 in CHO-hD2R and CHO-hD3R cells. Additionally, the full D2/3R agonist quinpirole, at 20 µmol/L, induced full ERK1/2 phosphorylation with effects similar to dopamine. PD128907 is recognized as a D3R-selective agonist with a D3R Ki of 1 nmol/L and a D2R Ki of 1183 nmol/L 50. As a positive control, PD128907 robustly stimulated full ERK1/2 phosphorylation in CHO-hD3R cells but not CHO-hD2R cells, indicating D3R-selective agonism. As antagonists or weak partial agonists, cariprazine and haloperidol did not exhibit significant effects on ERK1/2 phosphorylation. GSK598809 (a highly D3R-selective antagonist, D3R Ki = 2.9 nmol/L, D2R Ki = 2110 nmol/L) was also measured for ERK1/2 phosphorylation at D2R and D3R, and no significant effects were found. As shown in Fig. 3A, dopamine significantly increased the level of pERK1/2 in hD3R-CHO cells, but compounds 23j, 36a, 36b, and 36f alone had no influence on ERK1/2 phosphorylation at hD3R. Competitive antagonism assays in the presence of 20 µmol/L dopamine were also performed, which showed that compounds 23j, 36a, 36b, and 36f could dose-dependently inhibit the ERK1/2 phosphorylation mediated by D3R (Fig. 3C).
From the results (Fig. 3B), we found that compounds 23j, 36a, and 36b showed weak partial agonism at D2R at high concentrations. These compounds could inhibit, but not eliminate, the ERK1/2 phosphorylation induced by dopamine (Fig. 3D). Compound 36f showed no functional effects at D2R. The selected compounds with excellent D3R binding affinity failed to stimulate ERK1/2 phosphorylation in CHO-hD3R cells, which corroborated that the carbazole and tetrahydro-carboline D3R-selective ligands possessed no intrinsic agonistic activity and acted as D3R antagonists. All the results indicated 23j, 36a, 36b, and 36f to be D3R-selective antagonists, with IC50 values of 17.7, 154.6, 594.4, and 673.9 nmol/L, respectively.

Binding modes analysis

To explain the SARs of the novel bitopic scaffolds, the representative compounds 23j, 36b, 36f, and 42a were docked into the OBS of the D3R (PDB ID: 3pbl). The OBS is a hydrophobic pocket surrounded by aromatic residues (Phe188, Phe345, Phe346, His349, and Tyr365), in which the hydrophobic carbazole moiety of compound 23j was deeply buried (Fig. 4A). The aromatic carbazole structure of compound 23j formed π-π interactions with Phe345, Phe346, and His349, and the hydrogen atom of its NH2+ at the benzylic position of the carbazole group formed an ionic hydrogen bond with Asp110. In comparison with the carbazole moiety, the carboline region was similarly located at the OBS but exhibited fewer π-stacking interactions with Phe345, Phe346, and His349, as could be seen for 36b, 36f, and 42a (Fig. 4B, Supporting Information Fig. S5A and S5B). The hydrogen atom of the NH+ at the 2-position of the carboline group formed an ionic hydrogen bond with Asp110 to stabilize the OBS binding conformation. For the SBP region, the 4-(pyridin-3-yl)benzamide moiety was predicted to dock at the edge of the whole pocket, indicating a bitopic binding mode. The hydrogen atom of the amide formed a hydrogen bond with Glu90, but the pyridin-3-yl group was almost exposed to water due to the long linear structure of 23j. A similar binding mode to 23j was observed for 36f, whose [1,1′-biphenyl]-4-carboxamide moiety extended into the solvent. The 1H-benzo[d]imidazole-2-carboxamide of 36b and the 1H-indole-2-carboxamide of 42a both formed hydrogen bonds between their NH and Glu90, suggesting that interactions with Glu90 played key roles in the bitopic binding mode.

To verify the binding modes under dynamic conditions, 300 ns MD simulations of compounds 23j, 36b, 36f, and 42a were performed with the Desmond software. Compound 23j exhibited an unstable binding conformation with a fluctuating ligand RMSD (Supporting Information Fig. S6A), and therefore the entire 300 ns MD was analyzed. The complex of compound 36b was stable after 30 ns from the beginning of the simulations (Fig. S6B), and the trajectory from 50 to 150 ns was extracted and analyzed. The MD analysis of 23j is shown in Fig. 4C and E, indicating results consistent with docking.
The stable interactions between Asp110 and the NH2+ at the benzylic position of the carbazole group existed during 98% of the simulation time as an H-bond or ionic bond. The aromatic carbazole was predicted to form abundant hydrophobic interactions with Phe345, Phe346, and His349 during 65%, 74%, and 85% of the simulation time, respectively, indicating that the hydrophobic carbazole group could stabilize the OBS of hD3R. Nevertheless, the 4-(pyridin-3-yl)benzamide moiety of 23j seemed to interact broadly with the amino acids at the gorge entrance, such as Val86, Glu90, Tyr365, Ser366, Thr369, and Tyr373. The group embedded in the SBP swung freely in the solvent in the absence of stabilizing interactions, which might explain the fluctuating RMSD of the whole system. The MD analysis of 36b (Fig. 4D and F) showed that the carboline region of 36b was well embedded in the OBS, forming stable interactions with Asp110 during 75% of the simulation time as an H-bond, ionic bond, or water bridge. Compared with 23j, the ionic bond formed by 36b with Asp110 seemed more prominent among all its interactions. However, fewer hydrophobic π-π stacking interactions were predicted for the carboline group than for the carbazole group. The same applied to the SBP-binding moiety of 36b, viz., fewer interactions were observed during the extracted simulation time. SBP binding by the 1H-benzo[d]imidazole-2-carboxamide significantly contributed to the stable conformation, especially the specific interactions with Glu90, which played a key role during about 60% of the time as H-bonds and water bridges. Unfortunately, the conformation of hD3R with 36f or 42a was constantly perturbed during the 300 ns MD, and it was difficult to extract a more stable conformation (Fig. S6C and D).

Binding selectivity profiles

The interactions between representative compounds and other receptors related to psychotic disorders were evaluated, and a selectivity profile was created using additional receptors (including other hDR subtypes). Compounds 23j and 36b showed excellent in vitro selectivity profiles with a preferred affinity for the D3R.

Pharmacokinetic profiles and brain penetration properties study

Pharmacokinetic profiles of the selected compounds were studied extensively in ICR mice after i.v. administration at 3 mg/kg and p.o. administration at 10 mg/kg. As shown in Supporting Information Fig. S7, the plasma concentrations of each compound were monitored for 24 h. The related parameters were calculated and are shown in Table 7. Both compounds showed rapid distribution (Tmax = 0.17 and 0.25 h for 23j and 36b) at a dose of 10 mg/kg (p.o.). Nevertheless, compound 36b showed a much more effective blood plasma concentration (Cmax = 505.2 ng/mL) compared to that of compound 23j (Cmax = 78.8 ng/mL) for p.o. administration. Similarly, the Cmax for i.v. administration of compound 36b was found to be more than one-fold higher than that of 23j. Compounds 23j and 36b had comparable half-lives (t1/2 = 1.1 h) for 10 mg/kg p.o. administration.
administration. Finally, the oral bioavailability of 23j was rather low, and it did not reach a sustained pharmacologically relevant plasma concentration during the oral dosing period. Despite the high on-target potency and good selectivity of the carbazole scaffold, the results for 23j indicated a poor PK profile (high i.v. clearance and no oral bioavailability). After hopping to the tetrahydro-carboline structure, the oral bioavailability was greatly improved, as can be seen from compound 36b (F = 39.7%), accompanied by lower i.v. clearance, perhaps owing to lower first-pass metabolism. Further tests of their brain penetration properties were performed at 0.5 and 2.0 h, and the brain/plasma ratios were calculated. The results showed that the brain/plasma ratio of 23j was lower than one at 0.5 h after a single administration (3 mg/kg i.v.) and reached 1.5 at 2.0 h owing to lower clearance in the brain. After a single oral administration, the concentration of compound 23j in the brain was too low to measure. The BBB permeability of compound 36b was found to be prominent after a single i.v. administration, achieving a brain/plasma ratio as high as 4.5 at 0.5 h, and this ratio was maintained above five until 2.0 h. Notably, the brain concentration of compound 36b rapidly reached 1553 ng/mL, which was higher than the i.v. Cmax in plasma. Considering that the brain/plasma ratio after a single oral administration was calculated to be 0.74 to 0.97, compound 36b seemed to provide moderate brain exposure (from 310 to 180 ng/mL). As can be seen from the pharmacokinetic profile of 23j, the carbazole derivatives may have poor oral bioavailability. Therefore, in order to evaluate the candidate compounds in the subsequent catalepsy and hyperactivity tests, i.p. administration was used to assess the major therapeutic effects and side effects.

hERG channel blockade and cytotoxicity tests

Unwanted inhibition of the hERG channel can induce severe cardiac arrhythmias, such as long QT syndrome characterized by prolonged QT intervals and Torsades de pointes. 51 The inhibition of the hERG channel by the selected compounds was further tested by patch-clamp assays to predict cardiovascular toxicity (Supporting Information Fig. S8). 52 As shown in Table 8, RPD and compounds 23j, 36a, 36b and 36f were tested for their hERG blockade, showing IC50 values of 96, 2620, 1410, 1750, and 46,950 nmol/L, respectively. Compound 36f (hERG IC50 > 40 μmol/L) exhibited a good safety profile in terms of cardiotoxicity. Compound 23j had lower hERG inhibition than the others. Furthermore, 36a showed potential selectivity for D3R over hERG. Compound 36b was found to have higher hERG inhibition but still had a comparative advantage over RPD (hERG IC50 = 96 nmol/L as tested under the same conditions). Next, further cytotoxicity tests to evaluate the safety of the candidates were performed in HEK293T, BV2 and PC12 cells. The results, shown in Supporting Information Fig. S9, indicated that 36b and 36f exhibited smaller effects on the viability of HEK293T and BV2 cells than RPD. In PC12 cells, 36b showed the best safety profile. However, compounds 23j and 36a unexpectedly exhibited significant cytotoxicity at high concentrations.
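The hERG IC50 values in Table 8 come from fitting the normalized tail-current responses across the applied concentration series, and the cytotoxicity IC50 values come from analogous dose–effect fits. As a minimal illustrative sketch (the response values below are invented placeholders, not the recorded Qpatch or CCK8 measurements), such a concentration–response fit can be done in Python with SciPy:

import numpy as np
from scipy.optimize import curve_fit

def hill_inhibition(conc, ic50, hill):
    # Fraction of the baseline tail current remaining at each concentration.
    return 1.0 / (1.0 + (conc / ic50) ** hill)

# Concentrations match the series described in the Experimental section
# (umol/L); the normalized responses are illustrative placeholders.
conc = np.array([0.412, 1.23, 3.70, 11.1, 33.3, 100.0])
resp = np.array([0.95, 0.83, 0.61, 0.34, 0.16, 0.06])

(ic50, hill), _ = curve_fit(hill_inhibition, conc, resp, p0=(5.0, 1.0))
print(f"hERG IC50 = {ic50:.2f} umol/L, Hill slope = {hill:.2f}")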
Acute toxicity assays and catalepsy tests in mice

One of the major obstacles to using antipsychotics is their propensity to produce extrapyramidal motor side effects. Catalepsy tests are used to detect extrapyramidal side effects in mice during antipsychotic drug discovery. 53,54 Before the catalepsy tests, preliminary acute toxicity assays in mice were performed to determine the maximum tolerated dose of the tested compounds, as before. 55 All mice were injected intraperitoneally with the tested compounds and observed continuously for 24 h. The results are presented in Supporting Information Table S6; compounds 36b and 36f were found to be well tolerated even at the highest dose (600 mg/kg, i.p.). Compound 23j showed dose-dependent lethal toxicity from 240 to 600 mg/kg; at doses of 240 mg/kg or above, death often occurred after 12 h. In the case of 36f, doses of 480 mg/kg or above were required to cause death in individual mice. With the maximum tolerated doses of the tested compounds established, catalepsy was then tested for each compound over increasing dose gradients by i.p. administration. 56 The number of mice with a cataleptic reaction in each group was recorded. The ED50, minimum effective doses (MED), and peak effects (percent of cataleptic animals) were calculated accordingly. As shown in Table 8, RPD was used as the positive control drug and induced a significant cataleptic effect from 2.4 mg/kg. Almost all mice exhibited catalepsy at the high doses of RPD, while the ED50 was approximately 1.51 mg/kg, consistent with its strong antagonism of D2R. In contrast, compounds 23j, 36a, 36b and 36f did not elicit catalepsy at any tested dose (i.p.). The highest safe dose of 23j was found to lie between 240 and 480 mg/kg (i.p.), since 240 mg/kg (i.p.) induced no catalepsy but 480 mg/kg (i.p.) was lethal for mice. For 36a/b/f, 480 mg/kg (i.p.) was found to be safe, and no higher dosage was investigated, considering their effective doses of 10–30 mg/kg. These results showed that this series of D3R-selective ligands had a high threshold for catalepsy, which was associated with their low affinity for D2R.
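Because catalepsy was scored with an all-or-none criterion, the ED50 is obtained by fitting a quantal dose–response model to the fraction of responders per dose group. The sketch below uses hypothetical counts (not the recorded RPD data) to illustrate one way such a fit can be performed:

import numpy as np
from scipy.optimize import curve_fit

def quantal_logistic(dose, ed50, slope):
    # Expected fraction of animals meeting the 30 s catalepsy criterion.
    return 1.0 / (1.0 + (ed50 / dose) ** slope)

# Hypothetical RPD-like dose groups (mg/kg) and responder fractions
# (n = 10/group); illustrative values only.
dose = np.array([0.6, 1.2, 2.4, 4.8, 9.6])
frac = np.array([0.1, 0.3, 0.8, 0.9, 1.0])

(ed50, slope), _ = curve_fit(quantal_logistic, dose, frac, p0=(1.5, 2.0))
print(f"ED50 = {ed50:.2f} mg/kg")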
MK-801-induced hyperactivity tests and apomorphine-induced climbing tests of selected compounds

Schizophrenia is a heterogeneous disorder with positive symptoms (delusions, hallucinations, thought disorders), negative symptoms (anhedonia, avolition, social withdrawal, poverty of thought), and cognitive dysfunction. 57,58 However, understanding of the biological origins of schizophrenia is still limited. Increased locomotor activity in response to psychotomimetic compounds such as apomorphine or noncompetitive N-methyl-D-aspartate (NMDA) glutamate receptor antagonists is commonly used as an indication of positive symptoms in schizophrenia. 59 It is well known that, as an uncompetitive NMDA receptor antagonist, MK-801 can induce schizophrenia-like symptoms in healthy subjects and exacerbate existing psychoses in patients with schizophrenia. 60 As in past research, MK-801-induced hyperactivity tests were therefore used for relatively high-throughput evaluation of the candidate compounds in our study. 47–49 Selected compounds were tested in this model in ICR mice; beforehand, the influence of the selected compounds on spontaneous locomotion was determined to rule out confounding motor effects. Spontaneous locomotor results recorded for 15 min indicated that all tested compounds (23j, 36a, 36b and 36f) had no influence at 30 mg/kg i.p., while RPD (1 mg/kg), tested as the positive drug, showed a moderate inhibition (Fig. 5B). The procedure for the subsequent tests is shown in Fig. 5A: compounds and vehicle were administered i.p. before the MK-801 challenge. As positive drugs, RPD and CRP dose-dependently inhibited the MK-801-induced hyperactivity in our tests, and RPD was more effective than CRP. Dose-dependent tests of 23j/36a/36b/36f were performed at doses of 3, 10, and 30 mg/kg. For all the tested derivatives, the dose of 30 mg/kg was effective in inhibiting MK-801-induced hyperactivity but not spontaneous locomotion. By comparison, RPD at 1 mg/kg significantly inhibited both spontaneous locomotion and MK-801-induced hyperactivity. Administration of 10 mg/kg was effective (P < 0.001) for all derivatives, but the dose of 3 mg/kg showed no inhibitory activity for compounds 23j and 36a. Nevertheless, 36b and 36f administered i.p. at 3 mg/kg reduced the distance traveled by the mice compared with the MK-801 group (P < 0.001), suggesting the most potent inhibitory activity. The antagonism of hyperlocomotion induced by direct dopamine receptor agonists (e.g., apomorphine) is also used to evaluate antipsychotic efficacy. 61 Apomorphine-induced climbing tests determine the attenuation of apomorphine-induced climbing behavior in mice and thereby identify potential antipsychotic activity. In this test, RPD and CRP exhibited dose-dependent inhibition of the apomorphine-induced climbing behavior (Fig. 5I and J). A dose gradient three times that of RPD was set for treatment with compounds 23j and 36b, showing that both compounds could dose-dependently inhibit the apomorphine-induced hyperactivity (Fig. 5K and L). The results indicated that potent D3R antagonism could significantly decrease the effects of apomorphine linked to behavioral agitation, a positive psychotic symptom.
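According to the figure legends, the locomotor distances were compared by one-way ANOVA followed by Dunnett's test against a single reference group. A minimal sketch of that analysis on invented distance data (not the recorded values) is shown below; scipy.stats.dunnett requires SciPy >= 1.11:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Invented 60-min distances (m) for the MK-801 group and three dose groups;
# placeholders only, not the recorded data.
mk801  = rng.normal(220, 25, size=10)
dose3  = rng.normal(205, 25, size=10)
dose10 = rng.normal(150, 25, size=10)
dose30 = rng.normal(110, 25, size=10)

f_stat, p_overall = stats.f_oneway(mk801, dose3, dose10, dose30)
# Many-to-one comparisons of each dose group against the MK-801 control.
dunnett_res = stats.dunnett(dose3, dose10, dose30, control=mk801)
print(p_overall, dunnett_res.pvalue)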
Antidepressant effects of compound 36b

Given the close relationship between depression and negative symptoms in a 'non-affective' psychotic illness such as schizophrenia, 62 models of behavioral despair such as the forced swimming test (FST) and the tail suspension test (TST) have been used to evaluate negative symptom-like behavior in many reported works. 63–65 Because only a limited number of antipsychotic drugs, such as CRP, lurasidone, and quetiapine, are approved for depression, evaluating the antidepressant effects of antipsychotic compounds has particular significance. In our study, considering its more convincing safety, 36b was further selected for additional animal models of antipsychotic-drug-like activity and was administered p.o. because of its satisfactory oral bioavailability. We continued by investigating the antidepressant effects of 36b using the FST and TST. The FST is widely used to study depressive-like behaviors in rodents; in this test, the rodent's immobility time and the latency to first observed immobility provide a measure of behavioral despair. The TST is another measure of behavioral despair that is sensitive to a broad range of antidepressant drugs, which makes it a suitable screening test. In our tests, duloxetine (DLX), considered one of the most effective antidepressant drugs in clinical practice, was used as the positive control. Administration of DLX caused a robust decrease in immobility time in the FST and TST, demonstrating its strong antidepressant effect. Treatment with compound 36b dose-dependently decreased the immobility time in the TST compared with the control group (P < 0.05), although the 3 mg/kg group did not show a significant difference (Fig. 6A). For the latency to first observed immobility in the TST, only the 30 mg/kg group showed significant prolongation (Fig. 6B). Likewise, for the immobility time in the FST, the 30 mg/kg group showed activity, but the lower-dose groups did not (Fig. 6C). DLX and compound 36b failed to prolong the FST latency (Fig. 6D). The results indicated that compound 36b exhibits potential antidepressant-like activity.

Effects of compound 36b on MK-801-induced novel object recognition (NOR) disruption in mice

Cognitive deficit is considered a core feature of schizophrenia, and it lacks responsiveness to many current antipsychotic drugs. NOR is a form of visual-recognition memory dependent on animals' innate preference for investigating novel objects, which is decreased in schizophrenia patients because of their visual-recognition memory impairments. 66 NOR models have been widely used to assess procognitive effects as quick and straightforward preclinical models. The current study found that MK-801 could induce cognitive impairment, consistent with previous reports that MK-801 disrupts NOR. 67 During this study, compound 36b and RPD were administered intraperitoneally for four days before the acquisition trial. On the fifth day, 1 h after administration, the training trial was performed by recording the exploration time for the two identical objects A. Then, 1.5 h later, one of the objects A was replaced by a novel object B for the recognition trial. During the training trial, no difference was found among the vehicle, MK-801, RPD, or 36b groups in total object exploration time (Fig. 6E). Moreover, the recognition index, calculated as the percentage of novel object interaction time relative to total interaction time during the retention trial, did not differ among the groups (Fig. 6F). In the testing session (Fig.
6G), the total interaction time with objects A and B increased significantly for the MK-801 group compared with control, owing to MK-801-induced hyperactivity. As shown in Fig. 6H, the control group of mice spent more time exploring the novel object than the familiar object. In contrast, throughout the experiment, MK-801 (0.2 mg/kg, i.p.)-treated mice spent less time exploring the novel object, indicating that MK-801 treatment resulted in a cognitive deficit. The deficit in the NOR index induced by MK-801 was alleviated by 36b (30 mg/kg, i.p.) but not by RPD (1 mg/kg). These results suggested that compound 36b improved cognitive ability during the NOR tasks in mice.

Weight gain and serum prolactin

The potential adverse-effect profile was also assessed in terms of the ability of the compounds to induce weight gain and high prolactin levels. Successive p.o. administrations of compounds 23j and 36b at 30 mg/kg were performed for 30 days (once daily). The positive group was administered 1 mg/kg RPD, and the control group was treated with vehicle by the same procedure. The body weight of each mouse was recorded every day. No influence on body weight was observed for either compound compared with the control (Supporting Information Fig. S10A), and the weight-gain trends and daily behavior of the 23j and 36b administration groups were unaffected. In contrast, weight gain in the RPD group became obviously faster from 15 days after the beginning of dosing. Moreover, after 30 days of dosing, RPD raised the serum prolactin level, whereas compounds 23j and 36b resulted in no significant serum prolactin change (Fig. S10B). These results suggested that both compounds displayed a remarkable safety profile.

Conclusions

The D2/3R binding affinities and functional selectivity have been studied for decades to explore their benefits for treating schizophrenia. Highly D3R-selective antagonists have exhibited promising profiles in mood regulation and cognitive protection, with low side effects. We aimed to develop novel D3R-selective ligands with improved in vivo activity and drug properties for further development as clinical antipsychotics. Starting with virtual screens, a series of carbazole and tetrahydro-carboline derivatives was obtained through bitopic design and a scaffold-hopping strategy. A large proportion of the compounds with the carbazole scaffold exhibited excellent binding affinities for the D3R with good selectivity against D2R.
The tetrahydro-γ-carboline derivatives showed improved D3R binding affinities, whereas the tetrahydro-β-carbolines showed less potential. In the functional studies, D3R antagonism and weak D2R partial agonism were identified for the novel ligands. Among the derivatives, compounds 23j and 36b were selected as our lead compounds, showing moderate selectivity for D3R over the 5-HT1A, 5-HT1B, and 5-HT2A receptors and weak affinity for the D2, 5-HT2C, and 5-HT6 receptors. Binding mode analysis verified the feasibility of the scaffold-hopping strategy and our understanding of the bitopic binding to D3R. According to the data from the MK-801-induced hyperactivity and catalepsy tests, these new D3R-selective antagonists were all effective in inhibiting the hyperactivity without cataleptic reactions. Further, compounds 23j and 36b dose-dependently reduced apomorphine-induced hyperactivity without cataleptic reaction. Compound 36b also possessed unusual antidepressant-like activity in the FST and the TST. Moreover, compound 36b alleviated the MK-801-induced disruption of novel object recognition in mice. Furthermore, the PK/PD studies of compound 36b in mice displayed superior profiles, including moderate drug exposure and excellent brain penetration properties. In addition, compound 36b exhibited a promising safety profile in long-term drug administration, without changes in serum prolactin levels or weight gain. Taken together, the results support 36b as a suitable lead compound, and additional investigation is needed to develop 36b into a promising pharmacotherapeutic for the treatment of psychoses.

Experimental

4.1. Virtual screening and induced docking

4.1.1. Ligand-based virtual screening

Pipeline Pilot 8.5 from Accelrys was used to perform all of the similarity searches. 46 The ECFP_6 fingerprint was generated for each of the structures of (R)-PG648, RGH-188, and GSK598809, and similarities were then calculated using the Tanimoto coefficient. In this way, more than 9000 analogs were identified for further screening.

4.1.2. Structure-based virtual screening

The dopamine D3 receptor crystal structure (PDB ID: 3pbl) was selected for the in silico study. The D3R structure was prepared with the Protein Preparation Wizard module of the Schrodinger 10.2 software: bond orders were assigned, hydrogens were added, protonation states were set, all crystallographic water molecules were removed, and restrained minimization was performed. Eticlopride was then docked into the prepared model to check its suitability for the subsequent docking screen. In the next step, the more than 9000 compounds obtained from the similarity search were prepared, and their conformations generated, with the LigPrep module of the Schrodinger suite (Schrodinger, NY, USA). The energy-minimized conformations were then docked into the eticlopride binding site of the crystal structure, and three predicted binding poses were generated for each compound. According to the scores obtained with the extra-precision (XP) scoring function of the Glide module, 831 compounds (XP GScore < −9) were retained. From these, 490 compounds were selected after structural clustering into 26 clusters based on the Tanimoto coefficients computed using the ECFP_6 fingerprint. Lastly, 65 candidate compounds were purchased for further evaluation by radioligand competition binding assays.
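The similarity and clustering steps above were run in Pipeline Pilot, but the same filtering can be sketched with the open-source RDKit, where ECFP_6 corresponds to a Morgan fingerprint of radius 3. The SMILES strings and the 0.4 similarity cutoff below are illustrative placeholders, not values taken from this study:

from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from rdkit.ML.Cluster import Butina

# Placeholder SMILES standing in for (R)-PG648, RGH-188 and GSK598809.
reference_smiles = ["O=C(NCCCCN1CCN(c2cccc(Cl)c2Cl)CC1)c1ccc2ccccc2c1"]
library_smiles = ["O=C(NCCCN1CCN(c2ccccc2)CC1)c1ccccc1", "c1ccc2[nH]ccc2c1"]

def ecfp6(smiles):
    # ECFP_6 == Morgan fingerprint with radius 3.
    mol = Chem.MolFromSmiles(smiles)
    return AllChem.GetMorganFingerprintAsBitVect(mol, 3, nBits=2048)

ref_fps = [ecfp6(s) for s in reference_smiles]
lib_fps = [ecfp6(s) for s in library_smiles]

# Keep library members whose best Tanimoto similarity to any reference
# exceeds an assumed cutoff of 0.4 (the cutoff used here is not stated).
hits = [i for i, fp in enumerate(lib_fps)
        if max(DataStructs.TanimotoSimilarity(fp, r) for r in ref_fps) > 0.4]

# Butina diversity clustering of the hits on 1 - Tanimoto distances.
dists = [1.0 - DataStructs.TanimotoSimilarity(lib_fps[hits[i]], lib_fps[hits[j]])
         for i in range(1, len(hits)) for j in range(i)]
clusters = Butina.ClusterData(dists, len(hits), 0.6, isDistData=True)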
4.1.3. Induced docking

The prepared crystal structure (PDB ID: 3pbl) was used in the Induced Fit Docking module with XP precision. For ligand preparation, conformations of ZLG-25, 23j and 36b were generated and energy-minimized with the LigPrep module. Images depicting the proposed binding modes were generated using Maestro 11.5.

Molecular dynamics simulations

Following molecular docking, 200 ns MD simulations for ZLG-25 and 300 ns MD simulations for 23j and 36b were performed on the ligand–protein complexes using the Desmond software. 46 According to the membrane position loaded from the Orientations of Proteins in Membranes (OPM) database, the D3R crystal structure (PDB ID: 3pbl) was accurately set up in the phospholipid bilayer. 68 Na+ and Cl− ions were added at the physiological concentration of 0.15 mol/L to ensure the overall neutrality of the systems. Simulations were conducted with the OPLS3 force field and a TIP3P explicit solvent model. We chose a 4.8 ps recording interval, and the NPT ensemble was employed with the temperature fixed at 300 K and the pressure at 1.01 bar. The integration time step was set to 2 fs. The model systems were relaxed using the six-step default protocol implemented in Desmond and thereby prepared for production-quality simulation. Default settings were used for all other parameters. The simulation interaction diagram analysis tool was used to monitor energetics, RMSD fluctuations, hydrogen-bond distances, angles, and van der Waals interactions over the simulation trajectories.

Chemical synthesis

All commercially available reagents and solvents were used without further purification. Reactions were monitored by thin-layer chromatography (TLC) on precoated glass silica gel plates (GF254, 0.25 mm, Yantai Xinde Chemical Co., Ltd.) using a CH2Cl2/MeOH/50% aq. NH4OH system or an EtOAc/petroleum ether system. Column chromatographic purification was carried out using silica gel. Melting points were determined using an X-4 micro melting point apparatus.

The radioligand competitive binding assay for each receptor was performed as follows. Compound 23j was dissolved in 50% (v/v) DMSO at a concentration of 2 × 10^−3 mol/L; dilution to the initial working concentration of 2 × 10^−4 mol/L gave a solution containing 5% DMSO. For each receptor binding assay, total binding (TB) was determined in the presence of the radioligand. Nonspecific binding (NB) was determined in the presence of the radioligand and a competitive ligand for the related receptor, whereas compound binding (CB) was determined in the presence of the radioligand and compound 23j. Specific binding (SB) was calculated as the total binding (TB) minus the nonspecific binding (NB) at a particular concentration of radioligand. The percentage of inhibition was calculated as Eq. (1):

Percentage of inhibition (%) = [(TB − CB)/(TB − NB)] × 100  (1)

Blank binding experiments containing 0.25% (v/v) DMSO were performed; DMSO had no effect. All compounds were tested at least three times over a six-point concentration range (10^−5 to 10^−10 mol/L). IC50 values were determined by nonlinear regression analysis with fitting to the Hill equation. Ki values were calculated using the Cheng–Prusoff equation as Eq.
(2):

Ki = IC50/(1 + C/Kd)  (2)

where C represents the concentration of the hot ligand used and Kd is the receptor dissociation constant of each labeled ligand. The Ki value was derived from at least three independent experiments. For the ERK1/2 phosphorylation (HTRF) assay, the ratio was calculated as:

HTRF ratio = (665 nm/620 nm) × 10,000  (3)

All samples were tested in duplicate. The results were calculated as a percentage of control after dividing the phospho-ERK1/2 signal by the total-ERK1/2 signal. Data for each group were averaged and presented as mean ± SEM. Data were assessed for normality (Shapiro–Wilk) and homoscedasticity (Brown–Forsythe). Statistical significance was determined by one-way analysis of variance (ANOVA) with Tukey's multiple comparison post-test. Comparisons were considered statistically significant when P < 0.05.

hERG affinity

CHO cells were stably transfected with hERG cDNA and cultured in F12 medium (Gibco) supplemented with 10% (v/v) fetal bovine serum (FBS) and 0.5 mg/mL Geneticin (Invitrogen) at 37 °C in a humidified environment (5% CO2/95% air). The cells were seeded two days before reaching 70% confluency. Prior to use, the cells were washed in PBS and incubated with 5 mL Detachin (Genlantis) for 4–5 min at 37 °C to detach them from the culture dish. The harvested cells were re-suspended in F12 medium at a density of 2 million cells/mL. The cells were transferred to a QPatch instrument (Sophion Bioscience, Denmark) and allowed to recover for 20 min in the Qstir cell preparation station on the Qpatch-8 before the experiment. The tail currents of the hERG channel were evaluated using the Qpatch automated patch-clamp platform (Sophion Bioscience, Denmark). The following solutions were used during patch-clamp recording (compositions in mmol/L): internal solution: KCl 120, CaCl2 5.374, MgCl2 1.75, KOH 31.25, EGTA 10, HEPES 10, Na2ATP 4, pH 7.2 (KOH); external solution: NaCl 145, KCl 4, MgCl2 1, CaCl2 2, HEPES 10, glucose 10, pH 7.4 (NaOH). All solutions were sterile filtered. Cells were clamped at −80 mV and hyperpolarized to −100 mV to monitor the change in series resistance. The voltage protocol for the hERG ion channel started with a short (200 ms) −50 mV step to establish the baseline region. A depolarizing step was applied to the test potential of +20 mV for 2 s, and the cell was then repolarized to −50 mV to evoke outward tail currents. Currents were filtered using the internal Bessel filter of the Qpatch. Recording started in external solution. After this control period, five increasing concentrations of the test compounds were applied, each for approximately 4 min, to record a complete concentration–response curve. The last control period (saline) was used as the baseline for data normalization. Cisapride (2 μmol/L) was applied as a reference inhibitor at the end of the protocol. The sampling frequency was 2000 Hz. Data were acquired and analyzed using the PatchMaster software (HEKA). Compounds 23j, 36a, 36b and 36f were dissolved in extracellular solution to obtain a series of concentrations: 0.412, 1.23, 3.70, 11.1, 33.3, and 100 μmol/L.
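To make Eqs. (1) and (2) from the binding assay concrete, here is a tiny sketch (hypothetical helper functions and invented numbers, not code or data from the study) showing how a measured inhibition curve is reduced to a Ki:

def percent_inhibition(tb, cb, nb):
    """Eq. (1): percentage of inhibition from total (TB), compound (CB)
    and nonspecific (NB) binding counts."""
    return (tb - cb) / (tb - nb) * 100.0

def cheng_prusoff_ki(ic50, hot_ligand_conc, kd):
    """Eq. (2): Ki = IC50 / (1 + C/Kd); all concentrations in the same unit."""
    return ic50 / (1.0 + hot_ligand_conc / kd)

# Example: a fitted IC50 of 50 nmol/L measured at a radioligand
# concentration of 1 nmol/L whose Kd is 0.5 nmol/L.
print(cheng_prusoff_ki(50.0, 1.0, 0.5))  # -> ~16.7 nmol/L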
Cytotoxicity test

The cytotoxicity of the selected compounds was determined using the Cell Counting Kit-8 (CCK8) assay kit (Beyotime, Shanghai, China). Briefly, HEK293T, BV2, and PC12 cells were cultured in 96-well plates at a density of 8000 cells per well at 37 °C overnight. Then, different concentrations of the selected compounds were added to the cells and incubated for 24 h. According to the manufacturer's instructions, 10 μL of CCK8 detection reagent was added to the 100 μL of medium in each well. After 1 h, the absorbance was read at 450 nm using a Spark® Multimode Microplate Reader. Cell inhibition was calculated as the absorbance relative to a DMSO-only control, and dose–effect relationship curves were fitted to obtain IC50 values.

4.5. In vivo efficacy studies

4.5.1. Animals and compounds

Male adult C57BL/6 mice (6–8 weeks old) and ICR mice (6–8 weeks old) were obtained from Beijing Vital River Lab Animal Technology Co., Ltd. All animals were housed in cages under artificial lighting from 7:00 AM to 7:00 PM, with free access to food and water. Animals were assigned randomly to the different experimental groups, each kept in a separate cage. All experimental procedures were approved by the Peking University Institutional Animal Care and Use Committee. All compounds used for the in vivo assays were prepared as hydrochloride salts for their superior water solubility, and the final formulations were characterized by elemental analysis (Table S5).

4.5.2. Catalepsy test 32,49

Mice were injected intraperitoneally with vehicle; RPD at 0.6, 1.2, 2.4, 4.8 or 9.6 mg/kg; compound 23j at 15, 30, 60, 120 or 240 mg/kg; or 36a/b/f at 30, 60, 120, 240 or 480 mg/kg. Catalepsy was tested individually 15, 30 and 45 min after injection. The test consisted of positioning the animal with its forepaws on a wooden bar (3 cm in height) and recording how long it remained hanging onto the bar; the end point was 60 s and an all-or-none criterion was used. A mean immobility score of 30 s was used as the criterion for the presence of catalepsy. The number of mice with a positive reaction in each group was recorded. The ED50, minimum effective doses (MED), and maximal effects (percent of cataleptic animals) were calculated accordingly.

4.5.3. Spontaneous locomotor test 47,49

ICR mice (10 mice in each group) were dosed with vehicle or the selected compounds (30 mg/kg) by intraperitoneal injection. Animals were placed in Plexiglas cages for evaluation of locomotor activity. After environmental adaptation for 30 min, the total locomotor distance of each animal was recorded for 15 min and measured automatically by the spontaneous activity video analysis system. All cages were changed after every test.

4.5.4. MK-801-induced hyperactivity 47–49

ICR mice (divided into several groups, 10 mice in each group) were injected intraperitoneally with vehicle (vehicle group and MK group) or with increasing doses of the selected compounds (3, 10, 30 mg/kg). RPD and CRP were used as the positive drugs at doses of 0.1, 0.3, 1, and 3 mg/kg. After 10 min, the vehicle group was injected intraperitoneally with normal saline, while the other groups of mice were challenged with 0.3 mg/kg of MK-801 s.c. After injection, the mice were immediately placed in Plexiglas cages for evaluation of locomotor activity for 60 min. The total distance moved by the mice during the 60 min was recorded and measured automatically by the spontaneous activity video analysis system. All cages were changed after every test.
4.5.5. Apomorphine-induced climbing 47–49

ICR mice were divided into several groups with 10 mice in each group. The vehicle group and the APO group of mice were injected intraperitoneally with vehicle. The tested-compound groups were injected intraperitoneally with increasing doses of compounds 23j and 36b (0.3, 1, 3, 10, 30 mg/kg). After 30 min, the model group and the tested groups were challenged with 1.0 mg/kg of apomorphine in 0.9% NaCl + 0.1% ascorbic acid by subcutaneous injection. Immediately after the injection of apomorphine, the mice were placed in cylindrical wire cages (12 cm in diameter, 14 cm in height) and observed for climbing behavior at 10, 20 and 30 min post dose. The climbing behavior was scored as follows: four paws on the cage floor with normal activity = 0; four paws on the cage floor with an increase in activity or sniffing = 1; two paws on the mesh occasionally = 2; four paws on the mesh occasionally = 3; four paws on the mesh all the time = 4. RPD was used as the positive drug at doses of 0.1, 0.3, 1, 3 and 10 mg/kg.

4.5.6. Tail suspension test 71

ICR mice (10 mice in each group) were dosed orally with vehicle, duloxetine (15 mg/kg) or increasing doses of 36b (3, 10, 30 mg/kg) for 3 days. Thirty minutes after the last administration, the mice were suspended individually by the tail using adhesive tape (attached 2 cm from the tip of the tail) to a hook attached to a strain gauge, and the behavioral changes of the mice over 6 min were recorded by a camera system. The latency time (to the first observed immobility) was then calculated, and the duration of immobility in the last 4 min of the recorded time was measured.

4.5.7. Forced swimming test 71

ICR mice (10 mice in each group) were dosed orally with vehicle, duloxetine (15 mg/kg) or increasing doses of 36b (3, 10, 30 mg/kg) for 3 days. Thirty minutes after the last administration, each mouse was placed in a clear cylindrical tank (40 cm tall × 20 cm diameter) filled with water (30 cm deep) at 24 ± 2 °C. Mice were judged to be immobile when they floated motionlessly. Mice were forced to swim freely for 6 min while recorded by a camera system, and the latency time (to the first observed immobility) was calculated. The duration of immobility in the last 4 min was then measured during analysis. The water was changed after every test.

4.5.8. Novel object recognition training and testing 49

The first day of the experiment was the adaptation period: the mice were put into the empty box from the center of one side and allowed to adapt to the environment freely for 10 min. The second day was the learning phase. Two identical objects A were put into the box, 10 cm from the same side of the box and 40 cm apart. The mice were then put individually into the box and given 5 min to explore the area and the objects. Memory retention was tested 1.5 h after training: one of the two objects A was replaced by an object B, and the mice were again put individually into the box and given 5 min to explore. The motion trails of the mice were recorded by a computer image-processing system. The NDI was calculated by Eq. (4):

NDI (%) = (Novel object interaction time/Total interaction time) × 100  (4)

Mice were divided into 6 groups, 10 mice in each group. Five groups of mice were dosed orally with vehicle, RPD (1 mg/kg) or increasing doses of compound 36b (3, 10, 30 mg/kg) for 5 days, starting 4 days before the adaptation period. On the 5th day, the untreated group of mice and those dosed with RPD or compound 36b were administered i.p.
with MK-801 (0.2 mg/kg) 30 min before the experiment.

4.5.9. Weight gain and serum prolactin

Mice (10 mice in each group) were dosed orally with vehicle, 23j (30 mg/kg) or 36b (30 mg/kg) for 28 days. The weight of each mouse was recorded every day before intragastric administration. The mice were killed by decapitation 180 min after the last treatment. Blood samples (2 mL) were collected and centrifuged (300×g for 30 min), and the prolactin level of the resulting serum was determined with an ELISA kit (Elabscience).

4.6. In vivo metabolism studies

4.6.1. Pharmacokinetics study in mice 71

The HPLC conditions were as follows: column, Diamonsil C18 (150 mm × 4.6 mm, 5 μm, 120 Å); mobile phase, 0.1% formic acid in water/acetonitrile (Merck Company, Germany) (v/v, 0–8.0 min, 40:60); flow rate, 0.2 mL/min; column temperature, 40 °C; UV detection, 254 nm. ICR mice (n = 3/group) were dosed with compounds 23j/36b via the tail vein for i.v. administration (3 mg/kg) or by p.o. administration (10 mg/kg). After the last administration, 80 μL of orbital blood was collected at 0.25, 0.5, 1, 2, 3, 4, 5, 6, 8 and 24 h. After separation by centrifugation (18,000 rpm, 10 min), the plasma sample (30 μL) was prepared for high-pressure liquid chromatography/tandem mass spectrometry (LC-MS/MS) analysis by protein precipitation with acetonitrile (100 μL). The plasma samples were analyzed for drug and internal standard on an API 4000 Q trap mass spectrometer (Applied Biosystems, Foster City, CA, USA) coupled with a 1200 series HPLC system (Agilent, Santa Clara, CA, USA). Isocratic elution with 80% acetonitrile and 20% water containing 0.1% formic acid was used to separate the analytes. The total run time was 3 min, and the flow rate was 0.3 mL/min.

4.6.2. BBB penetration study in mice

At 0.5 and 2 h after the last administration, the mice were euthanized with CO2 gas, blood was collected from the heart immediately, and the plasma was treated in the same way as above. The remaining blood was washed out of the circulation by cardiac perfusion with physiological saline containing 10 U/mL heparin. The brain was then removed from the skull, added to three volumes of PBS buffer per unit weight, homogenized, and stored at −20 °C. The compound concentrations in the plasma and brain samples were determined via the LC-MS/MS protocol.

Statistical analysis

All values are presented as mean ± SEM or mean ± SD. For the calculation of EC50, the variable-slope model was used, Eq. (5):

Y = Bottom + (Top − Bottom)/[1 + 10^((logEC50 − X) × HillSlope)]  (5)

Differences between two groups were analyzed by two-tailed Student's t test. One-way ANOVA followed by post hoc Bonferroni's or Dunnett's multiple comparisons was used to compare more than two groups. Two-way ANOVA followed by a post hoc Bonferroni's multiple comparisons test was used for comparison of a series of data collected among groups.

Figure 1 (A) D3R-selective ligands in the clinic and in clinical trials. (B) General workflow of D3R ligand identification. (C) Chemical structures of the 3 identified hits and their binding affinities.

Figure 2 Protein–ligand contacts histogram of ZLG-25 (A) and the corresponding 2D diagram (B); the structure design strategy of this work (C).
Figure 3 Functional characterization of lead compounds at D3 dopamine receptors by ERK1/2 phosphorylation measurement. (A) Agonist dose–response curves for ERK1/2 phosphorylation mediated by hD3R and (B) ERK1/2 phosphorylation mediated by hD2R; (C) competitive antagonist dose–response curves for ERK1/2 phosphorylation mediated by hD3R and (D) ERK1/2 phosphorylation mediated by hD2R in the presence of 20 μmol/L dopamine.

Figure 4 Binding mode analysis of 23j and 36b. (A) Predicted binding modes of compound 23j with D3R and (B) predicted binding modes of 36b with D3R. Images depicting the proposed binding modes were generated using Maestro software. The protein is shown as a cartoon, and the small molecules are shown as sticks. Hydrogen bonds, π–π stacking interactions, and electrostatic interactions are depicted by yellow, purple, and blue dashed lines, respectively. Residues of D3R interacting with the ligands are depicted as green sticks. (C) Protein–ligand contacts histograms of D3R with 23j and (D) protein–ligand contacts histograms of D3R with 36b. (E) Corresponding 2D interaction diagrams of 23j and (F) corresponding 2D interaction diagrams of 36b predicted through MD simulations; a percentage indicates that the specific interaction is maintained for X% of the simulation time.

Figure 5 Evaluation of the selected compounds in animal models of antipsychotic-drug-like activity. (A) The procedure of the MK-801-induced hyperactivity tests and apomorphine-induced climbing tests. (B) Effects of RPD (1 mg/kg) and the selected compounds (30 mg/kg) on spontaneous locomotion of mice. (C–H) Effects of RPD, CRP, and the selected compounds at concentration gradients on MK-801-induced hyperlocomotion of mice. (I–L) Effects of RPD, CRP, and the selected compounds at concentration gradients on apomorphine-induced hyperlocomotion of mice. Results are expressed as the means ± SEM of the distance traveled (n = 6–10/group). Statistical evaluation was performed by one-way ANOVA followed by Dunnett's test for multiple comparisons. ##P < 0.005 and ####P < 0.0001 versus the control group; ***P < 0.001 and ****P < 0.0001 versus the vehicle group.

Figure 6 Evaluation of compound 36b in animal models of behavioral despair and cognitive deficit. (A–B) Effects of compound 36b (3, 10, 30 mg/kg) in the ICR mouse TST (n = 8–10/group). (C–D) Effects of compound 36b (3, 10, 30 mg/kg) in the ICR mouse FST (n = 8–10/group). Effects of compound 36b (3, 10, 30 mg/kg) on MK-801-induced object recognition disruption in mice (n = 8–10/group): (E) exploration times in the training trial and (F) the recognition index in exploring the same objects were scored (n = 8–10/group); (G) exploration times in the acquisition trial and (H) the novelty discrimination index (NDI) in exploring a familiar and a novel object during acquisition trials (after 24 h of training) were scored (n = 8–10/group). Results are expressed as the means ± SEM. Statistical evaluation was performed by one-way ANOVA followed by Dunnett's test for multiple comparisons. ##P < 0.005 and ####P < 0.0001 versus the control group; ***P < 0.001 and ****P < 0.0001 versus the vehicle group.

a Ki values are taken from three experiments, expressed as means ± SEM. b

Table 2 Binding affinities of compounds 22a–c and 23a–m at hD2 and hD3 receptors. a Ki values are taken from three experiments, expressed as means ± SEM. b The Ki values were not calculated because the inhibition percentages at 10 μmol/L were too low.
Table 3 Binding affinities of compounds 31a–e at hD2 and hD3 receptors.

Table 4 Binding affinities of compounds 36a–k at hD2 and hD3 receptors. a Ki values are taken from three experiments, expressed as means ± SEM. b The Ki values were not calculated because the inhibition percentages at 10 μmol/L were too low.

Table 5 Binding affinities of compounds 42a–j at hD2 and hD3 receptors. a Ki values are taken from three experiments, expressed as means ± SEM. b

Table 6 Binding affinities of compounds 23j and 36b for inhibiting radioligand binding at antischizophrenic drug targets. a Ki (nmol/L) values for the indicated compounds were determined as described in the Experimental section.

Table 7 Pharmacokinetic and brain penetration properties of compounds 23j and 36b in ICR mice (n = 3/group).

Table 8 hERG channel inhibition and catalepsy induced by RPD and the selected compounds. a b All values were tested three times. c Not tested.
2023-07-29T15:09:32.270Z
2023-07-01T00:00:00.000
{ "year": 2023, "sha1": "374bcc1657ad59ce3eafb000e8f313e51e426551", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.apsb.2023.07.024", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "103c00928cc8723a1608fcaa81e942606dc372eb", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [] }
270632925
pes2o/s2orc
v3-fos-license
Kikuchi‐Fujimoto disease following SARS‐CoV‐2 infection: A rare disease with increased incidence during the COVID‐19 pandemic?

Abstract Kikuchi‐Fujimoto Disease (KFD), also known as Kikuchi disease or Kikuchi histiocytic necrotizing lymphadenitis, is a rare and self‐limiting condition characterized by cervical lymphadenopathy and fever, primarily affecting young Asian adults. The aetiology of KFD remains unknown, although various infectious agents have been suggested as potential triggers. With the emergence of the COVID‐19 pandemic, cases of post‐COVID‐19 KFD and post‐COVID‐19 vaccine KFD have been reported. In this article, we present the first case of post‐COVID‐19 KFD in Hong Kong. A 24‐year‐old man developed fever and painful neck swelling 1 month after recovering from COVID‐19. Diagnostic evaluation, including ultrasound‐guided fine needle aspiration cytology (FNAC), confirmed the diagnosis of KFD. The patient's symptoms resolved spontaneously with supportive care. This case underscores the importance of considering KFD as a potential differential diagnosis in patients presenting with cervical lymphadenopathy and fever following COVID‐19 recovery or vaccination.

CASE REPORT

A 24-year-old obese man with a past medical history of allergic rhinitis, eczema and appendicitis presented to our unit with fever and painful neck swelling in January 2023, 1 month after he was diagnosed with COVID-19 by rapid antigen test. His COVID-19 symptoms in December 2022 had been mild and resolved without antiviral drugs, but the fever persisted, which prompted him to seek medical help. Apart from the fever, he also complained of a one-day history of painful neck swelling. He was otherwise well, with no respiratory, urinary, gastrointestinal, rheumatological or neurological complaints. Physical examination revealed enlarged, bilaterally tender cervical lymphadenopathy without hepatosplenomegaly or enlarged tonsils. His cardiovascular, respiratory, and neurological examinations were unremarkable.

Blood tests showed a raised lactate dehydrogenase level of 325 U/L (reference range ≤250 U/L) and a raised C-reactive protein level of 19 mg/L (reference range <5.0 mg/L). Other blood tests were unremarkable, including a complete blood count with peripheral smear, renal function, liver function, procalcitonin, monospot test, immunoglobulin pattern, immunoglobulin G4 level and serum protein electrophoresis. Autoimmune markers including anti-nuclear antibody, anti-extractable nuclear antigen antibody, anti-neutrophil cytoplasmic antibody, rheumatoid factor and anti-cyclic citrullinated peptide antibody levels were within normal ranges. The chest X-ray was clear, and the electrocardiogram showed sinus tachycardia of 130 beats per minute with no other abnormalities.

Empirical treatment with piperacillin/tazobactam was initiated, with no improvement in the fever or the bilateral cervical lymphadenopathy. The most prominent node was a right high cervical lymph node, measuring 1.72 cm × 0.84 cm in diameter on ultrasound. Fine needle aspiration of the right high cervical lymph node was performed on the third day after admission (the 31st day after the COVID-19 diagnosis), with a total of 3 passes made with a 22G BBraun Spinocan® Quincke needle. Histopathological examination of the lymph node aspirate showed features of histiocytic necrotizing lymphadenopathy, with proliferation of reactive large cells in a background of karyorrhectic debris and crescentic histiocytes, consistent with KFD. No fungus or acid-fast bacilli were identified (Figure 1).
Naproxen 500 mg BD, a non-steroidal anti-inflammatory drug, was initiated after the diagnosis of KFD. His fever subsided, with a reduction in the size and tenderness of the cervical lymphadenopathy. He was subsequently discharged and was followed up for 1 year without developing other autoimmune diseases or a recurrence of KFD.

There are no specific diagnostic criteria available for KFD. Histological examination is essential for the diagnosis of KFD and, more importantly, for excluding more serious conditions such as lymphoma, metastasis, or tuberculous adenitis. 34 Procedures used to obtain histological samples include excisional lymph node biopsy, fine-needle aspiration cytology (FNAC), and ultrasound-guided core biopsy. 35 In recent years, the primary diagnostic modality for KFD has been ultrasound-guided core needle biopsy, which has shown a diagnostic accuracy of 95.6%. 36 This procedure has become increasingly favoured over the previously recommended excisional biopsy for diagnostic purposes. Our case was diagnosed by ultrasound-guided FNAC of the right cervical lymph node, which showed features of histiocytic necrotizing lymphadenopathy with proliferation of reactive large cells in a background of karyorrhectic debris and crescentic histiocytes, typical of KFD. However, the literature has reported a lower diagnostic accuracy of 44.7% with FNAC, often attributed to tissue inadequacy. 36 Ultimately, if the cytologic findings from FNAC are compatible with a diagnosis of KFD, patients do not need to undergo open biopsy for confirmation. 35

KFD typically follows a self-limiting course, as demonstrated in our case, with symptoms resolving spontaneously within 6 months. 2 Supportive measures, including the use of analgesics and antipyretics, are often employed. Patients with severe disease may be treated with corticosteroids. Of the 13 reported cases of post-COVID-19 KFD, one case 9 was complicated by upper airway obstruction due to bilateral cervical lymphadenopathy and required urgent tracheostomy with neck dissection. 10-17

FIGURE 1 Histopathology slide of the fine needle aspirate of the right cervical lymph node, showing features of histiocytic necrotizing lymphadenopathy with proliferation of reactive large cells in a background of karyorrhectic debris and crescentic histiocytes.

In summary, we presented a case of KFD with a typical presentation of fever and cervical lymphadenopathy, diagnosed through ultrasound-guided FNAC 1 month after COVID-19 recovery. This case serves as a reminder to clinicians and pathologists to consider KFD as a potential differential diagnosis in patients who exhibit cervical lymphadenopathy and fever following COVID-19 recovery or COVID-19 vaccination.

AUTHOR CONTRIBUTIONS

Cheuk Cheung Derek Leung, Hiu Ching Christy Chan and Ming Chiu Chan drafted the work. Yu Hong Chan, Man Ying Ho, Chun Hoi Chen and Ching Man Ngai reviewed it critically for important intellectual content. Yiu Cheong Yeung was in charge of final approval of the version to be published.
2024-06-21T15:14:10.684Z
2024-06-01T00:00:00.000
{ "year": 2024, "sha1": "bb75912d18677fb3f8c09d9ffe746cb689054dc0", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1002/rcr2.1414", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "509262a7b2fb4fc83db302a670b2b3f88e9569b3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
212678779
pes2o/s2orc
v3-fos-license
Continuing professional development requirements for UK health professionals: a scoping review

Objectives This paper sets out to establish the numbers and titles of regulated healthcare professionals in the UK and uses a review of how continuing professional development (CPD) for health professionals is described internationally to characterise the postqualification training required of UK professions by their regulators. It compares these standards across the professions and considers them against the best practice evidence and current definitions of CPD.

Design A scoping review.

Search strategy We conducted a search of UK health and social care regulators' websites to establish a list of regulated professional titles, obtain numbers of registrants and identify documents detailing CPD policy. We searched Applied Social Sciences Index and Abstracts (ASSIA), Cumulative Index to Nursing and Allied Health Literature (CINAHL), Medline, EMCare and Scopus Life Sciences, Health Sciences, Physical Sciences and Social Sciences & Humanities databases to identify a list of common features used to describe CPD systems internationally, and these were used to organise the review of CPD requirements for each profession.

Results CPD is now mandatory for the approximately 1.5 million individuals registered to work under 32 regulated titles in the UK. Eight of the nine regulators do not mandate modes of CPD and there is little requirement to conduct interprofessional CPD. Overall, 81% of those registered are required to engage in some form of reflection on their learning, but only 35% are required to use a personal development plan, while 26% have no requirement to engage in peer-to-peer learning.

Conclusions Our review highlights the wide variation in the required characteristics of CPD being undertaken by UK health professionals and raises the possibility that CPD schemes are not fully incorporating the best practice.

I did have some suggestions for the authors to consider:

1. As a medical educationalist working in Scotland I would challenge that the NHS Long Term Plan "emphasises the role that CPD will need to play in the evolution of the UK healthcare workforce". I think the NHS Long Term Plan likely only covers NHS England and that the other three nations in the UK have other influences. Page 2, line 2. The authors need to be mindful that health has been devolved to NI, Wales and Scotland and that different strategy documents are in play there.

2. It might seem picky but I think the descriptive term 'doctors' usually means medical practitioners, but dentists and chiropractors are using this term commonly now, so I wonder if mention of this should be made. That is, the term 'doctor' is being used to describe registered medical practitioners. The GMC makes wide use of 'doctors' on their website, but perhaps a short sentence emphasising what is meant by this word would be helpful.

3. Page 2, line X in second paragraph (sorry, printer lost the numbers on the side) should be regulators' websites (insertion of apostrophe).

4. Might be easier to replace 'registration numbers' with 'registrants'.

5. Throughout the article 'multidisciplinary' is used. Do the authors mean this strictly speaking, or are they describing interprofessional learning? I think there needs to be clarity about what is being discussed here, as multi-professional learning is different from inter-professional learning and both are different from multidisciplinary.
The Centre for the Advancement of Interprofessional Education may help here: https://www.caipe.org/

the tenet of the argument in relation to each of the UK health professions compared. The authors need to proofread the article to ensure abbreviations are given in full the first time they are used, including in the title and abstract, where it may be easier for readers to have these in full and introduce the acronyms in the main body. For example, P1L3, UK; P2L11, NHS; L16, UK; L18, CPD, etc., as this would improve readability. Use the full word at the beginning of sentences, e.g., P2L43, 82%. The use of capital letters where they are unnecessary also reduces readability, e.g., P2L31, Health and Social Care; P2L52, P3L38, P12L3, P12L13, etc., Personal Development Plan or PDP. This phrase needs to consistently become an abbreviation or be used without capital letters. The use of colloquial language should be curtailed, as an international readership may not understand the terminology, e.g., P2L25 "in the light of"; P4L8, L31 "sets out"; P4L54 "in the main". The use of the first person 'we' could be altered to the third person to improve the flow of the article.

REVIEWER Craig M Campbell, Royal College of Physicians and Surgeons of Canada, Canada
REVIEW RETURNED 02-Sep-2019

GENERAL COMMENTS The manuscript attempts to contrast and compare CPD requirements across 31 regulated health professions within the United Kingdom. Most of the data appears to be abstracted from websites or documents published on websites to inform a list of common features (or dissimilarities) across systems. There was no defined research question or a series of hypotheses that drove the data collection or analysis. I would recommend the authors consider using a more formal scoping review methodology to summarize many of the findings. Otherwise, it appeared the focus was much more descriptive or explorative in nature.

Introduction - there was a very long preamble related to the challenges of delivering health services, the NHS long term plan and the general regulation of health professions within the UK. Although the link to the role of CPD in the implementation of the NHS long term plan was useful, this section should be reduced to focus on the need to understand the CPD requirements within the current NHS long term plan. The notion that this article had anything to do with "ensuring that regulators can respond faster to changing health care delivery and workforce" was a stretch!

Methods - this section did not describe the inclusion criteria (other than English language publications) and did not discuss or expand on how the authors were able to reduce from 250 papers to 48 papers that were selected for detailed data abstraction. There was no methodology described to guide this process - a scoping review strategy would have been clearly acceptable to describe the current literature and identify potential gaps that require further exploration.

Results - the key data was summarized in Table 2. Most of these categories provided very basic data such as cycle length, hours or credit requirements, accreditation requirements etc.
The two columns detailing the requirement for a personal development plan and multi-disciplinary CPD (which I inferred was akin to interprofessional continuing professional development) were the most intriguing to me and should be the primary focus for any revisions to this manuscript.

The notion of 'learning with peers', although important, is a bit more complex than formal group learning and could have been expanded to discuss small-group learning in workplaces - such as rounds, journal clubs or other interprofessional team-based activities - supported perhaps by other QI, patient safety or KT strategies.

The details of the construct or expectations of a PDP would be very helpful to the literature rather than simply having completed one. I would encourage the authors to consider a more focused comparison of the details of the PDP and whether or not the plan must be stimulated in part by practice data or feedback. The lack of any comparison on whether the educational process must include an analysis or actioning of practice-based data was an important oversight in my view. Even describing the lack of requirements would have been better than not discussing this at all. In the future I would recommend that multidisciplinary be replaced with interprofessional CPD as a specific focus.

Discussion. I was not sure why a discussion on the differences with the parliamentary brief, or how to determine the number of health professionals practicing in a specific domain, was included. It made the paper too specific to the UK to be helpful to other countries or CPD systems. Some of the statements in this section raised concerns - particularly around the "move to CPD has been driven by the suggestion that traditional CE learning activities have a limited effect of practitioner behaviours and patient outcomes". There was no reference to Cervero's synthesis of systematic reviews in JCEHP in 2015 that summarized 31 systematic reviews focused on these outcomes. Although the impact on patient outcomes is smaller than on physician behaviours, there are important impacts, depending on whether the educational activity was a response to an identified professional practice gap.

The discussion section should have focused more on the almost total lack of any implemented CPD accreditation system (with one exception), or on the need for CPD documentation to focus not just on the plan for improvement but on whether the plan resulted in improvement. There was an appropriate focus on the need to consider the intent and meaning of interprofessional CPD as one means to focus CPD more on the workplace to enhance the health outcomes experienced by patients.

Limitations - this section should expand on the limitation of drawing conclusions from documents that are populated on websites as describing minimal requirements. The authors did not attempt to understand the vision, goals or purpose of these systems in light of the requirements that have been developed to date.

VERSION 1 - AUTHOR RESPONSE

Reviewer: 1
Reviewer Name: David Cunningham

As a medical educationalist working in Scotland I would challenge that the NHS Long Term Plan "emphasises the role that CPD will need to play in the evolution of the UK healthcare workforce". I think the NHS Long Term Plan likely only covers NHS England and that the other three nations in the UK have other influences. Page 2, line 2. The authors need to be mindful that health has been devolved to NI, Wales and Scotland and that different strategy documents are in play there.
Authors' response: We apologise for this statement; it is incorrect. We have simplified the introduction and revisited all the national health strategy documents quoted to draft a broader statement about their common aims of developing the workforce, highlighting the role that education, pre- and post-qualification, will play in this. The new introduction reads:

"Across the four nations of the United Kingdom (UK), national strategy documents identify the need for health and social care systems to adapt to the challenges of delivering services in the future, with the aim of creating a more flexible, multidisciplinary workforce able to deliver new models of care with an increasing role for non-medical healthcare professions. Specific emphasis is placed on the role of education, including continuing professional development (CPD), in the evolution of this workforce, with the stated aim of expanding multi-professional credentialing to allow for expansion of professional roles across medical and non-medical professions. In the UK, standards of training for qualification and CPD for professionals are set by a range of profession-specific regulators. There are currently 12 such regulators, nine of which regulate mainly health professions, with the others regulating social care professions. These organisations are independent of government and derive their powers to regulate from primary and secondary legislation. Professionals working within the UK National Health Service (NHS) are currently expected to adhere to the standards set by their individual regulatory bodies, and this includes meeting requirements for CPD. This system of professional regulation is currently under review by the Department of Health (the branch of the government of the UK concerned with the maintenance of public health), and regulators are being asked to ensure that pre-qualification training of new staff meets the need for a more flexible workforce; however, as the NHS Long Term Plan for England states, much of the development of the existing workforce will fall to continuing education and training (CET) or CPD programmes, unique to each professional group. There have been international surveys of CPD requirements for selected healthcare professions, but there is no current analysis of these requirements for UK health professions. At a time of regulatory change, when the role of CPD in healthcare workforce evolution has been clearly highlighted, this review describes the features of CPD required of these health professionals by their regulators and considers if these requirements conform to best practice. By detailing these requirements for the whole UK healthcare workforce, we also hope to contribute to the broader understanding of how CPD systems are evolving in the UK and internationally." Page 7, line 19 to page 8, line 20.

It might seem picky but I think the descriptive term 'doctors' usually means medical practitioners, but dentists and chiropractors are using this term commonly now, so I wonder if mention of this should be made. That is, the term 'doctor' is being used to describe registered medical practitioners. The GMC makes wide use of 'doctors' on their website, but perhaps a short sentence emphasising what is meant by this word would be helpful.

Authors' response: Thank you for this. We are defining terms, so this was an important omission. We have included this sentence.
"Medical practitioners are commonly described as doctors, although it is important to note that the protected title for a medical practitioner is "doctor of medicine"32. In this article we will use the term "doctor" to mean" doctor of medicine"." Page 12 line 8-11. And we have also changed the term "Doctors" to "Doctor of medicine" in Table 1. Page 11. Page 2, line X in second paragraph (sorry printer lost the numbers on the side) should be regulators' websites (insertion of apostrophe) Page 12. The authors talk about 'taking part in multidisciplinary CPD activities, that is learning with other professional disciplines'. I think this needs to be corrected. Authors response: We have now addressed this and corrected throughout as per your comment above. Page 13, above table 3. I think it should be "such as logbook templates" instead of "such a logbook" Authors response: Noted and amended "..documentation such as logbook templates" Page 17. Line 2. Overall, it may be useful to situate the UK within the global context as other countries such as the USA, Canada and Australia have well-established CPD requirements for the registered health professions. It seems the UK could learn from the approach of these countries and mention of the main issues that are discussed from the UK perspective could be useful. Authors response: We would agree that a discussion of how the UK requirements sit in the global context would of great interest and in response to your comment we have looked at how we could draw some useful comparisons. As there is an absence of detailed recent global surveys we would need to look at regulators individually and to do such a comparison justice we feel is beyond the space we have available. We do feel that you have highlight a limitation to our work and we have added the following to our discussion of the limitations of this paper. Page 27. Line 1-3 "It would also be of great interest to place the findings of this review in a global context comparing the detail of other well established CPD systems for health professionals, especially in the requirements for peer to peer learning, interprofessional learning and the use of PDPs." The article is UK centric and further explanation of the social care regulators needs to be clarified as does which ones were excluded from the analysis as this is not clear to the reader. Authors response: We have amended the paragraph on P9, Line 21-22, to give some examples of the types of workers regulated by the excluded bodies. " The websites of the 12 health and social care regulators were identified using the Google search engine. Three of those regulators (Care Council for Wales, Northern Ireland Social Care Council and the Scottish Social Services Council) which solely regulate social care workers and professionals, such as adult home care workers and managers, childcare workers and managers, and qualified social workers and social care professionals and not healthcare professions were excluded from this analysis as they do not regulate healthcare professionals." The attrition and use of different numbers of regulators within the text without explanation is confusing. There is lengthy explanation of the tables of which there is no overt reference to Table 1. Authors response: Table 1 is referenced at the start of the results section P10, line 15-16. "This analysis identified 32 distinct healthcare professional titles. 
Table 1 details the names of the nine regulators, the professional titles they regulate, and the total number of individuals registered with each regulator in 2018/19."

The main findings could be highlighted; however, a full explanation is not required as Tables 2-4 neatly present the 'results' information.

Authors' response: We have considered this issue at length, having drafted the text with and without detailed description. On balance we feel that the tables are very dense and that the verbal description presented in the results section is a summary of the key points of interest, which we feel the reader would struggle to conclude by just examining the tables.

There is repetition within the article, including the article summary, that would benefit from rewording (P3L29-33).

Authors' response: We have redrafted the summary to hopefully offer a clearer focus on the main strengths and weaknesses of the review. "Our results show that ongoing post qualification training, termed continuing professional development (CPD) by United Kingdom (UK) healthcare regulators, is now a mandatory requirement for all regulated healthcare professionals in the UK. We define which health professions are regulated in the United Kingdom and the numbers registered under these titles. Eight out of the nine regulators do not mandate modes of CPD to be undertaken or require individual CPD activities to be pre-accredited. There is only partial adoption of potentially more effective modalities, such as peer-to-peer learning and use of personal development plans (PDPs), and very little requirement for interprofessional learning. A limitation of this review is the lack of detail about the individual CPD schemes undertaken by doctors of medicine, which uniquely for this profession are defined by medical colleges, faculties and employers. Their regulator, the General Medical Council, and the Academy of Medical Royal Colleges issue broad guidelines on the characteristics of CPD that doctors need to complete, and we have assumed that these are followed by individual schemes. By making this assumption we have been able to comment on most of their CPD characteristics, with the notable exception of the requirement for group learning." Page 4 line 11 to page 5 Line 2.

The opening sentence in the introduction (P4L8-15) mentions technology, which is not mentioned in the abstract. If it is important enough to be the first sentence, perhaps technology needs mentioning in the abstract?

Authors' response: This has now been removed as part of the redrafting of the introduction. Page 7 line 19 to Page 8 line 20.

The methods section is vague, and a flow chart explaining the search method could be useful, rather than only a couple of sentences. Where the information was sourced from is also not consistently clear, e.g. P6L56 onwards. A table or flow chart could remediate this issue without changing the structure of the information in the methods section.

Authors' response: We have included two supplementary files, both detailing the modified Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) checklist that we used. One file, "How are CPD systems characterised in the literature? A scoping review to inform a list of common features", describes this initial informative review, and the second, "A targeted search of the websites of organisations regulating health professionals in the UK", describes how the organisational sources were identified and chosen.
We have referred to these two files in the methods section as "supplementary file 1" and "supplementary file 2". P9 line 16 and Page 10 line 11 respectively.

In the discussion the use of the word regulators is vague and does not enhance comprehension of the tenet of the argument in relation to each of the UK health professions compared.

Authors' response: Our refocused introduction will hopefully define the role of the regulators more clearly. "In the UK standards of training for qualification and CPD for professionals are set by a range of profession-specific regulators. There are currently 12 such regulators, nine of which regulate mainly health professions with the others regulating social care professions. These organisations are independent of government and derive their powers to regulate from primary and secondary legislation. Professionals working within the UK National Health Service (NHS) are currently expected to adhere to the standards set by their individual regulatory bodies and this includes meeting requirements for CPD." Page 8 Line 1 to 7. Table 1 on pages 10-11 hopefully makes clear the names of these regulators and the professions they regulate.

The authors need to proofread the article to ensure abbreviations are given in full the first time they are used, including in the title and abstract, where it may be easier for readers to have these in full and to introduce the acronyms in the main body. For example P1L3, UK; P2L11, NHS; L16, UK; L18, CPD, etc., as this would improve readability.

Authors' response: We have revised the title to include the full terms. Page 1 Line 1 to 2. We have used the full term "United Kingdom" in the abstract but have introduced the acronym "CPD" after first usage as it occurs so often, and we feel it will aid the reader. Page 3 Line 5 to Page 4 line 7. In the article summary we have used the full term on first usage for "UK", "CPD" and "PDPs" and then introduced the acronyms. Page 5 line 6 to line 20.

Use the full word at the beginning of sentences, e.g. P2L43, 82%. The use of capital letters where they are unnecessary also reduces readability, e.g. P2L31, Health and Social Care; P2L52, P3L38, P12L3, P12L13 etc., Personal Development Plan or PDP. This phrase needs to consistently become an abbreviation or be used without capital letters.

Authors' response: The sentence previously beginning with a number has been altered to read: "Overall 82% of those registered are required to engage in some form of reflection on their learning but only 35% are required to use a personal development plan to reflect on future learning needs while 26% have no requirement to engage in peer-to-peer learning." Page 3 line 23 to page 4 line 1-3. We have reviewed the text and removed capitals where they were used in the case of health and social care and in numerous instances of personal development plans.

The use of colloquial language needs to be curtailed as an international readership may not understand the terminology, e.g. P2L25 "in the light of"; P4L8, L31 "sets out"; P4L54 "in the main". The use of the first person 'we' could be altered to the third person to improve the flow of the article.

Authors' response: Noted with thanks.
The sentence containing "in the light of" has been changed to read "This review sets out to establish the numbers and titles of professionals regulated in the United Kingdom and to identify the characteristics of their post qualification training, comparing these standards across the professions and considering them against best practice evidence and current definitions of continuing professional development (CPD)." Page 3, line 7 to 11.

The sentence containing "sets out" has been changed to read "Across the four nations of the United Kingdom (UK) national strategy documents identify the need..." Page 7, line 19.

The sentence containing "in the main" has been changed to read "There are currently 12 such regulators, nine of which regulate mainly health professions with the others regulating social care professions." Page 8, line 2.

We have removed the use of "we" in every section bar the discussion, where we feel it conveys the more subjective nature of our discussion points.

Reviewer: 3
Reviewer Name: Craig M Campbell

Introduction - there was a very long preamble related to the challenges of delivering health services, the NHS Long Term Plan and the general regulation of health professions within the UK. Although the link to the role of CPD in the implementation of the NHS Long Term Plan was useful, this section should be reduced to focus on the need to understand the CPD requirements within the current NHS Long Term Plan. The notion that this article had anything to do with "ensuring that regulators can respond faster to changing health care delivery and workforce" was a stretch!

Authors' response: We have rewritten the introduction to hopefully make clear our main aim, which is to scrutinise the CPD requirements given the key role that health planners have for CPD in the evolution of the healthcare workforce. We have removed mention of the challenges of delivering health care but have included a much-edited summary of the role of regulators, given the potentially wide readership and other reviewer comments about the need to clarify this for the international audience.

"Across the four nations of the United Kingdom (UK) national strategy documents identify the need for health and social care systems to adapt to the challenges of delivering services in the future, with the aim of creating a more flexible, multidisciplinary workforce able to deliver new models of care with an increasing role for non-medical healthcare professions. Specific emphasis is made on the role of education, including continuing professional development (CPD), in the evolution of this workforce, with the stated aim of expanding multi-professional credentialing to allow for expansion of professional roles across medical and non-medical professions. In the UK standards of training for qualification and CPD for professionals are set by a range of profession-specific regulators. There are currently 12 such regulators, nine of which regulate mainly health professions with the others regulating social care professions. These organisations are independent of government and derive their powers to regulate from primary and secondary legislation. Professionals working within the UK National Health Service (NHS) are currently expected to adhere to the standards set by their individual regulatory bodies and this includes meeting requirements for CPD.
This system of professional regulation is currently under review by the Department of Health (the branch of the government of the UK concerned with the maintenance of public health) and regulators are being asked to ensure that pre-qualification training of new staff meets the need for a more flexible workforce; however, as the NHS Long Term Plan for England states, much of the development of the existing workforce will fall to continuing education and training (CET) or CPD programmes, unique to each professional group. There have been international surveys of CPD requirements for selected healthcare professions but there is no current analysis of these requirements for UK health professions. At a time of regulatory change, when the role of CPD in healthcare workforce evolution has been clearly highlighted, this review describes the features of CPD required of these health professionals by their regulators and considers if these requirements conform to best practice. By detailing these requirements for the whole UK healthcare workforce, we also hope to contribute to the broader understanding of how CPD systems are evolving in the UK and internationally." Page 7 line 19 to page 8 line 20.

Methods - this section did not describe the inclusion criteria (other than English language publications) and did not discuss or expand on how the authors were able to reduce from 250 papers to the 48 papers that were selected for detailed data abstraction. There was no methodology described to guide this process - a scoping review strategy would have been clearly acceptable to describe the current literature and identify potential gaps that require further exploration.

Authors' response: We have included two supplementary files, both detailing the modified Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) checklist that we used. One file, "How are CPD systems characterised in the literature? A scoping review to inform a list of common features", describes this initial informative review, and the second, "A targeted search of the websites of organisations regulating health professionals in the UK", describes how the organisational sources were identified and chosen. We have referred to these two files in the methods section as "supplementary file 1" and "supplementary file 2". P9 line 1 and Page 9 line 11.

Results - the key data were summarized in Table 2. Most of these categories provided very basic data such as cycle length, hours or credit requirements, accreditation requirements, etc. The two columns detailing the requirement for a personal development plan and multi-disciplinary CPD (which I inferred was akin to interprofessional continuing professional development) were the most intriguing to me and should be the primary focus for any revisions to this manuscript.

Authors' response: We have added a new table detailing PDPs, discussed in more detail in our response below, and we have used the term interprofessional throughout.

The notion of 'learning with peers', although important, is a bit more complex than formal group learning and could have been expanded to discuss small group learning in workplaces - such as rounds, journal clubs or other interprofessional team-based activities - supported perhaps by other QI, patient safety or KT strategies.

Authors' response: We would agree that learning with peers is a topic in itself, and we have hopefully highlighted the very basic nature of what is required of practitioners by their regulators.
We have expanded the following sentence to highlight a wider definition and the potential of peer-learning methods, but we feel space limits a fuller discussion. "It has been suggested that modes of training that involve group or peer learning are more effective at influencing practitioner behaviour and this type of learning can encompass a wide range of activities beyond the lecture room, such as learning with peers in the workplace." Page 24, line 12 to 14.

The details of the construct or expectations of a PDP would be very helpful to the literature, rather than simply having completed one. I would encourage the authors to consider a more focused comparison of the details of the PDP and whether or not the plan must be stimulated in part by practice data or feedback. The lack of any comparison on whether the educational process must include an analysis or actioning of practice-based data was an important oversight in my view. Even describing the lack of requirements would have been better than not discussing this at all.

Authors' response: Thank you for this suggestion. We have looked at the documented requirements for PDPs where they are asked for and set these out in a new table, with an explanation and discussion in the results and discussion sections respectively. We have found that PDP use is almost all self-directed and self-evaluated, and consequently descriptions are vague. We have attempted to show some absolutes: was there pre-planned learning, was it documented, and did they reflect on it meeting the need. "Where a PDP is required the expectations vary considerably and are detailed in table 3, which shows where pre-planned learning was required, how this was informed, if it was documented and if, on completion, there was reflection on the learning meeting the need. In most cases learning goals are informed and set by the learner only, with the exception of the General Medical Council, which uses the PDP as part of its reaccreditation procedure, requiring input into learning goals from an appraiser, quality data, significant events data and patient feedback. It should be noted that many of the regulators suggest that a PDP is informed by a variety of sources, but they do not require this to be documented. All require a documented action plan for all, or part of, the planned learning but the degree of reflection on the plan on completion is variable." Page 15, line 20 to Page 16, line 2. "Further to this our analysis shows that the use of PDPs, recommended within UK health services for some time, is not universal and that when used they are mostly self-directed and self-evaluated. Only in medicine is the PDP informed by objective data and evaluated by an appraiser. Even though the inclusion of PDPs in formal appraisal has been recommended for allied health professionals in some extended roles, we would suggest that most non-medical health professions are not fully utilising the potential of PDPs as defined by accepted definitions." Page 24, line 4 to 9.

In the future I would recommend that multidisciplinary be replaced with interprofessional CPD as a specific focus.

Authors' response: We have now used the term interprofessional learning throughout, as suggested by another reviewer.

Discussion. I was not sure why a discussion on the differences with the parliamentary brief, or on how to determine the number of health professionals practising in a specific domain, was included. It made the paper too specific to the UK to be helpful to other countries or CPD systems.

Authors' response:
We have now removed the paragraph on page 21 that explains the differences with the parliamentary brief and the section on page 22 that discusses the complexities of determining the total number of health professionals.

Some of the statements in this section raised concerns - particularly around the "move to CPD has been driven by the suggestion that traditional CE learning activities have a limited effect on practitioner behaviours and patient outcomes". There was no reference to Cervero's synthesis of systematic reviews in JCEHP in 2015 that summarized 31 systematic reviews focused on these outcomes. Although the impact on patient outcomes is smaller than on physician behaviours, there are important impacts, depending on whether the educational activity was a response to an identified professional practice gap.

Authors' response: Thank you for highlighting this important review that we had missed. In our discussion we are not aiming to do full justice to the evolving debate around the effectiveness of CE/CPD, but rather to give the reader a sense that post qualification training is now more often described as CPD, with some sense of the aspirations of the CPD approach that are driving this. We have amended the following sentence, which hopefully restores some balance. "This move to CPD has been driven by the suggestion that the positive effects of CE on practitioner behaviours and patient outcomes can be improved upon using the broader scope of CPD, although what constitutes effective CPD is still very much in question." Page 23, line 17 to 21.

The discussion section should have focused more on the almost total lack of any implemented CPD accreditation system (with one exception), or on the need for CPD documentation to focus not just on the plan for improvement but on whether the plan resulted in improvement.

Authors' response: We have redrafted the article summary and part of the discussion to emphasise the lack of CPD accreditation and the partial uptake of PDPs. "Our results show that ongoing post qualification training, termed continuing professional development (CPD) by United Kingdom (UK) healthcare regulators, is now a mandatory requirement for all regulated healthcare professionals in the UK. We define which health professions are regulated in the United Kingdom and the numbers registered under these titles. Eight out of the nine regulators do not mandate modes of CPD to be undertaken or require individual CPD activities to be pre-accredited. There is only partial adoption of potentially more effective modalities, such as peer-to-peer learning and use of personal development plans (PDPs), and very little requirement for interprofessional learning. A limitation of this review is the lack of detail about the individual CPD schemes undertaken by doctors of medicine, which uniquely for this profession are defined by medical colleges, faculties and employers. Their regulator, the General Medical Council, and the Academy of Medical Royal Colleges issue broad guidelines on the characteristics of CPD that doctors need to complete, and we have assumed that these are followed by individual schemes. By making this assumption we have been able to comment on most of their CPD characteristics, with the notable exception of the requirement for group learning." Page 5 line 6 to line 20.
"There is even less uniformity in the actual modes of CPD required, group or otherwise, with only one regulator, the General Optical Council setting out detailed guidelines for what is acceptable as CPD and then accrediting each activity before it happens. Individual colleges of medicine or faculties may accredit CPD, but this is not done by the regulator, the General Medical Council. The other seven regulators only suggest the modes of learning that are acceptable, and the onus is then on the registrant to ensure the CPD activity is of adequate quality and relevant to their learning needs. These regulators have therefore, only a limited insight and influence on the content, design and quality of the CPD being undertaken by their registrants basing their scrutiny of activities on the learners' records after the fact, as part of their verification processes. Given that the evidence base for what constitutes effective CPD is still developing it is possibly understandable that specified modes of CPD are not yet mandated by the regulators and that the considerable organisational challenge of accrediting all CPD activities before they occur has only been undertaken by one regulator." Page 24, line 18 to Page 25, line 5. There was an appropriate focus on the need to focus on the intent and meaning of inter-professional CPD as one means to focus CPD more on the work place to enhance the health outcomes experienced by patients. Authors response: Noted with thanks. Limitations -this section should expand on the limitation of drawing conclusions from documents that are populated on websites as describing minimal requirements. The authors did not attempt to understand the vision, goals or purpose of these systems in light of the requirements that have been developed to date. Authors response: We have added this statement which hopefully highlights that we are only considering minimum requirements for CPD. "A limitation of this review is that it only considers the mandatory minimum requirements on professionals for completion of CPD as a requirement of their registration. All the regulators provide a wealth of information and advice on the role of CPD and best practice, a detailed consideration of which would be valuable but beyond the scope of this paper." Page 26, line 11 to 14. GENERAL COMMENTS Thank you for the opportunity to review the revised manuscript. The authors have addressed the suggestions by the reviewers. However, in making changes, other minor issues are created that need remediation. There are a number of sentences that are long and complex that need to be simplified. GENERAL COMMENTS This paper has been extensively re-written to address a number of issues and concerns that were identified in a prior peer review. The methodology used is now clearly a scoping review to identify the common elements of national CPD systems in the United Kingdom across all regulated health professions. The supplementary files identified the search terms used for all databases, the articles that were selected to inform the search terms or stated characteristics of a CPD system and then the use of the PRISMA-check list to ensure that the data from websites or documents describing the characteristics of the CPD systems was complete. The authors appropriately limited the search to articles published after 1990 as the state of CPD systems has shifted dramatically, particularly over the past 15 years. 
Six of the regulators have revised their CPD systems since 2013, with very different outcomes in relation to the need for healthcare professionals to keep a PDP, to engage in interprofessional CPD and to engage in learning outside of group learning activities. The reference list was substantially updated to include references that should guide the future development of CPD in the United Kingdom. What I appreciated about this research was the focus on specific characteristics of a CPD system rather than on the specific numbers of credits that are so particular to a system. This is helpful to other systems doing a comparison of their own requirements.

Suggestions for revision.

1. Although it would have been helpful to know more about the role for assessment or the use of practice data to drive learning and continuous improvement of practice, this was partially addressed in the discussions on who sets the goals for the PDP and what data should be considered in setting a practice-specific learning plan. Even though the checklist used did not abstract this data from the sites reviewed, a comment in the discussion would be helpful on this point.

2. The data suggest or imply that at least half of the requirements can be completed within practice or one's workplace. It was not clear to me whether the requirements for group learning were focused only on external conferences or courses, or whether these would include workplace-based peer-to-peer learning such as rounds, journal clubs or other small group sessions that are typically part of regularly scheduled series. Clarification in the table or in the discussion would be important.

3. There was limited emphasis on the development and maintenance of a formal CPD accreditation system (activity- or provider-based systems). This was not mentioned adequately enough in the discussion section and deserves greater emphasis, particularly given the potential future role for joint accreditation of team-based CPD in support of IPE.

4. The lack of emphasis on interprofessional education as a requirement for learning and improvement was (in my view) one of the most important findings of this scoping review. What types of activities would constitute IPE or reflect interprofessional collaborative practice would be worth commenting on in the discussion. These terms are often used without a definition or without a clear conceptual understanding.

Finally, given that scoping reviews are summaries of 'what is', there is an opportunity to identify gaps in the literature that would be helpful for future research or development initiatives. I would ask the authors to further develop the discussion section to describe these gaps and what further research is required to address those gaps. For example, is the limited focus on IPE a lack of conceptual understanding, or the lack of an accreditation system common to all health professions? What is the role for patients in a future CPD system, and should such systems be designed to address their needs? How can CPD - as a component of implementation science - be best integrated with patient safety, quality improvement or knowledge translation strategies or programs within the workplace? What is the role for simulation in learning new things or applying new skills or procedures prior to performing these in practice?
Being a bit more directive about the focus for future research, and calling for systems to address these gaps, would be helpful to the literature in general.

Reviewer Name: Carey Mather

There are a number of sentences that are long and complex that need to be simplified. These include: P4L17 onwards; P6L2 onwards; P8L7 onwards; P22L3 onwards. The conclusion is three long sentences.

Authors' response: We have made the following revisions to reduce the sentence lengths.

Page 5 Line 4-9. "This system of professional regulation is currently under review by the Department of Health [9] (the branch of the UK government concerned with the maintenance of public health). Regulators are being asked to ensure that pre-qualification training of new staff meets the need for a more flexible workforce. As stated in the NHS Long Term Plan for England, much of the development of the existing workforce will fall to continuing education and training (CET) or CPD programmes, unique to each professional group."

Page 6 Line 13-17. "Three of those regulators (Care Council for Wales, Northern Ireland Social Care Council and the Scottish Social Services Council), which solely regulate social care professionals, such as adult home care workers, childcare workers, and qualified social workers, were excluded from this analysis as they do not regulate healthcare professionals."

Page 9 Line 1-5. "In the case of Prosthetists and Orthotists the situation is more ambiguous as the titles listed describe two distinct roles [29]. The undergraduate training for these roles is the same and an individual holding the qualification can carry out both regulated functions: making alterations to CE-marked prostheses and making alterations to CE-marked orthoses. The two titles were therefore counted as one profession for the purpose of this analysis."

The four sentences of the conclusion now read as follows. Page 24 Line 16-22. "In 2019 there were 32 distinct healthcare professional titles regulated by nine statutory regulators. CPD is now a mandatory verified requirement for all of these professions, but there is considerable variation in the characteristics of the CPD required of them, with only one regulator accrediting CPD activities. There is only partial adoption of potentially more effective modalities, such as peer-to-peer learning and use of PDPs, and very little requirement for interprofessional education. Reflection on learning undertaken is commonplace, but reflection on future learning needs, a defining feature of CPD, is not a requirement for most UK health professionals."

P10L35: the sentence ending 'this' needs clarifying. What is this? Clarification of the use of Z and S for this journal needs to be consistent, e.g. P6L6 'scrutinized'. Exchange the word 'look' throughout the text for a more appropriate word.

Authors' response: We have amended the following: Page 11 Line 15, "but we cannot confirm this is the case." Page 19 Line 20, "In contrast to CE, CPD has a much broader ambition of developing a wider range of skills beyond those core skills needed for continuing practice, aiming to develop the individual across their whole career." Page 19 Line 22-23, "requires the practitioner to consider engaging in structured learning activities beyond those aimed at just addressing specific learning needs". Page 6 Line 2-3, "Only the title was searched, using Boolean operators AND and OR combined with truncation and phrase searches."

Minor editing, i.e. P4L6 'emphasises'?; P4L13 'Government'.
If information is important it needs to be included in the body of the paragraph rather than in brackets. If it does not add to the meaning, the words need to be deleted (P4L17-18). P10L7, is the mean 27 hours? Please indicate/clarify.

Authors' response: We have amended the following: Page 4 Line 17, "emphasis". Page 4 Line 24, "independent of Government and derive their powers to regulate from primary and secondary". Page 5 Line 4-5, "This system of professional regulation is currently under review by the Department of Health [9] (the branch of the UK government concerned with the maintenance of public health) and regulators…" We feel that the use of the bracket is justified in this case as it contains explanatory information helpful in understanding the sentence. Page 11 Line 7, "from 11.7 hours to 50 hours (mean 27 hours)."

P10L9 onwards, removal of the extraneous 'the' in the list of regulators will improve readability. The tables and supplementary files are useful. Do the tables related to regulators require 'The' in each heading?

Authors' response: We have edited Page 11 Line 8-12: "Learning with peers is required by five regulators: General Chiropractic Council, General Osteopathic Council, Nursing and Midwifery Council, General Optical Council and the General Pharmaceutical Council. The Health and Care Professions Council, Pharmaceutical Society of Northern Ireland and General Dental Council do not yet require group or peer learning, but they do suggest it as a type of CPD activity." Table 1 page 7-8, Table 2 page 10, Table 3 page 15, and Table 5 page 17. We have removed "The" from each regulator in the tables where they occur.

P20L6: does a specific referenced 'government white paper' need capitalised letters (GWP)?

Authors' response: As far as we can tell, White Paper does need to be capitalised when referred to as the subject, e.g. "The proposed regulatory framework, published in a White Paper today, will impose a statutory…". We have changed it to read "government White Paper." Page 18 Line 24. I think government takes a lower-case g in this situation. I was guided in this by the following excerpt about the use of capitals written by the Plain English Campaign: "Government: If we are referring specifically to 'the Government' (for example, 'when the Government decides its policy'), we would use a capital 'G'. However, if we are referring to government in general (for example, 'national and local government'), or as an adjective (for example, 'many government departments'), we would use a lower case 'g'." https://www.plainenglish.co.uk/files/capitalletters.pdf

Reviewer Name: Craig M Campbell

Although it would have been helpful to know more about the role for assessment or the use of practice data to drive learning and continuous improvement of practice, this was partially addressed in the discussions on who sets the goals for the PDP and what data should be considered in setting a practice-specific learning plan. Even though the checklist used did not abstract this data from the sites reviewed, a comment in the discussion would be helpful on this point.

Authors' response: The professions outside of medicine have either not moved to a recognisable PDP model or are asking for elements of a PDP on a self-regulating basis. The detail offered in their guidance is vague, and we have abstracted what we can definitively say about requirements. There may be more engagement with PDPs by individuals, but establishing that would require a detailed look beyond the regulator requirements, which we have stressed are a minimum.
We have amended the following sentence to recommend the role of objective data in driving learning. Page 20 Line 15-16: "Only in medicine is it a requirement that the PDP is informed by objective practice data and evaluated by an appraiser, a model other professions might consider moving towards to help drive learning and improve practice."

The data suggest or imply that at least half of the requirements can be completed within practice or one's workplace. It was not clear to me whether the requirements for group learning were focused only on external conferences or courses, or whether these would include workplace-based peer-to-peer learning such as rounds, journal clubs or other small group sessions that are typically part of regularly scheduled series. Clarification in the table or in the discussion would be important.

Authors' response: Again, the requirements for group learning are very open, requiring just learning "with others". To help highlight this we have added the following sentence. Page 21 line 4-6: "The assumption can be made that these activities could occur in the workplace in the form of, for example, small group sessions or rounds, or through attendance at external events such as conferences."

There was limited emphasis on the development and maintenance of a formal CPD accreditation system (activity- or provider-based systems). This was not mentioned adequately enough in the discussion section and deserves greater emphasis, particularly given the potential future role for joint accreditation of team-based CPD in support of IPE.

Authors' response: We have amended the following paragraph. Page 21 Line 17-23: "Given that the evidence base for what constitutes effective CPD is still developing [96,97], it is possibly understandable that specified modes of CPD are not yet mandated by the regulators, but as our review shows CPD is now a mandatory component of revalidation and is likely to become central to future accreditation of multidisciplinary teams [81]. Consequently, the considerable organisational challenge of accrediting all CPD activities before they occur may become a necessity if content and quality are to be assured."

The lack of emphasis on interprofessional education as a requirement for learning and improvement was (in my view) one of the most important findings of this scoping review. What types of activities would constitute IPE or reflect interprofessional collaborative practice would be worth commenting on in the discussion. These terms are often used without a definition or without a clear conceptual understanding.

Authors' response: We have added the following to hopefully characterise IPE for the reader and to highlight that this evidence base is not being incorporated. Page 22 line 24 to Page 23 line 6: "A 2016 review [100] clearly defined IPE and considered its effects across a wide range of activities: class-based courses, simulation, clinical settings and online learning environments. It summarised the evidence of a positive effect of IPE on learner attitudes/perceptions as well as collaborative knowledge/skills, and suggested potential benefits in collaborative behaviours and service improvement. The lack of required interprofessional learning we have identified means that current CPD may not be incorporating this growing evidence base or contributing to the development of multidisciplinary working and integrated care."
" I would ask the authors to further develop the discussion section to describe these gaps and what further research is required to address those gaps. For example, is the limited focus on IPE a lack of conceptual understanding; the lack of an accreditation system common to all health professions? What is the role for patients in a future CPD system and should such systems be designed to address their needs? How can CPD -as a component of implementation science -be best integrated with patient safety, quality improvement or knowledge transition strategies or programs within the workplace? What is the role for simulation in learning new things or applying new skills or procedures prior to performing these in practice? questions. The literature search -despite the extensive search strategies developed for multiple databases did not include several descriptions of CPD systems in the USA and Canada -which may have been informative to the inclusion of or use of practice data or other assessment options (of competence, performance or health outcomes) to guide learning and the role for feedback in framing future learning plans. This issue was address in part in the Discussion section but was not an explicit part of the data abstraction process. That said, this review has a number of strengths in documenting variation across multiple health professions within the United Kingdom. The authors have responded to previous feedback in this latest version of the manuscript including an expansion of the text and references related to interprofessional health education and the role for team-based learning and improvement for enhancing patient care. Although I am not convinced that the authors used a scoping review methodology to drive the data abstraction process -as the details of how many steps in the scoping review process were not detailed -the authors have compiled some helpful data that should enable reflection on the role of various educational strategies within systems of CPD. I would ask the authors to: 1. Supplement their overall objective with a set of research questions related to the key elements of table 1 2. Develop a paragraph or two on what is missing from the literature -the gaps they identified that could serve as the focus for future research. 3. Be more explicit on how the scoping review methodology was used to define the key elements of international CPD systems that was then utilized to compare alignment with UK health professions. I do think there is enough value in this paper for publication. I am simply attempting to address the scientific basis for the authors conclusions in the hope that this will be helpful to them in future research initiatives VERSION 3 -AUTHOR RESPONSE We would like to say how much we appreciate the time taken by the reviewer to provide these further comments and for his support in developing this manuscript. We have addressed the requests made by the reviewer point by point below. We provide clean and tracked changes versions of the revised manuscript. Our responses below are prefaced by "Authors' Response" and shown in blue to distinguish from the Reviewer's comments. Line numbers we mention in responses refer to manuscript version with tracked changes. 1. Supplement their overall objective with a set of research questions related to the key elements of table 1 Author response: We have redrafted the objectives as set out in the abstract. 
"This paper sets out to establish the numbers and titles of regulated healthcare professionals in the United Kingdom and uses a review of how CPD for health professionals is described internationally to characterise the post qualification training required of UK professions by their regulators. It compares these standards across the professions and considers them against best practice evidence and current definitions of continuing professional development (CPD)." Page 2 Line 3-7 We have amended the methods section to include a research question for the review of CPD characteristics. "The published literature was consulted using a modified Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) checklist to answer the question "What are the characteristics of post qualification training systems for health care professionals as described by the literature?"." Page 5 Line 16-18 2. Develop a paragraph or two on what is missing from the literature -the gaps they identified that could serve as the focus for future research. Author response: The primary purpose of this review was to identify the CPD requirements made by the professional bodies and in discussing these we have identified some areas that we feel would benefit form more investigation, such as the use of IPE. We have modified the following paragraph to make these suggestions more explicit. "This review indicated there are significant areas where there are gaps in the research. Further investigation into the adoption and effects of IPE across the UK health system is needed if the current policy aspirations for the development of multidisciplinary team working are to be informed by evidence. As current CPD requirements evolve, research is needed to inform regulators on how planned learning can be integrated into evolving systems for patient safety, workplace learning, quality improvement and multi credentialing. The challenge of understanding what constitutes effective CPD from the patient, practitioner and health system perspective will need to acknowledge the planned digital future for the NHS workforce103 where simulation and virtual learning environments will become more common and new skills will be needed to work in more technologically enabled services." Page 22 Line 5-6 3. Be more explicit on how the scoping review methodology was used to define the key elements of international CPD systems that was then utilized to compare alignment with UK health professions. Author response: We have added a research question into the objectives of Supplementary File 1. We provided the information as supplementary files on the suggestion of the editor in the first set of comments dated 5/11/19. "Objectives The objective of this scoping review was to answer the question "What are the characteristics of post qualification training systems for health care professionals as described by the literature". The findings were used to develop a list of common features used when describing these CPD systems which can be applied to our characterisation of UK CPD requirements." Page 1 (Supp. File 1) Line 9-12 We have expanded the Synthesis section of Supplementary File 1 to detail the abstraction of common features and synthesis of them with the authors own understanding to compile the final list of CPD characteristics used to organise the findings of the review of regulatory requirements. "Synthesis of evidence. The following list of characteristics were abstracted from the sources of evidence. 
- CPD requirement made
- Types of providers
- Period of CPD cycle
- Details of finance
- Time requirements
- Barriers to participation
- Modes of CPD
- Levels of participation
- Verification methods
- Content of CPD
- Interprofessional learning
- Use of Personal Development Plans

The authors used these characteristics of CPD systems to inform their own understanding of how CPD systems are characterised. A list of common features was compiled that we envisaged would be applicable to the regulatory data sources we would be interrogating. Descriptors related to participation in CPD systems, such as discussions of barriers and levels of participation, were excluded. Information on how CPD systems are financed was considered beyond the scope of this review. The following list of common features was used to organise the findings of the UK CPD requirements; features added by the authors are highlighted in italics.

- Regulator
- Term used
- Completion of CPD required for registration
- Date current scheme adopted
- Length of CPD cycle (years)
- Total time requirement
- Group (peer-to-peer) learning requirement
- Modes of CPD required or suggested
- CPD accredited by the regulator
- Reflection required
- Personal development plan required
- Interprofessional CPD required

In addition, the recording and verification of CPD by regulators was to be analysed under the following headings:

- Regulator
- CPD log submitted by all
- Online record of CE/CPD offered by regulator
- CPD record verification process
- Verification/audit of CPD record

Where a PDP was required, the following information was also presented:

- Who sets the learning goals and how are they informed? Learner only, or with mandatory input from third parties: facilitators, appraisers, tutors, or colleagues
- Is there a documented CPD action plan?
- Is there a required reflection on the planned CPD meeting the learning need?" Page 11 Line 1 to Page 13 Line 1 of supplementary file 1
2020-03-12T10:25:06.876Z
2020-03-01T00:00:00.000
{ "year": 2020, "sha1": "697a74a3af24d031e73b6002b0d9f06ae502af2a", "oa_license": "CCBYNC", "oa_url": "https://bmjopen.bmj.com/content/bmjopen/10/3/e032781.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7b1e85c668ce4b1cae94ce2c7f3a21a8d18e84dc", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
229939071
pes2o/s2orc
v3-fos-license
MUC15 Promotes Osteosarcoma Cell Proliferation, Migration and Invasion through Livin, MMP-2/MMP-9 and the Wnt/β-Catenin Signaling Pathway

Objective: To investigate how high expression of MUC15 promotes proliferation, migration and invasion in osteosarcoma (OS) cells, and its potential mechanism. Methods: The expression of MUC15 in OS patients was analyzed from GEO Datasets, tumor cell lines and clinical samples. The roles of MUC15 in OS were explored by CCK-8, flow cytometry, transwell and western blot assays, respectively. Results: MUC15 was highly expressed in osteosarcoma, and its expression correlated negatively with prognosis. Knockdown of MUC15 in HOS and U-2OS cells promoted tumor cell apoptosis, down-regulated the expression of MMP-2/9, reduced epithelial-mesenchymal transition (EMT) and silenced the Wnt/β-Catenin signaling pathway. Conclusion: High expression of MUC15 promotes the proliferation, migration and invasion of osteosarcoma by resisting apoptosis, increasing invasive ability through EMT, and activating the Wnt/β-Catenin signaling pathway.

Introduction

Osteosarcoma (OS) is the most common primary bone malignancy, with the highest incidence in children and adolescents. As a bone tumor of high malignancy and poor prognosis, it seriously threatens the physical and mental health of patients [1]. With a better understanding of the biological characteristics of bone tumors, improvements in surgical techniques, the establishment of a clinical staging system, the use of neoadjuvant chemotherapy and the development of imaging technology in the treatment process, the 5-year survival rate of OS has increased from 20% to 65%-70% [2,3]. However, in the past two decades, there has been no further improvement in survival. Among newly diagnosed patients with OS, about 20% have distant metastasis, and 90% of these are pulmonary metastases. Once OS has metastasized or recurred, even with neoadjuvant chemotherapy, the 5-year survival rate of these patients is only 20% to 30% [4-6]. Recurrence and metastasis are the main problems restricting the therapeutic effect in OS [7]. In view of the poor prognosis of OS, some researchers have tried to explore its genetic and molecular mechanisms through individualized diagnosis, such as gene sequencing and chemotherapeutic drug screening. Great progress has been made: chromosome abnormalities, tumor suppressor gene abnormalities, transcription factors, growth factors, WWOX and miRNAs have been reported to play important roles in the occurrence and development of OS [8-16]. However, these approaches are still in their infancy in practice, and there is not yet sufficient evidence that patients benefit from them. At present, the etiology and pathogenesis of OS have not been fully elucidated [17]. Hence, a further understanding of the biological characteristics and pathogenesis of OS is needed to promote the development of targeted therapy for primary and metastatic OS. The mucin (MUC) family is a group of highly glycosylated proteins that mainly provide lubrication and a protective chemical barrier. In cancer research, MUCs have many other special functions, involving tumor cell proliferation, apoptosis, migration, adhesion, invasion and drug resistance. MUC15, a transmembrane mucin, has been reported to be over-expressed in papillary thyroid carcinoma and colorectal carcinomas, where it correlated negatively with prognosis [18,19].
The potential role of MUC15 in OS has never been reported. In this manuscript, we focused on the effects and molecular mechanisms of MUC15 that contribute to the progression and metastasis of OS. As a potential therapeutic target, MUC15 may have major implications for the treatment of OS.

Patients and clinical samples

This study enrolled 41 patients who underwent surgery and had a confirmed diagnosis of OS in Suzhou Ninth People's Hospital and the People's Hospital of Danyang. No patient received preoperative chemotherapy or radiotherapy. The OS tissues and corresponding adjacent normal tissues were stored frozen.

Assessment of tumor cell apoptosis

Apoptosis was quantified using Annexin V-FITC (Miltenyi, 130-092-052) to detect externalized phosphatidylserine and PI (Miltenyi, 130-092-052) to detect plasma membrane disruption. Tumor cells were first pretreated with DDP (10 mg/mL) for 24 h and then stained with Annexin V and PI in binding buffer for 15 min in the dark. Cells were analyzed on a flow cytometer (BD Calibur) and the data were processed using FlowJo software.

Cell proliferation assay

Cell Counting Kit-8 (CCK-8, MCE) was used according to the manufacturer's protocol, and cell proliferation was determined by colorimetric assay.

Transwell assays

The cell invasion assay was performed in Transwell plates with 8-μm pore membranes (Corning) coated with Matrigel. 4 × 10⁴ cells were seeded into the upper chambers in serum-free medium. The lower chamber was filled with medium containing 10% FBS as a chemoattractant. After 48 hours of incubation, cells remaining in the upper chambers were wiped away with a brush. The membrane was then stained with 0.1% crystal violet and imaged under an inverted microscope. The migration ability of cells was assessed in Transwell chambers without Matrigel, using the same procedure as the invasion assay.

Wound healing assay

Cells were seeded in 6-well plates and allowed to reach confluence. Each well was then scraped with a 10-μL pipette tip to create a linear region devoid of cells. Cells were cultured in DMEM without FBS for 24 hours, after which the healing of the scratches was observed under a microscope.

Statistical analysis

Each experiment in this study was repeated 3 times independently. Data are expressed as mean ± SEM. Statistical significance of differences was assessed by Student's t test and ANOVA using GraphPad Prism 5 and SPSS 22.0. For survival assays, comparisons were analyzed by a log-rank test. p < 0.05 was considered statistically significant.

GEO Datasets and clinical specimens reveal high expression of MUC15 in osteosarcoma

We first analyzed differential gene expression between OS and normal bone tissues in the GSE11416 array from the GEO Datasets, and then screened for mRNAs with more than a 2-fold change in expression. MUC15 was found to be expressed at significantly higher levels in osteosarcoma tissues (p=0.0103, Fig. 1A). We then further examined MUC15 expression in OS tumor cell lines and clinical samples to verify these results (Fig. 1B & C). Finally, we evaluated the effect of MUC15 expression on the survival of patients with OS. The clinical data of 41 OS patients showed that MUC15 expression was significantly negatively associated with survival (p=0.0169, Fig. 1D, Fig. S1). These results demonstrate that MUC15 is expressed at significantly high levels in OS, which may correlate with poor prognosis.
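To make the two computational steps above concrete - the 2-fold-change screen on the GSE11416 expression data and the log-rank comparison of survival by MUC15 expression - a minimal Python sketch is shown below. The file names, column layout and the median split used to define MUC15-high and MUC15-low groups are illustrative assumptions, not the authors' actual pipeline (the paper does not state its expression cutoff).

```python
# Hypothetical sketch only: file names, column naming and the median split
# are illustrative assumptions, not the authors' actual analysis pipeline.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Step 1: screen an expression matrix (genes x samples) for >2-fold changes,
# i.e. |log2 fold change| >= 1, between tumor and normal groups.
expr = pd.read_csv("GSE11416_expression.csv", index_col=0)  # assumed layout
tumor_mean = expr.filter(like="OS_").mean(axis=1)    # osteosarcoma columns
normal_mean = expr.filter(like="NB_").mean(axis=1)   # normal bone columns
log2fc = np.log2(tumor_mean / normal_mean)
hits = log2fc[log2fc.abs() >= 1].sort_values(ascending=False)
print("MUC15 log2 fold change:", log2fc.get("MUC15"))

# Step 2: log-rank comparison of survival between MUC15-high and MUC15-low
# patients (here split at the cohort median, an assumed cutoff).
clin = pd.read_csv("os_cohort.csv")  # assumed columns: months, death_event, muc15
high = clin["muc15"] >= clin["muc15"].median()
result = logrank_test(
    clin.loc[high, "months"], clin.loc[~high, "months"],
    event_observed_A=clin.loc[high, "death_event"],
    event_observed_B=clin.loc[~high, "death_event"],
)
print("log-rank p =", result.p_value)

# Kaplan-Meier curves for the two groups
kmf = KaplanMeierFitter()
for label, mask in [("MUC15 high", high), ("MUC15 low", ~high)]:
    kmf.fit(clin.loc[mask, "months"], clin.loc[mask, "death_event"], label=label)
    kmf.plot_survival_function()
```

With 41 patients, a median split gives roughly balanced groups; any alternative cutoff (for example, an expression threshold optimized on the data) would need correction for multiple testing.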
High expression of MUC15 promotes proliferation, migration and invasion in OS cells

Proliferation, migration and invasion are key factors in malignant transformation. To investigate the potential effect of MUC15 in OS cells, we knocked down MUC15 expression in the HOS and U-2OS cell lines using RNA interference (Fig. 2A). At the cell line level, the effects of MUC15 expression on HOS and U-2OS proliferation, migration and invasion were assessed using CCK-8, flow cytometry, transwell and wound healing assays. The results showed that knockdown of MUC15 in OS cells significantly reduced the proliferation ratio, increased the apoptosis rate, and inhibited migration and invasion (Fig. 2B-E). In summary, these data demonstrate that high expression of MUC15 plays a critical role in OS cell proliferation, migration and invasion and may be an important target for clinical treatment of OS.

Mechanism of MUC15 in osteosarcoma proliferation, migration and invasion

In this study, possible mechanisms of MUC15 in the progression and metastasis of OS were explored. First, the differential expression of the apoptosis-inhibiting protein Livin between MUC15 NC (high) and MUC15 KD (low) OS cell lines suggested that MUC15 affects cell proliferation and apoptosis through Livin (Fig. 3A). Second, the depth of invasion, metastatic distance and vascular permeability of OS cells are related to the expression of MMPs and EMT-related proteins [21,22]. Compared with MUC15 NC cells, MMP-9, MMP-2 and Vimentin proteins in the MUC15 KD group were significantly down-regulated, while E-cadherin was up-regulated, which partly explains how MUC15 promotes the migration and invasion of OS cells (Fig. 3B & C). Last, activation of signaling pathways is often abnormal in the occurrence and development of OS. We therefore examined the Wnt/β-Catenin signaling pathway, which is important in regulating the biological characteristics of tumor cells and the progression of the disease [23]. Western blot showed that the expression levels of β-Catenin and c-Myc (Wnt/β-Catenin signaling pathway-related proteins) in MUC15 KD OS cells were lower than those in MUC15 NC cells (Fig. 3D). These results suggest that MUC15 promotes OS cell proliferation, invasion and migration by resisting apoptosis, regulating the levels of MMPs and EMT-related proteins, and activating the Wnt/β-Catenin signaling pathway.

Discussion

In recent years, great progress and a series of new breakthroughs have been made in the treatment of OS; limb salvage therapy supported by neoadjuvant chemotherapy has been widely adopted in the clinic, and patients' quality of life has been greatly improved. However, owing to our limited understanding of the pathogenesis of OS, the survival and prognosis of this invasive bone tumor have hardly improved. Patients with recurrent or metastatic osteosarcoma are usually resistant to standard chemotherapy. When traditional surgery and chemotherapy can no longer effectively control metastasis, new treatment strategies urgently need to be explored [24][25][26]. With the development of molecular biology research, studies on immunotherapy, gene therapy and molecular targeted therapy provide more hope for the treatment of OS. Of note, molecular targeted therapy offers stronger accuracy and specificity with fewer side effects. In this regard, numerous studies have worked on searching for new therapeutic targets.
Recently, molecules involved in OS cell migration, invasion, angiogenesis, apoptosis, and proliferation have been demonstrated to be reliable biomarkers and therapeutic targets, such as IGF-R, EGFR, VEGF, AURKA, and some miRNAs/lncRNAs [27][28][29]. Further research on the molecular targets of osteosarcoma and their mechanisms will hopefully provide new insights into therapies. MUC proteins, as well-known solid tumor antigens (especially MUC16), are routinely used for disease monitoring. However, there is still little functional information about other transmembrane MUC proteins. In this study, we first demonstrated the high expression of MUC15 in OS tumor cell lines and clinical samples. Subsequently, we investigated the role and mechanism of MUC15 in promoting OS proliferation, migration and invasion, although a recent study reported a tumor-suppressing role for MUC15 in renal cell carcinoma. MUC15 has also been shown to act as an oncogene in the development of cancer and to influence cellular growth, adhesion, invasion, metastasis and immunosuppression. This evidence indicates that MUC15 plays different roles in different types of cancer, possibly because it is involved in many biological regulatory functions. Limitations of this research should not be ignored. Since the available antibodies were specific only to human proteins, experiments could only be carried out at the cellular level; the molecular mechanisms of MUC15 in vivo remain to be confirmed. In addition, the relatively small sample size likely contributes to the lack of significance in the analysis of correlations between MUC15 expression and clinical characteristics (such as Enneking stage, age, and gender). Future studies should incorporate more OS cell lines and evaluate more clinical cases. Targeted therapy strategies for MUC15 and its underlying mechanisms will provide a theoretical basis for innovative OS therapies.

Ethics committee approval and patient consent

All experiments were approved by the Ethics Committee for Human Studies of Suzhou Ninth People's Hospital and the People's Hospital of Danyang and followed the Declaration of Helsinki. All participants signed written informed consent before the study.
2020-12-17T09:11:42.375Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "9a985095e7d13546137644f9a1bfc95e404ef325", "oa_license": "CCBY", "oa_url": "https://www.jcancer.org/v12p0467.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "ff9096462464ac7d3c81c78516f9516640d43276", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
134364257
pes2o/s2orc
v3-fos-license
Towards a New Paradigm of Urban Water Infrastructure: Identifying Goals and Strategies to Support Multi-Benefit Municipal Wastewater Treatment

Over the past decade, water professionals have begun to focus on a new paradigm for urban water systems, which entails the recovery of resources from wastewater, the integration of engineered and natural systems, and coordination among agencies managing different facets of water systems. In the San Francisco Bay Area, planning for nutrient management serves as an exemplary model of this transition. We employed a variety of methodological approaches including stakeholder analysis and multi-criteria decision-making weight elicitation.

Introduction

Throughout the world, researchers and practitioners have recognized the need to move towards a more sustainable paradigm for wastewater treatment and water management [1][2][3][4][5][6][7][8][9][10]. This new paradigm entails a shift in goals and expectations for municipal wastewater treatment by encouraging the recovery of water, energy, and nutrients from sewage, by employing natural systems for water treatment, and by coordinating among agencies managing different facets of water systems. The implication is that wastewater treatment plants should do more than meet their traditional objectives of protecting receiving water quality by removing organic matter, nutrients, and pathogens from sewage. In the United States, much of the existing municipal wastewater infrastructure is nearing the end of its design life [11]. In the next two decades, hundreds of billions of dollars will be needed to maintain wastewater systems, which amounts to an investment of approximately $830 per person in the United States [11][12][13]. Population growth, sea level rise, and concerns about the impacts of nutrients and trace organic contaminants in wastewater may require additional investments [7,[14][15][16][17][18]. Historically, regulatory compliance has been a main driver for wastewater infrastructure planning [19]. Yet this traditional approach, in which pollution problems identified by regulators are solved by retrofitting existing treatment systems, may not be sufficient for transitioning urban water systems to a more sustainable state [20,21]. Instead, institutional shifts that embed regulatory and political support for multi-benefit infrastructure early in planning processes may be more effective [22]. Furthermore, cooperative regional approaches to water management are often less expensive and more efficient [23], such as when preparing for uncertain future conditions [24]. Despite their potential benefits, many institutional impediments exist to implementing multi-benefit water infrastructure projects, including a lack of coordination among institutions with different areas of expertise and jurisdiction, unclear roles and responsibilities of different agencies and stakeholders, poor communication, and a lack of long-term strategy [25]. Nutrient pollution exemplifies some of the key limitations of traditional wastewater infrastructure planning. Wastewater treatment facilities have historically enacted plant upgrades in response to regulatory concerns about the effects of nutrient pollution on receiving waters. These upgrades are generally energy intensive and expensive [26][27][28]. Upgrades frequently consist of the installation of treatment systems that employ nitrification and denitrification or biological nutrient removal [29].
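To make the scale of such upgrades concrete, a rough, hedged sketch of the underlying load arithmetic follows: the nitrogen mass an effluent stream carries, and how a nutrient-removal retrofit would reduce it. The flow, concentration, and removal fraction below are invented placeholders, not values from this study.

```python
# Back-of-the-envelope nitrogen load for a hypothetical treatment plant.
flow_mgd = 10.0          # effluent flow, million US gallons per day (assumed)
tn_mg_per_l = 25.0       # total nitrogen concentration, mg/L (assumed)
removal_fraction = 0.80  # removal assumed for a BNR retrofit (illustrative)

liters_per_day = flow_mgd * 1e6 * 3.785                     # 1 US gallon ~ 3.785 L
kg_n_per_year = liters_per_day * tn_mg_per_l * 365 / 1e6    # mg -> kg

print(f"Annual TN load: {kg_n_per_year:,.0f} kg N/yr")
print(f"Post-retrofit load: {kg_n_per_year * (1 - removal_fraction):,.0f} kg N/yr")
```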
Despite large capital investments, nutrient reductions do not always immediately improve conditions if water quality is severely impaired or if there are multiple pollution sources, as in Chesapeake Bay [28]. Additionally, changes to municipal water infrastructure require years or decades to plan, fundraise, and build. With unknown future conditions, such as those due to population growth or decline and climate change, investments in nutrient control may not always result in the desired ecological improvements [30]. In cases in which dynamic environmental conditions complicate decision-making about water infrastructure, multi-benefit technologies may hedge against the risks posed by future uncertainty. For example, nutrient pollution may ultimately prove to be less problematic than expected if environmental conditions change or the population decreases. Irrespective of future conditions, a multi-benefit solution to nutrient pollution that provides the additional benefits of wildlife habitat, increased shoreline access, or resource recovery can be seen by stakeholders as a net benefit overall. Fundamentally, transitions to more sustainable wastewater systems require clear articulation of a long-term vision. This includes the sharing of ideas among stakeholders to define the specific goals sustainable water systems should meet and general agreement about the technologies that could support these goals [31]. Despite its importance, the development of this shared vision is often overlooked, even in cases that take a deliberative approach to multi-benefit infrastructure [32,33]. A comparison of stakeholders' goals with their professional and institutional mandates can shed light on some of the barriers to implementing multi-benefit water infrastructure projects.

Case Study Background

To characterize and develop the specific, regional goals that underlie a more sustainable vision of wastewater infrastructure, we analyzed a case study of planning for nutrient management in the San Francisco Bay Area, California. The southern reach of San Francisco Bay receives approximately 34,000 kg of nitrogen each year, primarily from discharges from eleven municipal wastewater treatment plants [34][35][36]. These discharges make the San Francisco Bay one of the most heavily nutrient-laden estuaries in the nation in terms of concentration in Bay water [37]. Domestic sewage is the main nutrient source in municipal wastewater in locations such as the San Francisco Bay Area, where industrial discharges are small [38]. During the second half of the 20th century, primary productivity in the San Francisco Bay was limited by sunlight penetration. Consequently, eutrophication was not as much of a concern in the Bay as it has been in other nutrient-rich aquatic ecosystems [37]. However, water managers are concerned that current nitrogen loads could soon result in poor water quality and impairment of the Bay's beneficial uses due to shifting environmental conditions like increasing water clarity, longer water stratification periods, and declining populations of invasive bivalves [39][40][41][42][43]. In the Bay Area, water managers are proactively addressing nutrient pollution before the ecological situation deteriorates. They are aware that infrastructure investments can take years to materialize and that changing environmental conditions may increase nutrient over-enrichment in the future.
By proactively addressing nutrient loading, Bay Area water managers have more leeway to be visionary and to consider new paradigms for multi-benefit wastewater infrastructure than they would if reacting to acute impairment of water quality. As an initial step toward addressing nutrient pollution and developing reduction strategies, dischargers, regulators, baylands stewards, and scientists in the region have established a stakeholder working group. It consists of a steering committee, a stakeholder advisory group, a technical working group, and a science team [44]. In 2014, the local regulator, the San Francisco Bay Regional Water Quality Control Board, implemented a watershed-wide nutrient-related permit for dischargers. The permit is valid until 2019 and mandates that dischargers monitor nutrient loads in their effluent and annually fund scientific studies to assess nutrient effects on Bay ecology. Dischargers must also identify opportunities for removing nutrients from wastewater effluent [45]. Along with examining the potential for treatment plant upgrades to lower nutrients in wastewater effluent, the permit also specifies, "Dischargers may evaluate ways to reduce nutrient loading through alternative discharge scenarios such as water recycling or use of wetlands, in combination with, or in-lieu of, the upgrades to achieve similar levels of nutrient load reductions [45]." The language in the 2014 permit reflects the local sentiment that next-generation wastewater treatment could achieve more than just safe effluent discharge. This sentiment applies to water management more broadly in the region: regional strategic planning documents for water, like the San Francisco Bay Area Integrated Regional Water Management Plan (IRWMP), mirror the desire for multi-benefit water infrastructure. For example, the IRWMP aims to "encourage implementation of integrated, multi-benefit projects", "reduce energy use and/or use renewable resources", "plan for and adapt to sea level rise", and "increase recycled water use" [46]. A regulator at the San Francisco Regional Water Quality Control Board explained in an interview: "We're not just going down this linear path to deal with nutrients. We've said from Day One that we want it to be more complicated than that because we want to make a wise decision in terms of the future of managing water and wastewater . . . we want to feel good about the decision we made 50 years from now." Nationally, there has been a push in recent years to address excessive nutrient loading into surface waters [47]. After the complicated and costly experience of trying to control nutrients in the Chesapeake Bay [27,28], many water managers across the country are looking to the Bay Area for guidance on how to proceed with nutrient management in a manner that encourages a long-term transition to multi-benefit water infrastructure. According to a regulator at the Environmental Protection Agency Region IX (EPA): "Most of the folks in DC who I've talked to about the San Francisco example view it as potentially . . . a national model on how to do this right." The case of the Bay Area therefore offers insight into nutrient management strategies nationwide and highlights opportunities and obstacles to transitioning to a new paradigm of multi-benefit urban water infrastructure more broadly. Since nutrient management is a global issue of great concern, the case is also of high interest internationally.
Our case is especially interesting because the involved individuals have high motivation for developing multi-benefit infrastructure and hold power within bureaucratic, historically slow-to-innovate regulatory agencies and wastewater utilities. By focusing on this important case study, our research aims to identify general strategies for planning next-generation water systems that fulfill multiple goals. It does so by characterizing stakeholders' long-term objectives and by analyzing the social, institutional, and technical impediments to planning and implementing multi-benefit wastewater infrastructure. It examines the ways in which current institutional structures and modes of decision-making help or hinder the transition to a new paradigm for urban water systems. It also investigates the possibility of new institutions, relationships, or processes that can support these objectives. By demonstrating the ways in which well-established techniques for eliciting context-specific goals and strategies with local stakeholders, including stakeholder analysis, multi-criteria weight elicitation, and secondary document analysis, can be employed as part of an integrative, mixed-method approach for making decisions about real-world environmental policy issues, we provide a replicable example to support planning for other multi-benefit water resources initiatives.

Methods Overview

To assess stakeholder perspectives on long-term goals for nutrient management, barriers to implementation of multi-benefit wastewater infrastructure, and suggestions for overcoming these barriers, we used a mixed-method approach. We proceeded in the following step-wise manner.

1. We conducted initial interviews with a broad set of stakeholders. These were designed to elicit perspectives on long-term goals for nutrient management in the region as well as potential management options. The results of the interviews were integrated to provide objectives for "good nutrient management."
2. We conducted in-depth, follow-up interviews with a subset of the original stakeholder group. These interviews, which built upon the results of the initial interviews, were designed to elicit the relative importance of different objectives to decision-making about nutrient management. We used both a qualitative approach (in-depth explanations) and a method borrowed from Multi-Criteria Decision Analysis (MCDA) to elicit the relative weights of objectives.
3. We performed a stakeholder/institutional analysis. Information from both sets of interviews was synthesized to understand stakeholder perspectives on barriers to implementing wastewater infrastructure that meets the diverse set of goals mentioned, as well as strategies to overcome these barriers.
4. We analyzed regional planning documents (e.g., [46,48]), strategic water management plans at the utility and city scale [49][50][51][52][53][54], and official mission statements and job descriptions to contextualize and triangulate interview responses. A comparison of official institutional documents with interview responses provided insight into institutional drivers of and barriers to multi-benefit water infrastructure. Findings from the document analysis are presented in the discussion in relation to the results of the stakeholder interviews.

Initial Interviews

Stratified sampling and snowball sampling were combined [55] to select stakeholders for first-round interviews; a minimal sketch of the snowball step is shown below.
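As a hedged illustration of the snowball component, the following sketch expands a stakeholder sample wave by wave over a referral network of "who named whom." The stakeholder labels and referral edges are hypothetical; in the study this step was combined with stratified selection rather than run as a stand-alone algorithm.

```python
# Illustrative snowball sampling over a hypothetical referral graph.
from collections import deque

referrals = {  # hypothetical "who named whom" edges from interview notes
    "regulator_A": ["discharger_B", "scientist_C"],
    "discharger_B": ["steward_D"],
    "scientist_C": ["steward_D", "planner_E"],
    "steward_D": [],
    "planner_E": ["regulator_A"],  # cycles are fine; the visited set prevents loops
}

def snowball(seeds, waves=2):
    """Collect everyone reachable from the seed stakeholders within `waves` referral waves."""
    sampled = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        person, wave = frontier.popleft()
        if wave == waves:
            continue
        for named in referrals.get(person, []):
            if named not in sampled:
                sampled.add(named)
                frontier.append((named, wave + 1))
    return sampled

print(sorted(snowball(["regulator_A"])))  # reaches all five hypothetical stakeholders
```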
Stakeholders were initially identified based on their professional interest in nutrient loading in the San Francisco Bay, including whether they were involved with decision-making or would be affected by the decisions made [56,57]. The selected group included water managers, baylands stewards, researchers, engineers, regulators, urban planners, flood control managers, and advocates for the coastal industry or the environment at local, regional, and federal scales [58]. Individuals within organizations were selected based on their professional involvement with San Francisco Bay nutrient management, as shown by their authorship of documents or presentations pertaining to the issue. If no one in an organization was closely affiliated with nutrient management, the person with the most responsibility for strategic planning was contacted using publicly available professional email addresses. A set of stakeholders with diverse professional roles, operating on different scales (i.e., local, regional, and federal), was sampled. Once interviews commenced, snowball sampling [59,60] was used to identify other stakeholders. Participants were asked to rate, on a scale of 1-7, their own influence over decision-making as well as how much decisions made about nutrients would affect them. They also rated the influence of others and the extent to which others would be affected. This information was used to determine the set of stakeholders involved and to better characterize the local social networks [55]. Multiple stakeholders from a single organization were contacted when they had distinct roles in the decision-making process about nutrient management and when they were identified by other stakeholders in snowball sampling. Several stakeholders represented more than one organization (e.g., one person was the director of an industrial advocacy group and on the board of a public wastewater utility). Of the 88 individuals contacted initially, 32 stakeholders (representing 29 different organizations) agreed to participate in an interview. They were categorized according to their professional role and their relevance to decision-making (see Supplemental Information, Table S1). We conducted these initial in-depth, semi-structured interviews with 32 stakeholders, using open-ended questions to elicit information about their goals for "good nutrient management" in the San Francisco Bay. "Good nutrient management" was chosen as the primary management objective based on a previous study of sustainable water infrastructure planning in which stakeholders described goals for "good water supply and wastewater disposal infrastructure" [55,61]. We chose the phrase "nutrient management" (rather than "nutrient control") to reflect the language in the regional Nutrient Management Strategy [44]. These interviews yielded more than 60 goals for "good nutrient management" in response to the questions "In your opinion, what are the most important goals for any nutrient management scheme or technology?" and "What are the most important goals for good nutrient management in San Francisco Bay?" (Table S2). Objectives concerned the process of managing nutrients (e.g., collaboration among people in different fields to develop a management plan, and basing regulatory limits on site-specific scientific evidence of effects) as well as goals characterizing the result of nutrient management (e.g., building systems that are resilient to sea level rise or that result in good water quality).
Goals that characterized the end result of good nutrient management, in keeping with the philosophy of "value focused thinking" [62][63][64][65], were emphasized. To reduce the number of fundamental objectives for ease of mental processing [66,67], similar goals were combined (e.g., "low costs" and "low initial capital investment"). Goals that served a more fundamental objective (e.g., "consider the low-hanging fruit for infrastructure upgrades" was deemed a means to "low initial capital investment") were eliminated [68]. One objective was added by the researchers ("ease of use of the nutrient control technology or system"), since decision-makers tend not to articulate all objectives that are important to them for any decision [69]. This process yielded 13 separate goals. We created an objectives hierarchy from the final list of objectives by categorizing them into overarching categories. The sub-objectives describe the scope of the different goals in each category [68]. Even though they were not included in the objectives hierarchy, the process-oriented goals are characterized in the discussion section of this paper.

Initial interviews lasted 30-90 min and were conducted primarily one-on-one over the phone, with the exception of four individuals from one organization who asked to be interviewed in person together. These four individuals first filled out surveys with open-ended questions to elicit individual preferences and points of view, and then engaged in group discussion for the remainder of the two-hour interview.

Follow-Up Interviews

Follow-up interviews were conducted with nine stakeholders and decision-makers (a subset of the original 32) who were closely involved in planning for nutrient management in the San Francisco Bay Area. We chose this subset by performing a cluster analysis based on each stakeholder's stated goals for nutrient management in the first interview (see [70]). From each of the seven resulting clusters, we contacted the stakeholders whom we had classified as most relevant to decision-making (on a scale of 1 to 4, with 1 being most engaged with or affected by decision-making about nutrient loading; Table S1) to participate in a second interview. In the follow-up interviews, stakeholders verbally confirmed the objectives hierarchy by examining the list. Stakeholders were asked to explain whether they would endorse or oppose hypothetical options for nutrient management (i.e., wetlands for wastewater treatment or traditional upgrades). Their responses were also analyzed to confirm that all stated goals were represented in the objectives hierarchy. Furthermore, in-depth explanations of the importance of each objective were elicited, as well as each objective's relative importance to decision-making from each stakeholder's perspective. Elicitation of the objectives' relative importance is standard practice in MCDA. We applied the popular Swing method [71,72], in which interviewees assigned points (from 0-100) for the importance of improving each objective from its worst to its best state. These point values were then confirmed by comparison to an initial ranking of the importance of each objective. Quantitative weights (on a scale of 0-1) were then calculated for each objective and each stakeholder by normalizing the assigned points. Weight elicitation requires the respondent to make trade-offs between achieving different objectives [73].
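A minimal sketch of the Swing normalization just described, assuming one stakeholder's hypothetical point assignments (the objective labels are abbreviated from Figure 1 and the point values are invented for illustration):

```python
# Swing weights: the most important objective's worst-to-best "swing" gets 100
# points; the others are scored relative to it, then normalized to sum to 1.
swing_points = {  # hypothetical assignments from one stakeholder
    "maximize_water_quality": 100,
    "maximize_wetland_habitat": 70,
    "minimize_costs": 60,
    "sea_level_resilience": 30,
}

total = sum(swing_points.values())
weights = {obj: pts / total for obj, pts in swing_points.items()}

for obj, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{obj}: {w:.3f}")  # weights on a 0-1 scale, summing to 1
```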
In order for weight elicitation to be accurate, it is especially important to consider the range, i.e., the best-possible and worst-possible outcome of each objective [68]. These best-possible and worst-possible values were carefully prepared beforehand. They were derived from specific decision options about nutrient control that emerged from the initial interviews and from relevant local documents on nutrient management (i.e., permits and planning documents [45,74]). These nutrient management options included: (i) doing nothing; (ii) building traditional wastewater treatment plant upgrades for nutrient control at each nutrient discharge location (i.e., biological nutrient removal); (iii) constructing shoreline wetlands downstream of nutrient discharge locations to remove nutrients from secondary wastewater effluent; (iv) increasing wastewater recycling (i.e., diversion of nutrient-laden effluent from the Bay); and (v) developing urine source-separation and treatment with reuse of nutrients as fertilizer. The development of these options is described in more detail in a companion paper, which uses a formal MCDA process to identify regional strategies for nutrient management in the San Francisco Bay Area [70]. Follow-up interviews were conducted in person and took 60 to 120 min. All interview notes and recordings were transcribed and then coded using MaxQDA software (VERBI Software GmbH, Berlin, Germany). The research protocols and interview guidelines were approved by the Institutional Review Board of the Committee for the Protection of Human Subjects at the University of California, Berkeley (protocol #2015-01-7091). All interview participants gave informed consent before participating.

Stakeholder/Institutional Analysis

Interview questions eliciting information about stakeholders' relative decision-making power and influence in the initial interviews were triangulated with documents about decision-making procedures for nutrients and for water quality, both regionally and federally. For example, some respondents indicated that regulators at the US Environmental Protection Agency (EPA) had ultimate power over decision-making about nutrients, which was confirmed by documents on the EPA's power to promulgate water quality standards [75]. Interview questions in which stakeholders described their institutional roles and constraints were triangulated with official job descriptions, organizational websites and mission statements, and regional and organizational strategic planning documents. For example, one discharger stated that they were obligated to evaluate different options for nutrient control, which was confirmed in the official nutrient watershed permit [45]. Responses about barriers to multi-benefit infrastructure and strategies to overcome them emerged in different parts of the interviews. Some were elicited by asking about the process of decision-making in the initial interviews (e.g., "Tell me about the process of decision-making about nutrient management thus far. What have been some of the milestones in the process?"). Other barriers and strategies emerged from elicitation of potential management options in the initial interviews (e.g., "How are people in the field talking about solving the nutrient problem? What do you think should be done, if anything?").
Still other barriers and strategies to overcome them were offered in the second, follow-up interviews during discussion of the objectives and potential management options.

Objectives for Good Nutrient Management

Thirteen fundamental objectives for "good nutrient management" in San Francisco Bay were developed and grouped into five overarching categories (Figure 1). These objectives were developed to be as complete as possible (i.e., they take into account the most important factors influencing the decision), without redundancies (i.e., objectives do not have overlapping meaning), and to be measurable (as accurately and unambiguously as possible) [68]. Descriptions of the objectives (in the order shown in Figure 1) are given below. Supporting quotations from stakeholders who described the importance of each objective are in the Supplemental Information (Table S3).

Resilience to sea level rise: Much of the Bay Area's wastewater treatment infrastructure is located at the shore of the Bay and is vulnerable to sea level rise [16]. Developing resilience to sea level rise while investing in wastewater infrastructure is important for many stakeholders.

Flexible system adaptation: Good nutrient management should be able to adapt quickly and easily to shifting external conditions, to tightening regulations, and to other factors like population growth (or decline). If there is an indication that the Bay ecosystem is on the cusp of eutrophication, nutrient management strategies should be able to adjust quickly.
Minimize greenhouse gas emissions: Some options for nutrient management are energy intensive or require energy-intensive materials (e.g., cement) in their construction, embodying large amounts of greenhouse gases over the system's life-cycle [76,77].

Maximize Bay water quality related to nutrients: Good nutrient management should prevent any deviation from ambient nutrient-related conditions that could impair the Bay's beneficial uses, which include biological goals like fish habitat and spawning as well as human goals like recreation [78].

Maximize wetland habitat: Increased wetland habitat was seen by several stakeholders as a relevant goal for good nutrient management. Healthy wetland ecosystems are considered imperative for a thriving Bay ecosystem [79][80][81] because they provide habitat for rare, endangered, and migratory species and help increase shoreline resiliency to sea level rise [82,83].

Increase useable water supply: After enduring a long drought between 2011 and 2017, water supply is at the forefront of many Bay Area water managers' thoughts. Stakeholders stated that, as they address nutrient-related concerns, wastewater utilities should concurrently consider ways to augment water supplies through increased recycling of wastewater for irrigation or potable uses [84].

Increase resource recovery: Currently, there is little economic incentive to recover and reuse nutrients. However, generating a potential revenue stream and contributing to a closed-loop nitrogen and/or phosphorus cycle by applying wastewater-derived nutrients as fertilizer to crops [1] were viewed as goals of nutrient management.

Maximize removal of contaminants of emerging concern: Good nutrient management may also control other unregulated chemicals present in wastewater (e.g., pharmaceuticals, personal care products, or pesticides) that are not completely removed by most secondary wastewater treatment systems [14].

Public ease of use: The urban wastewater system is currently extremely easy for the public to use. Properties are directly connected to a sewer system that requires little to no maintenance by the public. To assess potential responses to source-separating toilets designed to recover nitrogen-rich urine from wastewater [85], the researchers added the "public ease of use" objective. This objective helps differentiate between the existing plumbing system and a urine-separating system that might require adjustments by members of the public (e.g., men might be required to sit when urinating, and source-separating toilets might require additional maintenance).

Beautiful Bay and shoreline access: Controlling nutrient loading to the Bay is likely to incur significant public costs in the form of rate increases for wastewater treatment. To garner support for nutrient control spending, it is important that the public be able to appreciate their spending through improved access to aesthetically pleasing places on the Bay shoreline.

Ease of permitting: Ease of permitting for nutrient control saves wastewater utility staff time and money. It also implies agreement among multiple stakeholders (wastewater managers and regulators) about the legitimacy of a nutrient management option (e.g., it reduces uncertainty about whether the option will be controversial or subject to delays and added requirements).
Minimize initial capital investment, operations, and maintenance costs: By convention and due to the nature of public utilities, good nutrient management systems (like all urban water systems) should minimize costs.

Technical reliability: Knowing with confidence that a wastewater treatment technology will perform in a reliable manner has historically been a leading decision criterion for wastewater engineers [29].

Not every stakeholder mentioned each of these goals in the initial interviews (Table 1). However, when the goals mentioned by other stakeholders were presented as possibilities in follow-up interviews, most were considered important to decision-making even by people who had not originally mentioned them. This finding underscores the importance of gathering a broad set of stakeholder goals and then weighing the relative importance of these goals in two separate steps, since any individual stakeholder is unlikely to mention all the objectives that he or she actually takes into consideration in a decision context [86].

Table 1. Number of stakeholders who mentioned each goal for "good nutrient management" in initial interviews (of 32 total).

Resilience to sea level rise: 4
Flexible system adaptation: 4
Minimize greenhouse gas emissions: 4
Maximize Bay water quality related to nutrients: 24
Maximize wetland habitat: 9
Increase useable water supply: 13
Increase resource recovery: 8
Maximize removal of contaminants of emerging concern: 7
Public ease of use: 1
Beautiful Bay and shoreline access: 3
Ease of permitting: 1
Minimize initial capital investment, operations, and maintenance costs: 12
Technical reliability: 3

The nine stakeholders who participated in the follow-up interviews had differing opinions about the relative importance of each goal to decision-making about nutrient management (Figure 2). It is notable that many less-traditional goals for nutrient management (like the provision of wetland habitat, increased resource recovery, and increased shoreline access) were important to most stakeholders. There was wide variation in the importance of incorporating resilience to sea level rise in decision-making, with some stakeholders listing it as the most important criterion and others assessing it as of no importance (for specific point values assigned to criteria, see Supplemental Information, Figure S1; for individual stakeholder opinions on criteria, see Figure S2). When grouped into main objectives for nutrient management, results varied depending on whether the average values per category or the summed values within each category are presented (see Supplemental Information, Figure S3).
This is because some categories, like "Intergenerational Equity", have three sub-objectives (resilience to sea level rise, flexible system adaptation, and minimize greenhouse gas emissions) while others, like "Ecosystem", have only two (maximize water quality and maximize wetland habitat). In both cases, preservation of the Bay ecosystem ranks among the most important main objectives and social support ranks lowest. The 13 goals can be categorized into those in line with traditional wastewater infrastructure upgrades and those indicative of a new paradigm of increased expectations for multi-benefit wastewater treatment (Figure 1). These categorizations were made through document analysis as well as stakeholder interviews. While some goals fall within the institutional purview of the stakeholders, others fall outside their professional mandates. Traditional wastewater infrastructure goals tend to fall within the dischargers' institutional mandates: they must gain regulatory permission to use new technologies (ease of permitting) and comply with regulations like the Clean Water Act that protect water quality (maximize water quality). They must also be fiscally responsible with public funds (minimize costs) and consistently meet regulations (technical reliability). Regulators' mandates also support traditional wastewater infrastructure goals: they must develop permits that dischargers can meet (ease of permitting) and they must protect beneficial uses in the Bay (maximize water quality). Of the goals indicative of a new paradigm of wastewater infrastructure, several fall within the mandates of professionals who are usually not responsible for planning municipal wastewater treatment plant operations, such as urban planners (beautiful Bay and shoreline access), water supply agencies (increase potable water supply), and baylands stewards (maximize wetland habitat). In the San Francisco Bay case, some entities that operate municipal wastewater treatment plants are also responsible for water supply (e.g., the San Francisco Public Utilities Commission), and the region's nutrient stakeholder working group includes baylands stewards and scientists on its Steering Committee [44]. Thus, entities responsible for the goals of increasing potable water supply and maximizing wetland habitat are involved in the Bay Area nutrient issue. However, the staff members usually responsible for these issues work in different divisions of their organizations and may not have the ability to allocate resources from one part of the agency to another to solve the problem. Many of the goals stakeholders have for nutrient management do not fall within their institutional mandates, including flexible system adaptation, resource recovery from wastewater, minimizing greenhouse gas emissions, shoreline access, and resilience to sea level rise (Table 2). These goals are indicative of a new paradigm of wastewater infrastructure. The fact that they are being considered by representatives involved with nutrient management is indicative of stakeholders' resolve to enact their vision of next-generation wastewater infrastructure.

Impediments to Multi-Benefit Wastewater Infrastructure Planning and Implementation

Despite strong sentiments among many stakeholders that nutrient control strategies should ideally provide additional benefits to the Bay, many stakeholders identified barriers to multi-benefit wastewater infrastructure planning and implementation.
These perceived barriers fall into institutional, social, and technical categories (Table 3). Supporting quotations from stakeholders are included in the Supplemental Information (Table S4).

Table 3. Perceived barriers to planning and implementation of multi-benefit wastewater systems.

Institutional barriers:

Leadership. Who is in charge? There is concern that multi-benefit infrastructure projects would lack leadership because they bridge the mandates of existing institutions. A related concern is that a lack of institutional leadership would lead to conflicts, because each institution is accountable to different board members and/or constituents.

Collaboration. Can managers of separate organizations effectively collaborate? There is concern about the complexity of collaborating across institutions for wastewater treatment, water supply, habitat restoration, and other functions to implement multi-benefit projects. Project implementation depends on social networks that individuals have established, because the institutional connections are lacking. Planning for sea level rise is particularly challenging because no single agency is currently tasked with it.

Permitting. Can multi-benefit projects fit into existing regulatory permit structures? Obtaining regulatory permits for multi-benefit projects is difficult, primarily due to a lack of regulatory precedent for many of these systems (e.g., wetlands for wastewater treatment would likely vary seasonally in their nutrient removal efficacy) or for innovative technologies with less of a track record.

Risk tolerance. Can decision-makers tolerate the higher level of risk needed to adopt innovative technologies? Adopting innovative multi-benefit technologies is difficult because wastewater utility managers place a strong value on technologies that can reliably comply with regulations. Multi-benefit wastewater infrastructure projects that rely on natural systems for water treatment (e.g., constructed wetlands) or that depend on the public to employ new technology (e.g., source-separating toilets) are inherently less reliable than traditional infrastructure in which most ambient conditions are controlled.

Social barriers:

Public opinion. For decentralized options, can the public agree to interact more with wastewater treatment? There is concern that some multi-benefit technologies (e.g., urine source-separation with nutrient recovery) could require a behavior change from users. Citizens may have to shift from having little role in wastewater treatment (currently limited to flushing the toilet and paying a sewage bill) to taking a more active role. While some stakeholders found the idea repugnant, others thought there might be a learning curve aided by an education campaign.

Public compliance. How do we ensure compliance for technologies that require user responsibility? There is skepticism that the public can be relied upon to consistently participate in decentralized technologies like urine source separation.

Technical barriers:

Effects on existing treatment. How will new treatment options change the function of existing systems? There is concern that innovative technologies may change the composition of influent to or effluent from existing wastewater treatment plants. For example, decentralized or satellite water recycling technologies might result in less influent to municipal wastewater treatment plants.
Strategies to Overcome Barriers to Multi-Benefit Wastewater Infrastructure

Many stakeholders provided practical suggestions for overcoming some of the barriers to multi-benefit wastewater infrastructure planning and implementation. Each suggestion requires a set of stakeholders in particular roles to take action to overcome these barriers (Table 4). Supporting quotations can be found in the Supplemental Information (Table S5).

Table 4. Suggested strategies to overcome barriers to multi-benefit wastewater infrastructure in the San Francisco Bay Area. N/A: no interview responses addressed how to overcome this barrier.

Leadership: N/A

Collaboration: Establish networking relationships among agencies, organizations, and water managers before decisions need to be made, to support cross-sectoral problem-solving (e.g., through meetings to discuss regional water quality monitoring) [All]. Conduct integrated assessments of the Bay's ecology (in addition to site-specific monitoring to ensure regulatory compliance) to lay the groundwork for holistic regional visioning and planning.

Discussion

Our results suggest that, in addition to objectives for nutrient management pertaining to the traditional role of wastewater treatment (e.g., good water quality, technical reliability, and low costs), other objectives related to the development of multi-benefit infrastructure are also prominent for many stakeholders in the Bay Area. However, it is noteworthy that not all stakeholders are interested in a new paradigm of wastewater infrastructure. For example, one stakeholder we interviewed primarily expressed goals related to traditional water infrastructure paradigms and was strongly averse to goals outside that scope (e.g., they gave no value to resilience to sea level rise, recovery of nutrients from wastewater, or water supply). Defining the role of wastewater treatment in response to issues beyond nutrient pollution may be necessary before stakeholders choose regional solutions for nutrient management. Some of the broader goals stakeholders mentioned could arguably be cast as prudent engineering. For example, flexible system adaptation is not a mandate for dischargers, but it is considered good practice to build a wastewater treatment system that will remain useful throughout a design life of three or four decades. Likewise, removing contaminants of emerging concern from wastewater could preempt the need to build additional treatment systems if these compounds are regulated in the future [29]. Other less-traditional goals for nutrient management, like resilience to sea level rise, increasing the area of wetland habitat, and reducing greenhouse gas emissions, may improve wastewater utilities' public images by explicitly aligning their actions with local pro-environmental values. Improving utilities' "brand" in this way may make it easier for them to gain community support and raise funds for projects [87]. Despite the benefits of achieving these broader objectives, it is notable that many of the goals reflective of a new paradigm of water infrastructure fall outside of stakeholders' institutional mandates. Dischargers are tasked with regulatory compliance and reliable service. Regulators must uphold state and federal rules for preventing the impairment of water bodies, like the federal Clean Water Act and California's Porter-Cologne Act [88].
To conceptualize and implement next-generation water infrastructure, stakeholders may need to go beyond their professional and institutional mandates and think creatively about how to develop rules, collaborations, and decision-making processes that support their vision. Additionally, regional, state, or federal policy indicating that multi-benefit water projects should take priority over single-purpose water systems when possible could help support the implementation of a new paradigm for water infrastructure. Regional enthusiasm for multi-benefit approaches in the Bay Area case may stem from the overall pro-environmental culture of the Bay Area, as shown by the recent passage of a bill to raise a Bay Area parcel tax to fund wetland restoration [89]. The same enthusiasm may not exist elsewhere. At a national level, green infrastructure approaches are championed by the Environmental Protection Agency [90] but may not be reflected in the perspectives of stakeholders in any particular locale.

Lessons for Planning and Implementing Multi-Benefit Infrastructure

Stakeholders pointed to the importance of having existing connections, trust, and communication channels in place between water managers, regulators, and ecological stewards that can be drawn upon in a decision-making context. These provide the foundation for the collaboration necessary for multi-benefit projects to succeed. The Regional Monitoring Program for Water Quality in the San Francisco Bay, a partnership between regulatory agencies and regulated utilities, has been important in this regard [91,92]. Regional monitoring also supports multi-benefit projects because it provides an integrated assessment of the Bay's ecology, as opposed to the more common site-specific monitoring for regulatory compliance. The holistic view provided by regional monitoring, which tracks natural variability as well as the cumulative impacts of human activity, also allows managers to prioritize regional management actions and goals [93,94]. Bay Area dischargers also collaborate on other aspects of regional environmental stewardship. Their relationship is formalized through an advocacy organization called the Bay Area Clean Water Agencies (BACWA), which provides a unified voice for local wastewater utilities in regulatory and scientific settings [94]. Additionally, regional regulatory permits for total maximum daily loads of polychlorinated biphenyls and mercury currently exist, and another for selenium is underway [91]. All of these require communication and collaboration among dischargers to meet the limits. When nutrients came to the forefront as a potential issue in the Bay, dischargers were able to use existing networks to coordinate their response. A wastewater treatment plant manager reported the importance of BACWA for organizing the formal nutrient stakeholder working group: "The (Nutrient Management Strategy group) was conceived, I think, of probably a few of us sitting around at BACWA just trying to figure out what's going in with nutrients . . . As we started to look and talk about it, we realized, for a number of reasons, this is way too big to take the typical approach." This collaborative approach exemplifies an important step in moving towards more sustainable water infrastructure: the development of a coalition of diverse actors who share a common vision and trigger institutional change [95].
The Bay Area's Nutrient Management Strategy is made up of a broad set of actors, including nutrient dischargers (e.g., wastewater treatment plant managers, stormwater managers, and industrial dischargers), environmental advocates, regulatory organizations, and resource trustee agencies (e.g., the Department of Fish and Wildlife) [44]. Another benefit of establishing these social networks is the possibility of collaboration between regulators and dischargers to support multi-benefit technologies. Traditional technologies are currently the simplest for regulators to permit because there is precedent for them and they fit neatly within institutional mandates. In contrast, multi-benefit technologies may challenge existing regulatory structures. For instance, constructed treatment wetlands may have seasonal variations in nutrient removal and may be subject to different rules concerning endangered species [74]. Open communication channels between technology developers, users, and regulators may help establish new policies and navigate the complexities of existing policies to facilitate the adoption of new multi-benefit technologies. Technological fixes are not the only potential solutions for nutrient control. Strong networks and partnerships between dischargers and agencies can also lay the groundwork for innovative strategies to manage nutrients, like trading credits for nutrient discharge within the estuary [96]. Critics of integrated water management and multi-benefit water infrastructure argue that the complexities of considering multiple goals in a single water infrastructure project are too difficult for one agency to master and that the hurdles of institutional collaboration are too great [97]. Yet the Bay Area nutrient management case shows that, even without formalized institutional collaboration, individuals with strong motivation for multi-benefit infrastructure have the capacity to muster the necessary communication and teamwork. These social networks underpin the "collaborative advocacy coalitions" that can change public policy [98] and sway planning for urban water systems into a mode that supports the development of multi-benefit infrastructure. However, broad-based collaborative governance is not easy, and stakeholders expressed concern that the Bay's Nutrient Management Strategy could fall apart if action on nutrients becomes imperative. One stakeholder said, "Things are going really amazingly well (with the Nutrient Management Strategy), yet it's very fragile. Inherently fragile. Just because there's billions of dollars, and there's interest, and all kinds of stuff at play." Our research shows that water managers and decision-makers in the San Francisco Bay Area case have addressed many of the barriers to sustainable urban water management identified in the literature, as summarized in the review by Brown et al. [25] (Table 5).

Table 5. Barriers to sustainable water infrastructure management, adapted from a review by Brown and Farrelly (2009) [25], and the San Francisco Bay approach, as identified in stakeholder interviews and document analysis.

Overcoming Impediments to Multi-Benefit Infrastructure Implementation

Despite strong interest in multi-benefit wastewater infrastructure for nutrient control, substantial impediments to its implementation exist in the San Francisco Bay Area. While previous literature has focused on socio-institutional barriers [22,[103][104][105][106], we also found several technical barriers.
In particular, technologies that require changes in consumer habits (e.g., urine source-separation) face substantial challenges because increased user responsibility could decrease technological reliability. Innovative multi-benefit wastewater systems could also be less reliable than traditional systems simply because there is less operational experience with them. To counteract the risk of lower reliability, stakeholders mentioned that it would be essential to develop wastewater technologies that are simple to implement and that can adapt to changing external conditions. These technologies could be deployed if riskier multi-benefit wastewater systems do not achieve the desired water quality effects. Additionally, regulatory structures to "pre-approve" adaptive technologies were identified as a useful way to hasten implementation. Further research is needed to develop nutrient control technologies that can be easily and quickly adapted to changing conditions such as population size, rising sea levels, or tightened regulations. Today's wastewater treatment systems are designed to be essentially 'out-of-sight, out-of-mind' for the public. Yet some stakeholders relayed the difficulties with this design: the public does not consider how wastewater is treated and is unwilling to invest in new infrastructure because it lacks awareness of the insufficiencies of existing infrastructure. Making wastewater treatment systems more visible to the public may inspire respect for the systems that turn sewage into clean water and may enable further investment in innovative, multi-benefit technology. European studies indicate that people are more open to new water technologies if they see the environmental benefit [87], but more research is required on this topic, especially in the United States. Many stakeholders also pointed out the lack of clear leadership as a barrier to planning and implementing multi-benefit infrastructure projects, and no strategies to address this emerged from interviews. Consolidation of decision-making (combining agencies that manage different aspects of water management and different wastewater treatment agencies) is unlikely to happen. In its absence, one solution may involve collective goal-setting or "value-focused thinking" [62,65], a useful tool for understanding and defining stakeholders' values and objectives. A leader would take this "visioning" step early in a planning process. In the absence of a single entity in charge, coming to agreement about collective goals (and clarity about disagreements) can help fill that gap. The formation of a new agency or workgroup to facilitate this process may be necessary. Finding measures to assess the fulfillment of these goals that are acceptable to stakeholders would also help clarify how to collectively judge the success of an infrastructure project. Identifying stakeholder goals for water infrastructure projects also sets standards for their assessment: multi-benefit water systems need to actually meet the goals in order to truly provide multiple benefits. For example, if a constructed wetland is used to control nutrients on the premise that it will also provide bird habitat and improved shoreline access, then these goals can provide additional guidelines and metrics for determining the success of the technology.

Conclusions

Development of multi-benefit wastewater infrastructure requires proactive approaches rather than reacting to acute regulatory demands for water quality improvement.
Many stakeholders in the San Francisco Bay Area involved with managing nutrients have taken this proactive approach. They view it as their professional responsibility not only to ensure good water quality in the Bay but also to develop infrastructure for nutrient control that provides additional benefits. These may concern resilience to sea level rise, creation of wetland habitat, or recovery of resources from wastewater. These views mirror a larger paradigm shift in wastewater infrastructure that envisions holistic systems going beyond the traditional goals of removing organic pollutants from wastewater. The methods presented in this paper are applicable for others planning nutrient management strategies and water infrastructure. More generally, the approaches are suitable for many environmental policy decisions: identifying a broad set of stakeholder goals (for long-term water infrastructure) and soliciting stakeholder perspectives on barriers to implementing these goals, as well as strategies for overcoming them, which is important in many decision contexts. Our proposed mixed-methods approach allows diverse insights to be included in decision-making processes. It incorporates stakeholders' knowledge of the system and honors their roles within it. It allows us to apply this knowledge to strategic management and to identify topics of disagreement and synergies to facilitate collaborative planning processes. These stakeholder perspectives are often implicitly assumed or overlooked in traditional water infrastructure planning processes, yet their inclusion is essential for developing multi-benefit water infrastructure. Specifically, Bay Area stakeholders' enthusiasm for a new paradigm of wastewater infrastructure has resulted in actions that support the planning and implementation of multi-benefit water infrastructure. They have begun to build coalitions among disparate water management agencies. They are forging new relationships and modes of decision-making to support their vision for multi-benefit wastewater infrastructure, even though they still face significant barriers. Many stakeholders are working beyond the scope of their institutional mandates, which do not represent many of their goals. The situation encountered in the San Francisco Bay is likely relevant for many other cases of planning for nutrient management and, more broadly, multi-benefit water infrastructure. The insights from this case may serve as a guideline, suggesting that the path for transitioning to a new paradigm of wastewater infrastructure includes the following:

• Creating a network of the disparate agencies, organizations, and researchers involved with regional water management, with strong communication channels and connections in place prior to decision-making.
• Articulating shared regional goals for water challenges and developing metrics for assessing their fulfillment.
• Creating policies to align institutional mandates with regional goals if they are not already aligned.

In addition, implementing an innovative, multi-benefit technology inherently carries more risk for the stakeholders involved. This risk can be mitigated by easy-to-implement, highly adaptable technologies that could be deployed should the need arise. Scientists and engineers can support the transition to multi-benefit wastewater infrastructure by pursuing the development of these types of technologies.
The Expansion of the Young Supernova Remnant 0509-68.7 (N103B)

We present a second epoch of Chandra observations of the Type Ia LMC SNR 0509-68.7 (N103B) obtained in 2017. When combined with the earlier observations from 1999, we have a 17.4-year baseline with which we can search for evidence of the remnant's expansion. Although the lack of strong point source detections makes absolute image alignment at the necessary accuracy impossible, we can measure the change in the diameter and the area of the remnant, and find that it has expanded at an average velocity of 4170 (2860, 5450) km s^-1. This supports the picture of this being a young remnant; this expansion velocity corresponds to an undecelerated age of 850 yr, making the real age somewhat younger, consistent with results from light echo studies. Previous infrared observations have revealed high densities in the western half of the remnant, likely from circumstellar material, so it is likely that the real expansion velocity is lower on that side of the remnant and higher on the eastern side. A similar scenario is seen in Kepler's SNR. N103B joins the rare class of Magellanic Cloud SNRs with measured proper motions.

INTRODUCTION

The supernova remnant (SNR) 0509-68.7 is one of the most luminous X-ray sources in the Large Magellanic Cloud (LMC), despite being only ∼30″ in diameter (about 3.6 pc at the distance of the LMC). ASCA observations of N103B were presented in Hughes et al. (1995), who first identified the remnant as the result of a Type Ia supernova (SN), a conclusion that has been confirmed by several authors (Lewis et al. 2003; Badenes et al. 2009; Lopez et al. 2011; Yang et al. 2013). A single light echo was detected by Rest et al. (2005), who derive an age of 860 yr for the remnant, broadly consistent with its small size (remnants of comparable size in the LMC are < 1000 yr old). Ghavamian et al. (2017) used integral field spectroscopy of the Balmer-dominated shocks to detect broad Hα emission having a width as high as 2350 km s^-1. They derive an age of 685 years, assuming the remnant is in the Sedov phase. Recently obtained light echo spectroscopy has shown that the spectrum of the light echo is consistent with an SN Ia origin (A. Rest, in preparation). In Williams et al. (2014, henceforth W14), we used Spitzer imaging and spectroscopy to show that the remnant appears to be interacting with dense circumstellar material (CSM) (n_0 ∼ 10 cm^-3), remarkably similar to the densities observed in Kepler's SNR by Williams et al. (2012). We suggested there that N103B is an older LMC cousin of Kepler's SNR, and thus far, these two remnants are the only two members of the class of Type Ia remnants interacting with dense CSM hundreds of years after the explosion. Li et al. (2017) present optical imaging and spectroscopy of N103B, concluding that the lack of emission in the eastern half of the remnant is caused by the asymmetric distribution of the CSM due to the high proper motion of the progenitor binary system toward the west. In this letter, we report a new epoch of X-ray observations with Chandra from 2017, which we use to measure the expansion of the remnant over a 17.4-year baseline. While X-ray proper motion measurements of Galactic remnants are commonplace, only a few remnants in the Magellanic Clouds have a high enough expansion velocity to allow for an expansion measurement.
One example of this is the young Type Ia remnant 0509-67.5, which has had expansion measurements reported in the literature in both optical (Hovey et al. 2015) and X-ray (Helder et al. 2010; Roper et al. 2018) wavelengths. As another example, Xi et al. (2018) have used Chandra observations of 1E0102.2-7219 in the Small Magellanic Cloud to observe the proper motions in that remnant. The paper is organized as follows. In Section 2, we detail the X-ray observations and data reduction, and the attempts to align the two epochs to a common coordinate system. In Section 3, we report the results of our measurements and discuss their interpretation. Section 4 serves as a summary of our findings. Throughout this paper, we convert our measurements made in image units to the more useful physical quantity of km s^-1, taking advantage of the known distance to the LMC of 50 kpc (Pietrzynski et al. 2013).

OBSERVATIONS

We conducted a new epoch of Chandra imaging observations of N103B in the spring of 2017, with a total of 400 ks spread over 12 separate observations between Mar 20 and Jun 1. We used the ACIS-S array for these observations, placing the remnant (only ∼30″ in diameter) close to the center of the optical axis of the telescope on the S3 chip, where Chandra's spatial resolution is best. The 1.7-7 keV image from these 2017 observations is shown in Figure 1. To measure the expansion of the remnant, we compared our 2017 observations with the earliest epoch: a 40 ks observation taken on 1999 Dec 4 (PI: G. Garmire). Our method for fitting the proper motion involves shifting one epoch with respect to another, accounting for the uncertainties in each epoch. This technique is described more fully in Williams et al. (2017); briefly, we extract brightness profiles from the image in units of counts, with the square root of the number of counts as the uncertainty on each pixel. One epoch, generally the second, is used as the "reference" epoch, with the other epoch shifted until the total χ² value is minimized. The effect we are measuring is quite small (sub-arcsecond; see Section 3), so we took care to minimize or eliminate any potential sources of systematic uncertainty in our measurements. First, we opted not to combine the data from our 12 observations into a single event file, as this could create biases in the resulting FITS files at the sub-pixel level which would be difficult, if not impossible, to quantify. For our "reference" frame, we used the deepest single observation from our 2017 observations: a 60 ks observation (Obs ID 19923) begun on Apr 26. These data, along with the 1999 data (which combine for a time baseline of 17.4 yr), were both processed in identical fashion with the chandra_repro script in CIAO version 4.9 (using version 4.7.3 of the CalDB).

Image Registration and Alignment

The standard methods for aligning images in the world coordinate system (WCS) for a proper motion measurement involve either registering point sources detected in the image with known sources from external catalogs or aligning on common point sources within the images from each epoch, allowing for at least a relative alignment. While the former method is obviously preferred, X-ray analysis often relies on the latter, due to the relative paucity of point sources in the X-ray band with known optical counterparts. N103B is an LMC remnant; however, its location (R.A. = 05h 08m 59s, Decl. = -68° 43′ 34″, J2000)
is well outside the main bar of the LMC, and unfortunately, the number of point sources in the field of view is quite low. We restricted ourselves to point sources on the S3 chip (within 4′ of the remnant), because Chandra's point-spread function (PSF) degrades quickly as a function of off-axis angle. To search for point sources in the event files from the two epochs, we first used the CIAO task wavdetect, as recommended by the Chandra X-ray Center. This task "found" a few dozen point sources in the image, but most were false positives. A relatively simple search by eye confirmed that only five of these sources were real and detected in both epochs. Using these five sources as input, we created a transformation matrix file using the wcs_match task, then used wcs_update to align the 1999 epoch 1 image to the 2017 epoch 2 image. Unfortunately, the results of this alignment were not accurate enough for a robust measurement. When we attempted to measure the proper motion of the leading edge of the emission (presumably the shock front), our results varied substantially, with results approaching 15,000 km s^-1 in some places and negative 5000 km s^-1 in others! We re-did the alignment using another CIAO tool, srcextent, but the results were similarly wildly varying depending on location within the remnant. Upon further inspection, we concluded that most of the point sources (all but one) used for alignment in both the wavdetect and srcextent methods do not have a strong enough detection for these algorithms to fit a PSF and determine an accurate location. As an example of what we mean by this, we show one of our sources in Figure 2. That this source is real is unquestionable, and it appears in nearly the same location in both epochs. But Figure 2 shows why any localization algorithms would not be able to report a location to the accuracy that we require. This source contains only about 15-25 photons total, depending on the epoch, nowhere near enough counts to get a two-dimensional centroid accurate to the sub-arcsecond level. For example, using equation 13 from Kim et al. (2007), we find positional uncertainties in the source shown in Figure 2 of 0.44″ and 0.35″ in the first and second epoch, respectively. The accuracy of the location is of utmost importance here, since the signal we are searching for is so small. For reference, even with Chandra and a time baseline of 17.4 years, a 5000 km s^-1 blast wave at a distance of 50 kpc would move only ∼0.37″ during that time, or about 3/4 of a Chandra pixel. We explored other options as well. There are a few knots of emission near the center of the remnant that could, in principle, serve as markers for alignment. However, not only are these sources relatively diffuse (several arcseconds in extent), there is also no way of knowing that these knots do not have their own motion or slightly varying surface brightness profiles over the 17.4 years of evolution of the remnant. We also opted not to use CIAO's sub-pixelization algorithms. There is too much potential for the introduction of a significant systematic uncertainty by reducing the pixel size on the same length scale as the signal we are trying to measure. Ultimately, the uncertainties involved in obtaining image registration down to a fraction of a pixel led us to focus on measuring one thing that doesn't require knowledge of the WCS registration: the remnant's diameter along various axes.
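The angular-shift arithmetic quoted above is easy to reproduce. The short Python sketch below, using the 50 kpc LMC distance assumed throughout this paper and the nominal 0.492″ ACIS pixel scale, converts a shock velocity into the expected angular displacement over the 17.4-year baseline; the specific numbers are illustrative only.

```python
PC_IN_KM = 3.0857e13        # kilometers per parsec
ARCSEC_PER_RAD = 206265.0   # arcseconds per radian
YEAR_IN_S = 3.156e7         # seconds per year
PIXEL_ARCSEC = 0.492        # nominal ACIS pixel scale (assumed)

def shift_arcsec(v_kms, baseline_yr=17.4, distance_pc=5.0e4):
    """Angular displacement (arcsec) of a feature moving at v_kms."""
    dist_km = v_kms * baseline_yr * YEAR_IN_S          # linear distance traveled
    return dist_km / (distance_pc * PC_IN_KM) * ARCSEC_PER_RAD

shift = shift_arcsec(5000.0)                            # the 5000 km/s example
print(f"{shift:.2f} arcsec = {shift / PIXEL_ARCSEC:.2f} ACIS pixels")
# -> ~0.37 arcsec, ~0.75 pixels, matching the text
```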
As we show below, we chose to measure the diameter of N103B in four directions (see Figure 1), effectively forming two orthogonal cardinal coordinate systems. Before making these measurements, we made one final filtering of the data. The ACIS array has suffered significant degradation at low energies due to contaminant buildup since launch. At low energies, the difference between the effective area in 1999 and 2017 is quite significant. Thus, we only considered counts at energies above 1.7 keV (up to 7 keV), high enough to ensure a nearly similar effective area to the 1999 observations while still capturing the strong ∼1.8 keV Si Kα line.

MEASUREMENTS AND DISCUSSION

We measured the diameter of the remnant using the radial profiles (obtained by using projection regions in ds9) along four "diameter" regions, shown in Figure 1. These regions sample the entire brightness profile of the remnant along diameters covering position angles 0°-180°, 45°-225°, 90°-270°, and 135°-315°. We made no attempt to scientifically define a center of the remnant, since the site of the explosion is unknown. We simply drew the diameter regions to run through the geometric center of the circular structure of N103B. To obtain enough signal for a robust profile, each region is 10 pixels wide, or ∼5″. The normalized brightness profiles extracted from each diameter in the two epochs are shown in Figure 3 (caption: the grey shaded regions mark the regions in which the fits were performed; the normalization was adjusted for each shaded region; the profiles run north-to-south for region 1 and east-west, left to right, for regions 2-4). We are not concerned with the small changes in internal structure, only the change in the diameter of the remnant as marked by the sharp rise of the shock front. To this end, we measure the shift in the shock front on the left side and the right side of the profiles separately (these "fit" regions are marked by the shaded grey areas in Figure 3). The relative normalization of the peak of emission is tailored to each of these regions. Taking into account the uncertainty on each profile data point (not shown in the Figures for display purposes), we shift epoch 1 with respect to epoch 2 on a fine grid of 0.0048″ resolution elements (∼0.01 Chandra pixels). The total expansion velocity in each region is simply reported as the average of the two values we measure, one from each side of the remnant. As can be seen from the Figures, the rise in the brightness profiles marking the edge of the shock front is generally fairly consistent between epochs. A few exceptions exist, such as the left (east) sides of regions 3 and 4. Nonetheless, those were included in the fitting procedure, and simply resulted in increased uncertainties in those regions. In a few places, such as the left side of region 1, epoch 2 is interior to epoch 1, leading to a negative shock velocity, almost certainly due to a coordinate registration error, as discussed above. However, the strength of this fitting procedure is that the change of the diameter of the remnant does not depend on this. For example, in the case of diameter region 1, we "measure" a shock velocity of -4,070 km s^-1 for the left (north) side of the emission, and an incredible (and almost certainly unphysical) 14,840 km s^-1 for the right (south) side. However, the average of these two is 5,360 km s^-1, the value reported in Table 1, which should be robust even in light of uncertainties in the image registration.
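For readers who want to experiment with this kind of measurement, the sketch below outlines the profile-shift fit described above: a 1D brightness profile from epoch 1 is shifted on a fine sub-pixel grid and compared to the epoch 2 profile via χ², with Poisson (sqrt(N)) errors on each bin and a Δχ² = 2.706 interval for the 90% confidence limits. The array contents and grid limits are illustrative placeholders, not the actual N103B profiles.

```python
import numpy as np

def chi2_shift(x, prof1, prof2, err2, shifts):
    """Return the shift minimizing chi^2 between shifted prof1 and prof2.

    x      : common pixel grid of the profiles
    prof1  : epoch-1 brightness profile (counts)
    prof2  : epoch-2 ("reference") profile (counts)
    err2   : per-bin uncertainties, e.g. np.sqrt(prof2)
    shifts : trial shifts in pixels (e.g. 0.01-pixel steps)
    """
    chi2 = np.empty_like(shifts)
    for i, s in enumerate(shifts):
        shifted = np.interp(x, x + s, prof1)      # linearly interpolate epoch 1
        chi2[i] = np.sum(((shifted - prof2) / err2) ** 2)
    best = shifts[np.argmin(chi2)]
    # 90% confidence interval for one parameter: delta(chi^2) = 2.706
    ok = shifts[chi2 <= chi2.min() + 2.706]
    return best, ok.min(), ok.max()

# Illustrative use with mock profiles:
x = np.arange(100, dtype=float)
prof2 = 50 * np.exp(-0.5 * ((x - 60) / 3) ** 2) + 5   # mock epoch-2 shock edge
prof1 = np.interp(x, x - 0.75, prof2)                 # epoch 1, 0.75 px inward
best, lo, hi = chi2_shift(x, prof1, prof2, np.sqrt(prof2),
                          np.arange(-2, 2, 0.01))
print(best, lo, hi)   # recovers a ~0.75-pixel shift
```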
For the uncertainties, we measure both statistical and systematic error terms. The statistical uncertainties come from the fits themselves: we report the best fit as the shift at which the value of χ² is minimized, and take as the 90% confidence limits the value of the shift in each direction where χ² has risen by 2.706. For the systematic uncertainties, the typically reported values, such as those in Katsuda et al. (2013), of the registration uncertainties in aligning the images are irrelevant for our purposes. Instead, we found that varying the choice of fit region for each shock front, marked by the shaded grey areas, resulted in slightly different values for the shock velocity. These errors were generally small, usually a few tens of km s^-1, and are dwarfed by the statistical uncertainties on the fit. Nonetheless, we include both uncertainties, added linearly, in our results reported in Table 1. Our values for the average expansion velocity range from 2,990 to 5,360 km s^-1, with a mean expansion velocity of 4,170 km s^-1 and lower and upper limits of 2,860 and 5,450, respectively. As a final "sanity check" on the expansion of N103B, we conducted an entirely different and independent experiment. We drew a single contour in ds9 around the remnant in both epochs. Since the vast majority of background pixels in a given Chandra observation have zero counts, a single contour defining the edge of the emission from the remnant (and some small amount of leakage resulting from the wings of the PSF) is quite easy to define, simply by defining a contour level of "1." The contours from both epochs are shown in Figure 4. We converted these contours to region files and measured the number of pixels contained within each. In the 1999 epoch, this contour contained 2,846 pixels, while in 2017 the remnant occupied 2,983 pixels. Since the area of the remnant increases as the square of the radius, this means that, on average, the radius of the remnant has increased by 2.37%, or about 0.36″. This corresponds to a shock velocity of 4,810 km s^-1. This is somewhat higher than our average, reported above, but well within the uncertainties, confirming the expansion between the two epochs. A high shock velocity confirms N103B's status as a young SNR, as reported by Rest et al. (2005) and Lewis et al. (2003). An expansion velocity of 4,170 km s^-1 implies an undecelerated age for N103B of 850 yr, making the real age somewhat lower.
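Both of these numbers can be checked with a few lines of arithmetic. The sketch below reproduces the contour-area expansion estimate and the undecelerated age, again assuming the nominal 0.492″ ACIS pixel scale and the 50 kpc distance used throughout the paper, and reading the 3.6 pc scale quoted in the introduction as the remnant's radius (which is what the 850 yr figure implies); these are assumptions for illustration, not values taken from the paper's tables.

```python
import math

PC_IN_KM, YR_IN_S, ARCSEC_PER_RAD = 3.0857e13, 3.156e7, 206265.0
PIX_ARCSEC, DIST_PC, BASELINE_YR = 0.492, 5.0e4, 17.4

# Contour-area check: enclosed area scales as the radius squared.
area_1999, area_2017 = 2846.0, 2983.0                  # pixels inside each contour
growth = math.sqrt(area_2017 / area_1999) - 1.0        # fractional radius change
r_arcsec = math.sqrt(area_1999 / math.pi) * PIX_ARCSEC # equivalent-circle radius
dr_arcsec = growth * r_arcsec                          # ~0.35-0.36 arcsec
v_kms = (dr_arcsec / ARCSEC_PER_RAD) * DIST_PC * PC_IN_KM / (BASELINE_YR * YR_IN_S)
print(f"radius growth {growth:.2%}, dr = {dr_arcsec:.2f} arcsec, v = {v_kms:.0f} km/s")

# Undecelerated age t = R / v, with R ~ 3.6 pc and v = 4170 km/s.
age_yr = 3.6 * PC_IN_KM / 4170.0 / YR_IN_S
print(f"undecelerated age ~ {age_yr:.0f} yr")          # ~840-850 yr
```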
It is somewhat surprising to find such a high shock velocity, given the high densities reported in W14. The most obvious caveat here is that by only measuring the change in the remnant's diameter, we cannot know if one side of the remnant is expanding faster than the other. The high densities reported in W14 came from the western half of N103B, the same half in which Ghavamian et al. (2017) reported lower shock velocities from the broad Hα component. However, these Balmer line filaments are quite faint, and it is likely that only shocks in denser regions are spectroscopically detectable through their broad component. X-ray emission above 1.7 keV is dominated by intermediate-mass elements, particularly Si and S. If these lines result from the ejecta, their velocity might be different from the blast wave velocity, complicating the results. Thus, to ensure an "apples-to-apples" comparison between the shock velocity implied by the broad Hα width and the velocity measured by proper motion, we would need to measure the optical proper motion of the Hα filaments used, requiring a second epoch of optical imaging. Optical proper motion measurements could also in principle provide a measure of the deceleration parameter, θ, further constraining the age of the remnant.

Table 1. Expansion velocity along each diameter region (km s^-1):
Region 1: mean v_s 5360 (lower 4080, upper 6570)
Region 2: mean v_s 4070 (lower 3140, upper 4940)
Region 3: mean v_s 4280 (lower 3320, upper 5250)
Region 4: mean v_s 2990 (lower 910, upper 5030)
Average: 4170 (lower 2860, upper 5450)
NOTE. Mean v_s is the total expansion of the shock front along each diameter region, divided by two, as described in the text. Distance is assumed to be 50 kpc. The lower and upper limits include both statistical and systematic uncertainties, as described in the text. All results are reported to the nearest 10 km s^-1.

A comparison can be drawn here between N103B and Kepler's SNR, where the forward shock speeds are found to vary by about a factor of two between the north and south rims (Vink et al. 2008). If the same velocity ratio is present here (in an east-west direction), that would lead to a shock velocity of ∼2800 km s^-1 in the west (much closer to the speeds seen in Ghavamian et al. (2017)) and ∼5500 km s^-1 in the east. Such high shock velocities in N103B may imply a nonthermal synchrotron component in the X-ray spectrum, as is seen in Kepler. In a follow-up paper on detailed X-ray spectroscopy, we will explore the evidence for this component.

CONCLUSIONS

We re-observed the bright LMC SNR N103B with Chandra in 2017, 17.4 years after it was first observed in 1999, with the goal of measuring the expansion of the remnant. The lack of strong detections of point sources in the field of view made absolute alignment of the two epochs impossible, but we were able to measure the change in the diameter of the remnant in four different directions, yielding an average expansion of the shock front of just under 4200 km s^-1. This further supports the view that this remnant is young. The undecelerated age is 850 yr, but since some deceleration has almost certainly occurred, the real age is younger than this, entirely consistent with the estimates from light echo studies (Rest et al. 2005). We encourage future monitoring of this object at all wavelengths, particularly X-ray and optical, where high resolution observations can continue to refine measurements of the expansion. We thank the anonymous referee for providing valuable comments which improved the manuscript. Support for this work was provided by the National Aeronautics and Space Administration through Chandra Award Number G06-17064 issued by the Chandra X-ray Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of the National Aeronautics Space Administration under contract NAS8-03060. PFW acknowledges additional support from NSF, through grant AST-1714281. IRS acknowledges support from the Australian Research Council Grant FT160100028.
The effects of human pregnancy-specific β1-glycoprotein preparation on Th17 polarization of CD4+ cells and their cytokine profile

Background: Pregnancy-specific β1-glycoproteins are capable of regulating innate and adaptive immunity, exerting predominantly suppressive effects. In this regard, they are of interest in terms of their pharmacological potential for the treatment of autoimmune diseases and post-transplant complications. The effect of these proteins on the main pro-inflammatory subpopulation of T lymphocytes, IL-17-producing helper T cells (Th17), has not been comprehensively studied. Therefore, the effects of native pregnancy-specific β1-glycoprotein on the proliferation, Th17 polarization and cytokine profile of human CD4+ cells were assessed.

Results: Native human pregnancy-specific β1-glycoprotein (PSG) at a concentration of 100 μg/mL was shown to decrease the frequency of Th17 (RORγτ+) cells in CD4+ cell culture and to suppress the proliferation of these cells (RORγτ+Ki-67+), along with the proliferation of other cells (Ki-67+) (n = 11). A PSG concentration of 10 μg/mL showed a similar effect, decreasing the frequency of Ki-67+ and RORγτ+Ki-67+ cells. Using Luminex xMAP technology, it was shown that PSG decreased IL-4, IL-5, IL-8, IL-12, IL-13, IL-17, MIP-1β, IL-10, IFN-γ, TNF-α, G-CSF, and GM-CSF concentrations in Th17-polarized CD4+ cell cultures but did not affect IL-2, IL-7, and MCP-1 output.

Conclusions: In the experimental model used, PSG had a mainly suppressive effect on the Th17 polarization and cytokine profile of Th17-polarized CD4+ cell cultures. As Th17 activity and a pro-inflammatory cytokine background are unfavorable during pregnancy, the observed PSG effects may play a fetoprotective role in vivo.

Supplementary Information: Supplementary information accompanies this paper at 10.1186/s12865-020-00385-6.

Immunomodulatory effects of these proteins, such as the activation of TGF-β [10], stimulation of FOXP3 expression by T cells [11] and IDO expression by monocytes [12], suppression of pro-inflammatory cytokine production by intact mononuclear cells [13], and modulation of the functional activity of naive and memory T cells [14], demonstrate their important role in the formation of fetomaternal tolerance. Among other factors, the ratio of anti-inflammatory and pro-inflammatory lymphocyte subsets (Treg and Th17) is essential for successful fetal development. During a healthy pregnancy, the Treg/Th17 ratio shifts towards Treg cells [15], and a decrease in Tregs and/or an increase in the Th17 percentage accompanies pregnancy disorders such as preeclampsia, preterm birth, miscarriage, and unexplained recurrent pregnancy loss [16]. Even in the case of a relatively successful pregnancy, an increased pro-inflammatory background may cause deviations from the typical development of the nervous system, which lead to an increased risk of neuropsychiatric disorders [17-19]. Failed immune regulation of Th17 leads not only to pregnancy complications but also to the development of autoimmune diseases such as asthma, psoriasis, rheumatoid arthritis, Crohn's disease, multiple sclerosis, and others [22,23]. Interestingly, progression of Th2-type autoimmune diseases has been observed during pregnancy, while Th1/Th17-type autoimmune diseases underwent remission [24]. These data confirm that Th17-mediated immune responses are undesirable during pregnancy and are suppressed by factors that are generated during gestation.
Therefore, the objective of this study was to elucidate the effects of PSGs on the Th17 polarization of CD4+ cells and their cytokine production.

Results and discussion

The effect of PSG on the Th17 polarization of CD4+ cells

Isolated CD4+ T cells were cultivated for 72 h in the presence of native PSG, TCR-activator, IL-1β, and IL-6, with subsequent assessment of proliferation using flow cytometry, detection of RORγτ (retinoic acid receptor (RAR)-related orphan receptor gamma) and Ki-67 expression, and evaluation of different cytokine concentrations in culture supernatants (Fig. 1). We used concentrations of PSG (1, 10, and 100 μg/mL) that correspond to the first, first-second, and second-third trimesters of pregnancy, respectively [6,25]. When estimating the effect of PSG on the proliferation of CD4+ T cells that accompanied their Th17 polarization, it was established that the addition of PSG into CD4+ cultures at concentrations of 10 and 100 μg/mL dose-dependently changed the percentages of proliferating and non-proliferating cells (Table 1). According to the differential gating method, PSG concentrations of 10 μg/mL and 100 μg/mL decreased the percentage of proliferating and increased the percentage of non-proliferating helper T cells. PSG did not affect the percentage of dead and apoptotic cells. Thus, PSG, in our study, primarily suppressed the proliferation of CD4+ T cells in Th17-polarizing conditions (Table 1). The antiproliferative effect of PSG was previously known [8,26], but there was no evidence of the effect of this placental protein hormone on the pro-inflammatory Th17 subpopulation. In our previous study, the general inhibitory effect of PSG on CD4+ T cell proliferation was shown [27], but we have now confirmed this effect for RORγτ+ Th17. This effect is probably due to a single mechanism of latent TGF-β1 activation for all lymphocytes [28,29]. When studying the effect of PSG on Th17 differentiation, it was found that PSG at a concentration corresponding to the last trimester of pregnancy (100 μg/mL) significantly decreased the number of CD4+ lymphocytes expressing the RORγτ transcription factor, the main intranuclear marker of Th17 cells [30] (Fig. 2). The differentiation processes of Th17 cells are accompanied by the active proliferation of these cells [31]. Therefore, we evaluated the level of intracellular expression of Ki-67, which appears in actively proliferating cells [32,33]. Thus, PSG regulates both the proliferation and differentiation of CD4+ cells under Th17-polarizing conditions, exerting a mainly inhibitory effect.

The influence of PSG on the cytokine profile of Th17-polarized CD4+ cells

When analyzing the cytokine profile of Th17-polarized CD4+ cell cultures, we found that PSG at concentrations of 10 and 100 μg/mL decreased the production of IL-10, IFN-γ, MIP-1β, and TNF-α (Table 2). In some cases, an inhibitory effect of only a high concentration of PSG on the production of cytokines was found. PSG at a concentration of 100 μg/mL suppressed the production of IL-4, IL-5, IL-8, IL-12, IL-13, IL-17, G-CSF, and GM-CSF (Table 2). Among the abovementioned cytokines, the pregnancy protein had the most pronounced inhibitory effect on the secretion of IL-17 (2.64-fold) and GM-CSF (2.74-fold).
Reduction in IL-17 output in the 100 μg/mL PSG CD4+ culture correlates with a decrease in the percentage of proliferating RORγτ+Ki-67+ cells and inversely correlates with an increase in the percentage of non-proliferating RORγτ+Ki-67- cells, and thus may be directly associated with the antiproliferative effect of PSG.

(Table 1 note: M, arithmetic mean; SD, standard deviation; P, P-value; control, the CD4+ cell culture without PSG but with TCR-activator, IL-1β, and IL-6. Bold P values indicate significant differences compared with the control by one-way ANOVA with Dunnett's multiple comparisons test.)

In addition to IL-17, their major effector cytokine, Th17 cells can produce IL-21, IL-22, TNF-α, IFN-γ, and GM-CSF. This impressive arsenal helps cells cope with extracellular and intracellular bacteria; ensure protective immunity to Mycobacterium tuberculosis, Chlamydia trachomatis, fungi, and viruses; protect mucosal homeostasis; and enhance the neutrophil response. Conversely, it is involved in inflammation and several autoimmune diseases [34]. IL-17 has been shown to activate the nuclear factor (NF)-κB downstream signaling pathway, which results in the expression of pro-inflammatory cytokine genes, such as TNF-α, IL-1, IL-6, G-CSF, and GM-CSF; the chemokines CXCL1, CXCL5, IL-8, CCL2, and CCL7; the matrix metalloproteinases MMP1, MMP3, MMP9, and MMP13; and the antimicrobial peptides defensins and S100 proteins [35]. Since IL-17 is a decisive factor triggering inflammatory reactions, it is quite logical that, in our study, a decrease in its concentration in 100 μg/mL PSG cultures correlated with a decrease in TNF-α (r = 0.78, P = 0.003) and IFN-γ (r = 0.73, P = 0.007). TNF-α, like other IL-17-induced pro-inflammatory cytokines, is highly undesirable during pregnancy, as it can cause pregnancy complications such as recurrent miscarriage, premature rupture of fetal membranes, preeclampsia and intrauterine fetal growth retardation [36]. In our study, PSG at a concentration of 100 μg/mL inhibited the production of G-CSF and GM-CSF by CD4+ cells under Th17-polarizing conditions (Table 2). These hematopoietic colony-stimulating factors are necessary for the onset and progression of pregnancy, but above all, locally. Other pregnancy proteins may stimulate their synthesis in vivo. In particular, it is known that the expression of GM-CSF is triggered by chorionic gonadotropin [37], the concentration (and therefore the dose-dependent effect) of which is significantly higher in the first trimester of pregnancy than that of PSG. In addition to inhibiting the "classic" pro-inflammatory cytokines and chemokines, PSG, in our study, also decreased the production of anti-inflammatory cytokines such as IL-4, IL-10, and IL-13. The decrease in IL-10 output in the 10 μg/mL PSG culture was inversely correlated with the percentage of RORγτ+ cells. This result is logical since the main producers of IL-10 are Treg, Th2, and effector Th1 cells [34,38]. The presence of these cells in the cultures is very likely since we Th17-polarized not only naive cells but all CD4+ T cells. The cultures initially contained a mixture of different T helper subsets, including Th1, Treg, and Th17, which are known to have plasticity and can transdifferentiate from one subpopulation to another, depending on the cytokine environment [39,40]. Most likely, the main portion of helper T cells polarized into RORγτ+ Th17 due to the cytokine background (Fig. 2); however, some of the CD4+ T cells "remained true" to their original phenotype.
Interestingly, although the concentration of IL-4 in cultures of Th17-polarized CD4+ cells was initially low in the control (Me = 11.7 pg/mL), in the 100 μg/mL PSG culture there was a significant decrease in the production of this cytokine (Me = 10.46 pg/mL), which correlated (r = 0.67, P = 0.014) with a reduction in the percentage of Ki-67+ and RORγτ+Ki-67+ cells. Perhaps, due to the plasticity of T helper phenotypes, a small percentage of transitional cell subsets co-expressing, along with IL-17, a variety of other cytokines, including IFN-γ, IL-10, and IL-4, is formed in the pro-inflammatory cytokine environment, as occurs in autoimmune diseases [40]. Regarding the suppression of cytokine production, our previous studies have already shown a simultaneous decrease in the production of both pro-inflammatory (IFN-γ) and anti-inflammatory (IL-4) cytokines [27] by CD4+ T cells under the influence of PSG. This effect can probably be associated with the "universal" antiproliferative effect of this pregnancy hormone since, in this study, the percentage of Ki-67+ (proliferating) cells correlated with the decrease in the concentrations of several cytokines, including IL-4, IL-17, GM-CSF, IFN-γ, and TNF-α, and the chemokine MIP-1β [28,29,41].

Conclusions

Thus, in the experimental model used, PSG had a pronounced suppressive effect on the proliferation and Th17 polarization of CD4+ T cells and on their cytokine/chemokine production. As Th17 activity and an increase in pro-inflammatory cytokine production are unfavorable during pregnancy, the revealed PSG effects may play a fetoprotective role in vivo.

Methods

Peripheral blood mononuclear cells (PBMCs) were isolated by density gradient centrifugation (Diacoll 1077, Dia-m, Russia, ρ = 1.077 g/cm^3). The maximum time between collection of a blood sample and separation in a density gradient was 30 min.

(Table note: medians (Me (Q1-Q3)) are presented; control, the CD4+ cell culture without PSG but with TCR-activator, IL-1β, and IL-6; *, significant differences (P < 0.05) compared with the control by the Friedman test with Dunn's multiple comparisons test.)

Native PSG isolation

PSG was purified from the blood of healthy pregnant women according to the technique developed by Mikhail Rayev [42] and described in detail elsewhere [27]. The PSG preparation is a mixture of the following proteins: PSG1, PSG3, PSG7, PSG9, and some of their isoforms and precursors.

Isolation and cultivation of CD4+ cells

Cultures without PSG served as a control. The viability of the cells after incubation, evaluated using 0.4% trypan blue (Invitrogen, USA), was 95-98%. PSG did not affect either the cell number or the viability of cells. Th17 polarization resulted in a significant increase in the percentage of proliferating CD4+ cells (Me (Q1-Q3), from 11.32 (9.32-15.63) to 42.33 (34.03-50.14)), accompanied by a simultaneous decrease in the percentage of non-proliferating cells (from 83.56 (77.85-97.53) to 47.84 (39.47-53.57)), while the level of apoptotic cells did not change. In general, this result indicates an adequate activation of CD4+ cells in the present experimental model. The obtained data are consistent with similar experiments, where we studied the proliferation of TCR-activated helper T cells in the presence of IL-2 [44]. The independent effect of Th17 polarization consisted of a significant (more than 60-fold) increase in the number of cells expressing Ki-67 (Additional file 1, Table S1).
The effect of Th17 polarization on the cytokine/chemokine profile of CD4+ cells was expressed as a reliable increase in IL-2, IL-4, IL-5, IL-7, IL-10, IL-17, G-CSF, GM-CSF, IFN-γ, MIP-1β, and TNF-α output. The most pronounced increase was observed in the TNF-α and IL-17 concentrations, thus confirming the predominance of IL-17-producing cells in culture and, in general, the successful formation of a pro-inflammatory background (Additional file 2, Fig. S1).

Flow cytometry

The main transcription factor of Th17 cells is RORγτ [26]. Therefore, after 72 h of cultivation, we determined the frequency of Th17 cells as the percentage of RORγτ+CD4+ cells. To evaluate the percentage of proliferating Th17 cells, we assessed the level of intracellular Ki-67 protein. Sample preparation for intracellular/intranuclear staining was performed with a FOXP3 Fix/Perm Buffer Set (Biolegend, USA). Stained samples were analyzed by running a two-color flow cytometry assay with a CytoFLEX S (Beckman Coulter, USA). The antibodies used were anti-RORg(t)-PE, human and mouse, and anti-Ki-67-PerCP-Vio700™, human and mouse (both Miltenyi Biotec, Germany). The frequencies of RORγτ+, Ki-67+, RORγτ+Ki-67+, and RORγτ+Ki-67- cells were assessed. The data are presented as percentages of RORγτ+, Ki-67+, RORγτ+Ki-67+, and RORγτ+Ki-67- cells from all events in the gate of living CD4+ lymphocytes established according to FSC and SSC properties. The threshold between positive and negative cells was determined using fluorescence minus one (FMO) controls. Flow cytometry data were analyzed using CytExpert 2.0 software (Beckman Coulter, USA).

Proliferation analysis

An author-modified differential gating method [45] was used to determine the proliferative status of cells. In our study, in contrast to the above-referenced method, we did not calculate the absolute but rather the relative number of proliferating cells. That is, we calculated the percentage of cells in each gate (proliferating, non-proliferating, and apoptotic) from all cells in the three gates [27,46]. Data were acquired on a CytoFLEX S Flow Cytometer and analyzed in CytExpert 2.0 software (Beckman Coulter, USA).

Cytokine analysis

Procedures were performed according to the "Bio-Plex Pro™ Human Cytokine 27-plex Assay" and "Bio-Plex Pro™ Human Th17 Cytokine Panel 15-Plex" protocols. The results were recorded using a Bio-Plex automatic microplate photometer and Bio-Plex Manager software (Bio-Plex® 200 Systems, Bio-Rad, USA). The cytokine concentrations were determined from a calibration curve for each cytokine (dynamic range from 2 to 32,000 pg/mL) according to the manufacturer's recommendations.

Statistics

The statistical data analysis was performed with GraphPad Prism 6 using one-way ANOVA (proliferation, differential gating) and the paired Friedman test with Dunn's multiple comparisons test (RORγτ and Ki-67, cytokines). Data are presented as arithmetic means and standard deviations (M ± SD) and medians with first and third quartile values (Me (Q1-Q3)), respectively. Differences were considered significant at P < 0.05. In some cases, the Spearman correlation coefficient was calculated.
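As a rough illustration of the statistical workflow described above, the hedged Python sketch below runs a Friedman test across paired culture conditions and a Dunnett-style many-to-one comparison against the control. The data array is a made-up placeholder, not the study's measurements; scipy.stats.dunnett requires SciPy >= 1.11, and note that Dunn's post hoc test after Friedman (as used in the paper) is not in SciPy itself (the scikit-posthocs package provides it).

```python
import numpy as np
from scipy import stats

# Placeholder data: 11 donors; columns = control, 1, 10, 100 ug/mL PSG.
rng = np.random.default_rng(0)
control = rng.normal(100, 10, 11)
psg_1 = control * rng.normal(0.98, 0.05, 11)
psg_10 = control * rng.normal(0.85, 0.05, 11)
psg_100 = control * rng.normal(0.60, 0.05, 11)

# Paired, non-parametric comparison across all conditions (cytokine-type data).
stat, p = stats.friedmanchisquare(control, psg_1, psg_10, psg_100)
print(f"Friedman chi2 = {stat:.2f}, P = {p:.4f}")

# Parametric many-to-one comparison vs. control (proliferation-type data).
res = stats.dunnett(psg_1, psg_10, psg_100, control=control)
print("Dunnett P-values vs. control:", np.round(res.pvalue, 4))
```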
Additional file 1: Table S1. The effect of Th17 polarization on RORγτ and Ki-67 expression in CD4+ T cells. Medians and interquartile ranges of cell percentages from all live CD4+ T cells are presented (Me (Q1-Q3)). Control, CD4+ cell culture with CM only; *, significant differences (P < 0.05) compared with the control by one-way ANOVA with Dunnett's multiple comparisons test; n = 11.

Additional file 2: Fig. S1. Cytokine profile of Th17-polarized CD4+ cell culture. Medians of cytokine concentrations in the control CD4+ culture (with TCR-activator, IL-1β, and IL-6) are presented; n = 11. The box shows the interquartile range (Q1-Q3), the band inside the box is the median (Me), and the ends of the whiskers represent the minimum and maximum of all the data.
Non-destructive species identification of Drosophila obscura and D. subobscura (Diptera) using near-infrared spectroscopy

The vinegar flies Drosophila subobscura and D. obscura frequently serve as study organisms for evolutionary biology. Their high morphological similarity renders traditional species determination difficult, especially when living specimens for setting up laboratory populations need to be identified. Here we test the usefulness of cuticular chemical profiles collected via the non-invasive method near-infrared spectroscopy for discriminating live individuals of the two species. We find a classification success for wild-caught specimens of 85%. The species specificity of the chemical profiles persists in laboratory offspring (87-92% success). Thus, we conclude that the cuticular chemistry is genetically determined, despite changes in the cuticular fingerprints, which we interpret as due to laboratory adaptation, genetic drift and/or diet changes. However, because of these changes, laboratory-reared specimens should not be used to predict the species-membership of wild-caught individuals, and vice versa. Finally, we demonstrate that by applying an appropriate cut-off value for interpreting the prediction values, the classification success can be immensely improved (to up to 99%), albeit at the cost of excluding a considerable portion of specimens from identification.

Introduction

Drosophila obscura and D. subobscura (Diptera: Drosophilidae) are closely related species of the D. obscura group, 1 with a wide distribution in the Palaearctic. Both are generalists and co-occur broadly in the colline and alpine zone. 2 They are frequently used species in evolutionary-biological studies (for review see refs. 3-7). Accurate species identification of living specimens of both sexes is difficult, as the two species are morphologically highly similar, 8 with considerable intraspecific variation in the diagnostic characters. 9 The problem is aggravated by the need to keep to a minimum the anesthesia by CO2, to avoid reduced longevity and fecundity. 10,11 For introducing wild-caught individuals to the laboratory with the aim to retain genetic variation, a rapid and non-destructive method for species identification with the potential for high throughput would thus be desirable as an alternative to morphology-based methods. Insect cuticular layers contain complex mixtures of hydrocarbons (CHCs), many of which are synthesized by the insect itself, i.e., supposedly genetically determined. 12
In addition to their central role in the prevention of desiccation, 13 CHCs are important for chemical communication, for example in mate choice 14,15 and social behavior. 16 The idea that the bouquet of CHCs will thus be species specific led researchers to enquire into their usefulness in species identification 17,18 and various examples of success exist. 19 We decided to test the usefulness of near-infrared spectroscopy (NIRS) to discriminate between D. subobscura and D. obscura: previous studies suggested differences in desiccation resistance between the two species, 8 and, possibly due to a lack of interspecific mate recognition, no hybridization between them has been reported as yet, 20 both of which may involve CHC species specificity. NIRS characterizes chemical patterns qualitatively and quantitatively based primarily on C-H, N-H, and O-H stretching vibrations. 21 It is thus a useful tool for the characterization of biological material, and is, also owing to its non-destructive and non-invasive nature, becoming common practice in ecology 22 and entomology. 23,24 Of relevance here, it was successfully used in the identification of (non-drosophilid) dipterans. 25-27 The objectives of this study were to determine if NIRS (1) can be used to discriminate living D. obscura and D. subobscura specimens by using multivariate chemometrics, and (2) whether calibration models elaborated for wild-caught specimens and for specimens from different laboratory-reared generations can be cross-applied. Cross-applicability would reduce significantly the effort needed for establishing the identification of specimens with such differing backgrounds. However, it needs to be kept in mind that genetic and phenotypic changes can arise from evolution in a novel environment. 28 The CHC bouquet, in particular, can evolve due to changes in the ambient thermal regime and in diet composition 29 but can also change due to acquisition of hydrocarbons from food. 30

Results

Statistical parameters of the partial least squares (PLS) calibration models (number of PLS factors used, coefficient of determination (r^2) and standard error of cross validation (SECV)) and the classification results for the validation sets are listed in Table 1. The calibration models had r^2 values between 0.43 and 0.63, and SECV values between 0.33 and 0.40. The correct classification for the validation sets ranged between 85% and 92%; the best prediction results were achieved for the eighth lab-reared generation (F8), with 90% for F8 males (F8m) and 92% for F8 females (F8f). We then explored how the exclusion of prediction values around 1.5 influenced classification rate and loss of specimens, by symmetrically excluding values below and above 1.5, decreasing and increasing in steps of 0.02 to the extremes of 1.0 and 2.0, respectively. Exclusion of values between 1.4 and 1.6 resulted, for example, for the F8 females in an increase of the correct classification from 92% to 96% and in the exclusion of 10% of specimens (Fig. 1). For the other models and validation sets, the corresponding values were similar, at increases of correct classification to 90-91% and 9-11% of individuals excluded (Table 1); for our data set sizes, we found this to represent an acceptable compromise across models between accuracy and number of specimens excluded. When values 1.30-1.70 and 1.20-1.80 were excluded, the success rate for F8f increased to 97% and 99%, but the portion of specimens identified dropped to 78% and 65%, respectively (Fig. 1). Wavelengths important for the identification of D. subobscura and D. obscura were identified from the PLS regression coefficients, with wavelengths showing very high or very low coefficients being more important. There were peaks occurring in all of the five calibration models and peaks that were important only in single models. Figure 2 shows the regression-coefficient plot for F8f. When calibration models created for one group were used to predict validation sets from the other groups, the classification success ranged from 56% to 83% (Table 2; prediction values between 1.4 and 1.6 excluded).
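The cut-off exploration just described is straightforward to reproduce. The hedged Python sketch below sweeps a symmetric exclusion band around the 1.5 class boundary and reports accuracy versus the fraction of specimens retained; the prediction values are simulated placeholders, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated PLS prediction values: D. subobscura coded 1, D. obscura coded 2.
true = np.repeat([1, 2], 200)
pred = rng.normal(true, 0.35)           # noisy predictions around class codes

for half_width in np.arange(0.0, 0.31, 0.1):
    keep = np.abs(pred - 1.5) > half_width       # exclude the ambiguous band
    call = np.where(pred[keep] <= 1.5, 1, 2)     # classify retained specimens
    acc = np.mean(call == true[keep])
    print(f"band 1.5 +/- {half_width:.1f}: "
          f"accuracy {acc:.1%}, retained {keep.mean():.1%}")
```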
Discussion

Here we show that NIRS can be used to distinguish between Drosophila subobscura and D. obscura with an accuracy of 85% to 92% using PLS analysis, when using the full range of prediction values. This indicates that the composition of CHCs may differ between the two species. We cannot directly relate NIR-spectral differences to CHCs, and the visible spectral range was also relevant to successful PLS models (see further down), but we assume that CHC composition contributed significantly to species differences (compare refs. 14-19). The prediction results for the wild-caught flies were comparable to those obtained for laboratory-reared specimens, in line with the notion that hydrocarbon profiles are more genetically than environmentally determined: 31 although the two species were reared under the same conditions, differences in the cuticular profiles persisted and were detectable by NIRS. These findings contrast with the NIRS study by Mayagaya et al., 25 who predicted two Anopheles species reared in the laboratory with an accuracy of almost 100%, and field-collected specimens with 80% accuracy. Including both wild-caught flies and laboratory-reared flies (from all generations) in the same model did not improve our prediction results, the best models resulting in 82% and 79% prediction success for females and males, respectively (S. Fischnaller, unpubl.). However, from the practical point of view of setting up breeding lines based on identification via NIRS, our error rates are not fatal, given that Drosophila obscura and D. subobscura do not hybridize. 20 Hence, no interspecific gene flow is expected for unintentionally heterospecific cultures, and the identification procedure can be repeated in consecutive generations. The lower rate of correct classification in our study as compared with the work by Rodriguez-Fernandez et al., 27 who used nine Diptera species, could be caused by a closer phylogenetic relatedness of our species as well as by our including multiple populations in the sample; genetic diversity was found to be very high across other wild populations of D. subobscura. 28 Furthermore, we included individuals of all ages, and thus likely both unmated and mated individuals, in our calibration and validation sets. NIRS is sensitive to the age of individuals, and is thus used for age-grading of various insects, 25,32,33 and Everaerts et al. 34 showed that in Drosophilidae, in both females and males, CHC changes occur during mating. The variation introduced by either or both of these effects may possibly have impeded greater success of our calibration procedures. One way to improve classification is the exclusion of specimens with prediction values around 1.5 (Table 1 and Fig. 1). This procedure was suggested by Sikulu et al. 26 in general, but to our knowledge the trade-off between increase of classification success and loss of specimens has not yet been explored in a quantitative manner. We suggest that such exploration be adopted as a standard procedure in NIRS species-identification studies. Depending on the demands for the specific project, researchers could thus prioritise either classification success or number of specimens identified in a controlled manner.
Another way of improving accuracy with our species could be to scan just wings. Using the pulled-out right wings of thawed males in NIRS analysis enabled us to distinguish D. subobscura from D. obscura with 100% accuracy (n = 50 males per species; data not shown). This is in line with the findings of Shevtsova et al., 35 who found high interspecific variation in the wing interference patterns of Drosophilidae. Scanning just the wings of live specimens is very difficult to put into practice, however, due to the need for standardised positioning of wings on the one hand and minimum CO2 exposure of specimens on the other (S. Fischnaller, unpubl.). Exploring this possibility in depth remains subject to future work. Examination and comparison of the regression coefficient plots indicated that there are peaks important to species discrimination that are common to all five calibration models. The region around 510, 540 and 610 nm indicates that there are differences between the two species in the visible region, possibly caused by variation in cuticle thickness, bristles and/or pigmentation. 35 The region of 1,050-1,070 nm indicates vibration of water molecules at the third overtone, as well as occurrence of molecules containing N-H functional groups (ref. 36, also used for the interpretation of the subsequently listed wavelengths). Peaks at 1,370-1,390 nm (CH2 second overtone, and water), 1,720-1,730 nm (CH3 first overtone), 1,810-1,840 nm and 1,870 nm (C-H first overtone, water), and 2,140-2,180 nm (N-H and O-H combination bands) also contributed to all calibration models. Our study suggests that wild-caught specimens of our species should not be used to identify laboratory-reared specimens, and vice versa, due to excessive failure rates (Table 2). This contrasts with the findings of Mayagaya et al. 25 of 79% correctly classified wild-caught Anopheles when using models based on laboratory-reared individuals. Our low success rate is supported by absorption peaks in the regression coefficients exclusive to just one of the calibration models (e.g., 1,025 nm, 1,460 nm in Wm; 1,500 nm, 2,050 nm in F1m; 1,770 nm in F1f; 2,000 nm in F8f). In other words, chemical differences led to the observed generation specificity of the models. Toolson and Kuper-Simbron 29 reported for Drosophila pseudoobscura that maintenance in the laboratory leads to physiological and biochemical changes. They reported a shift in the cuticular composition even for the first generation of large populations reared in the laboratory, and explained it by changes of selective pressure and fitness advantages under novel environmental factors. Especially in small populations, genetic drift can additionally increase the genetic differentiation across populations (D. subobscura: see refs. 28,37). Also, hydrocarbon profiles can change in a non-inherited manner due to acquisition of food-derived hydrocarbons (ant example: ref. 30). Thus, changes in the metabolic profiles, either due to genetic or environmental changes, may have altered the recorded NIRS data across generations, impeding the use of calibration models generated for one generation in the others. Future research should aim to pinpoint potential non-inherited contributions as well as assess whether this problem ceases in later generations, which would indicate that it is due to rapid laboratory adaptation, 5,38 or whether larger population sizes diminish it, which would indicate that it is due to genetic drift (but note that our population sizes were in line with general practice, e.g., Fry 39). In conclusion, there are three main findings to our study: First, near-infrared spectroscopy proved a useful tool for the identification of living Drosophila flies. Second, we could not cross-apply models and validation sets among field-caught and lab-reared individuals and across generations, indicating changes due to laboratory adaptation, genetic drift and/or diet changes. Third, classification rates could be considerably improved by excluding prediction values around 1.5, suggesting that researchers should consider excluding a particular range of prediction values depending on their research question. Our study thus underscores the enormous potential of the NIRS technique for species identification (e.g., refs. 24, 25, 26, 40 and 41), and indicates that it could become an important tool also for the delimitation of species in integrative taxonomy, 42 as well as in other biological fields. 43
Second, we could not cross-apply models and validation sets among field-caught and lab-reared individuals and across generations, indicating changes due to laboratory adaptation, genetic drift and/or diet changes. Third, classification rates could be considerably improved by excluding prediction values around 1.5, suggesting that researchers should consider excluding a particular range of prediction values depending on their research question. Our study thus underscores the enormous potential of the NIRS technique to species identification (e.g., refs. 24, 25, 26, 40 and 41), and indicates that it could become an important tool also for the delimitation of species in integrative taxonomy, 42 as well as in other biological fields. 43 vice versa, due to excessive failure rates ( Table 2). This contrasts the findings of Mayagaya et al. 25 of 79% correctly classified wildcaught Anopheles when using models based on laboratory-reared individuals. Our low success rate is supported by absorption peaks in the regression coefficients exclusive to just one of the calibration models (e.g., 1,025 nm, 1,460 nm in Wm; 1,500 nm, 2,050 nm in F1 min; 1,770 nm in F1f; 2,000 nm in F8f). In other words, chemical differences led to the observed generation specificity of the models. Toolson and Kuper-Simbron 29 reported for Drosophila pseudoobscura that maintenance in the laboratory leads to physiological and biochemical changes. They reported a shift in the cuticular composition even for the first generation of large populations reared in the laboratory, and explained it by changes of selective pressure and fitness advantages under novel environmental factors. Especially in small populations genetic because Drosophila sexes differ in their CHC-profiles. 34 We performed models for the following five groups: (1) the wild, fieldcollected males, referred to as "Wm" (due to the low number of field-caught D. obscura females, no model could be created for this group), (2) the first lab-reared generation, referred to as "F1m" and (3) "F1f," and (4) the eighth lab-reared generation, referred to as "F8m" and (5) "F8f." The training sets for each calibration model contained 70 spectra (35 of each species). A two-way comparison in PLS analysis was made by assigning integer values 1 and 2 to D. subobscura and D. obscura, respectively. Independent validation sets, treated as "unknown" specimens, were then classified on the basis of the calibration model in each group. Spectra predicted to have a class value of ≤ 1.5 were considered to belong to D. subobscura, those with a predicted value of ≥ 1.5 to D. obscura. The numbers of PLS regression factors to be used in the prediction models were determined by examining the values of the predicted residual sum of squares 46 and the classification results of the independent validation sets. Accuracy of the calibration models was examined by checking the r 2 indicating the closeness of fit between NIRS and reference data, the SECV of the leave-one-out procedure, and by calculating the prediction results using the validation sets -the most rigorous indicator of model quality. 47 Spectral residuals, which were possibly due to technical problems such as movement of insufficiently anaesthetised specimens, were discarded from the sample. Such outliers were detected by visual examination of the spectra using spekwin32 (F. Menges "Spekwin32 -free software for optical spectroscopy"-Vers.1.71.5, 2010, http://www.effemm2. 
de/spekwin/) and by examination of the leverage and studentised residuals plots generated in GRAMS (compare ref. 48). Disclosure of Potential Conflicts of Interest No potential conflicts of interest were disclosed. Acknowledgments We thank Heike Perlinger and Clemens Folterbauer for assistance in the laboratory, Alexander Rief for sampling assistance, Regina Medgyesy for help in compiling literature, Gerhard Bächli for help with morphological species identification and two anonymous referees for constructive criticism. This research was funded by the University of Innsbruck; F.M.S. was supported by FWF P 23949-B17. Materials and Methods Insects. Specimens were collected from six different locations in North Tyrol (Austria) during August and September 2010. To represent a wide range of habitats, the collection sites were chosen from various altitudes between 570 and 2000 min above sea level ( Table 3). The minimum and maximum distances between populations were 2 and 60 km, respectively. Collecting was done by net sweeping over baits of fermented banana 44 in the evening hours from 5 to 7 p.m. The field-caught flies were transported alive to the laboratory and anaesthetised with CO 2 for morphology-based species identification. CO 2 exposure length for species identification, as well as for spectra collection (see below), was kept to a minimum and never exceeded four minutes per specimen. Flies that were identified as D. subobscura or D. obscura according to Bächli and Burla 9 were used to set up breeding lines for each location sampled. All lines were kept at a minimum census size of 60 individuals on an artificial diet (corn-meal, sugar, agar, yeast, Tegosept) and at a photoperiod of 12/12 h (light/ dark) at 19°C. Data collection. Spectra were collected from anaesthetised flies using a Labspec® 5000 Portable Vis/NIR Spectrometer (350-2,500 nm; ASD Inc.) by placing flies individually on their backs on a 9 cm diameter Spectralon plate. The 3 mm diameter bifurcated fiber-optic probe was positioned about 2 mm above the specimen, focusing on the abdomen. The spectrometer automatically calculated and saved the average spectrum of 30 collected spectra of each individual. Background reference (the baseline) was measured using a separate 3 cm diameter Spectralon plate to avoid contamination. All field-caught individuals as well as 251 randomly chosen individuals of the F1 and 421 of the F8 of the breeding lines were sexed and scanned. We thus included a wide range of individual ages in our sample. Data analysis. All recorded spectra were converted into Galactic spectrum file format using ASD ViewSpecPro. Spectra used for the calibration sets were pre-processed by mean-centring and analyzed using PLS regression and leave-one-out cross validation 45,46 implemented in GRAMS software PLS/IQ. Spectra were generally very noisy below 500 nm and above 2200 nm and these regions were excluded from further analysis. Calibration models were elaborated separately for males (m) and females (f), because females can be easily distinguished from males and a.s.l. = above sea level
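As a sketch of how the analysis pipeline above could be prototyped outside GRAMS, the snippet below mean-centres toy spectra, fits a two-class PLS model with leave-one-out cross validation, and applies the 1.5 classification threshold with an optional symmetric exclusion window. scikit-learn's PLSRegression is used here as a stand-in for PLS/IQ; the synthetic spectra and all variable names are illustrative assumptions, not part of the original analysis.

```python
# Illustrative prototype of the PLS classification workflow described above.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
wavelengths = np.arange(350, 2501)                   # Labspec 5000 range, 1 nm steps
keep = (wavelengths >= 500) & (wavelengths <= 2200)  # drop the noisy edges

# Toy training set: 35 specimens per species, as in the paper.
X = rng.normal(size=(70, keep.sum()))
y = np.repeat([1.0, 2.0], 35)                        # 1 = D. subobscura, 2 = D. obscura
X -= X.mean(axis=0)                                  # mean-centring

def secv(n_factors):
    # Standard error of cross validation via leave-one-out.
    errors = []
    for train, test in LeaveOneOut().split(X):
        pls = PLSRegression(n_components=n_factors).fit(X[train], y[train])
        errors.append((pls.predict(X[test]).ravel()[0] - y[test][0]) ** 2)
    return np.sqrt(np.mean(errors))

def classify(pls, X_val, half_width=0.0):
    # Classify a validation set with the 1.5 threshold; half_width=0.1
    # symmetrically excludes predictions in the ambiguous window (1.4, 1.6).
    pred = pls.predict(X_val).ravel()
    kept = np.abs(pred - 1.5) >= half_width
    labels = np.where(pred <= 1.5, 1.0, 2.0)
    return labels, kept

pls = PLSRegression(n_components=8).fit(X, y)
labels, kept = classify(pls, rng.normal(size=(20, keep.sum())), half_width=0.1)
print(secv(8), labels[kept])
```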
Symmetry resolved entanglement entropy of excited states in a CFT

We report a thorough analysis of the entanglement entropies related to different symmetry sectors in the low-lying primary excited states of a conformal field theory (CFT) with an internal U(1) symmetry. Our findings extend recent results for the ground state. We derive a general expression for the charged moments, i.e. the generalised cumulant generating function, which can be written in terms of correlation functions of the operator that defines the state through the CFT operator-state correspondence. We provide explicit analytic computations for the compact boson CFT (aka Luttinger liquid) for the vertex and derivative excitations. The Fourier transform of the charged moments gives the desired symmetry resolved entropies. At the leading order, they satisfy entanglement equipartition, as in the ground state, but we find, within CFT, subleading terms that break it. Our analytical findings are checked against free fermion calculations on a lattice, finding excellent agreement. As a byproduct, we obtain exact results for the full counting statistics of the U(1) charge in the considered excited states.

Introduction

Symmetries play a central role in all fields of modern physics, from the phenomenology of the standard model (and beyond) to the theory of phase transitions, from string theory to solid state, passing through nuclei and molecules: it is impossible to overestimate their importance as a guiding principle in our current understanding of the physical world. One fundamental aspect of symmetries is the Noether theorem: any continuous symmetry corresponds to a conservation law of local degrees of freedom, in both classical and quantum physics. For example, the rotational symmetry of an interacting spin chain implies that some components of the total spin are preserved during the time evolution. Hence, the presence of symmetries is a huge constraint for the dynamics of many-body systems, and it is therefore important to understand the properties of physical states under the related symmetry transformations. In particular, it is rather natural to wonder about the implications of symmetries for the entanglement content of physical states of extended quantum systems with internal symmetries, a subject that, very surprisingly, got attention only in very recent times.

The most useful and successful way of characterising the bipartite entanglement in a many-body quantum system is through the Rényi entropies of the reduced density matrix ρ_A of a given subsystem A. These are defined as

S_n = (1/(1−n)) log tr(ρ_A^n),   (1)

where the index n is an arbitrary (positive) real parameter, but in many cases, as we shall see, it is useful to think of it as an integer. Among the Rényi entropies, the von Neumann one,

S = −tr(ρ_A log ρ_A) = lim_{n→1} S_n,   (2)

has a special place. Nowadays, the entanglement entropies are standard tools in the study and analysis of many-body quantum systems: they are largely employed to detect quantum phase transitions [1-4]; they proved to give a deeper understanding of topological features of condensed matter systems, such as quantum Hall states [5]; and much more (see [6-9] for reviews of applications). In the context of quantum field theory (QFT), they are typically calculated and analysed in a replica approach [3,4]; in the special case of (1+1)-dimensional conformal field theories (CFT), explicit analytic results can be obtained in many different situations and states.
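As a minimal illustration of the definitions in Eqs. (1)-(2), the following numpy sketch computes the Rényi entropy from the spectrum of a reduced density matrix, with the von Neumann entropy as the n → 1 limit; the example matrix is an arbitrary toy choice, not tied to any model in the text.

```python
# Renyi entropy S_n from the spectrum of a (toy) reduced density matrix.
import numpy as np

def renyi_entropy(rho_A, n):
    lam = np.linalg.eigvalsh(rho_A)
    lam = lam[lam > 1e-12]                            # drop numerical zeros
    if np.isclose(n, 1.0):
        return float(-np.sum(lam * np.log(lam)))      # von Neumann limit, Eq. (2)
    return float(np.log(np.sum(lam ** n)) / (1.0 - n))  # Eq. (1)

# Example: a random 4x4 density matrix.
M = np.random.default_rng(1).normal(size=(4, 4))
rho = M @ M.T
rho /= np.trace(rho)
print(renyi_entropy(rho, 2), renyi_entropy(rho, 1))
```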
For quantum systems with internal symmetries, one can identify the contributions to the entanglement coming from each symmetry sector through the symmetry resolved entanglement (see Section 3 for explicit definitions). Nonetheless, computing them analytically in a many-body quantum system remained a hard task until recently, when a new theoretical framework was introduced in Refs. [10,11]. The main new insight of these works is to relate the symmetry resolved quantities to the path integral over a Riemann surface where twisted boundary conditions are imposed along the branch cuts (see Section 3 for further details), in turn easily computed using a simple modification of the known replica tricks. After these initial works, there has been a large effort in characterising the symmetry resolved entanglement in the ground state of many-body systems. In fact, for one-dimensional (1D) systems, several results are known for CFTs [10,12], free gapped and gapless systems of bosons and fermions [13-15], and integrable spin chains [13,16,17]. Very recently, a few results appeared also in higher dimensions [15,18,19]. The out-of-equilibrium behaviour of symmetry resolved entanglement after a local quantum quench has been investigated as well [20]. Finally, the relevance of the symmetry resolution of the entanglement in the non-equilibrium dynamics of disordered systems has been underlined in [21], even from the experimental point of view.

The main goal of this work is to generalise the ground-state CFT approach for the symmetry resolved entanglement [10,12] to excited states. The total von Neumann and Rényi entropies in excited states were first considered in [22,23], where the replica trick for the ground state [3] was generalised to treat these more complicated states. By combining the results of [10] with the ones of [22,23], we provide full analytical results for the symmetry resolved entanglement entropies.

The paper is organised as follows. In Section 2, we briefly recall the results of [22,23] for the excited states' entanglement. In Section 3 we provide all the definitions concerning symmetry resolved entanglement measures and summarise the known results in the ground state. Our main findings for the symmetry resolved entanglement in excited states are reported in Sections 4 and 5: in the former we report the general treatment in an arbitrary CFT, and in the latter we specialise to the massless compact boson; numerical checks for free fermions on the lattice are given in Section 5.3, together with some details about their implementation. We conclude in Section 6 with some discussions, speculating about future directions.

Here, we briefly summarise the replica approach to the entanglement entropies of excited states in a (1+1)-dimensional CFT, as developed in Refs. [22,23]. Let L be the total length of a periodic 1D system, A a subsystem consisting of a segment of length ℓ (say A = [u, v] with v − u = ℓ) and B its complement. In the path integral approach, the ratio between tr(ρ_A^n) (with n integer) in a given excited state and the one in the ground state is written as a ratio of correlation functions on specific Riemann surfaces; such ratios turn out to be universal functions of x ≡ ℓ/L. From there, it is possible to extract the excess of Rényi entropy that the excited state has with respect to the ground state in the bipartition A ∪ B.
Note that one has to focus on a finite value of L (instead of an infinite system) in order to observe some excess of entropy: for low-lying excited states, this excess vanishes in the thermodynamic limit. The starting point is that an excited state |Υ⟩ may be written as the insertion of a local operator acting on the ground state |0⟩ of the CFT,

|Υ⟩ = Υ(z = −i∞) |0⟩.   (3)

This mapping is known as the state-operator correspondence (see, e.g., [24] for details) and applies to any state of the Hilbert space of the CFT. The corresponding path-integral representation of the density matrix ρ = |Υ⟩⟨Υ| presents two insertions of Υ at z = x + iτ = ±i∞. The world sheet is an infinite cylinder of circumference L. In what follows we omit the index A of the density matrix ρ_A, denoting by ρ_Υ ≡ tr_B(|Υ⟩⟨Υ|) the reduced density matrix associated with the state |Υ⟩; we denote the reduced density matrix of the ground state by ρ_I, thinking of the ground state as the one corresponding to the identity operator I. Next, following Refs. [22,23], we define the ratio

F_Υ^{(n)}(x) ≡ tr(ρ_Υ^n) / tr(ρ_I^n),   (4)

so as to have a universal quantity which is neither UV-divergent nor depends on microscopic scales (as is instead the case for tr(ρ_Υ^n)). For an arbitrary operator Υ, tr(ρ_Υ^n) may be obtained by sewing cyclically (along the interval [u, v]) n of the above cylinders defining the reduced density matrix ρ_Υ. In this way, one arrives at a 2n-point function of Υ on an n-sheeted Riemann surface R_n. Keeping track of the correct normalisation of ρ_Υ, one straightforwardly obtains [22,23]

F_Υ^{(n)}(x) = ⟨∏_{k=0}^{n−1} Υ(z_k^−) Υ^†(z_k^+)⟩_{R_n} / ⟨Υ(z^−) Υ^†(z^+)⟩_{R_1}^n,   (5)

where z_k^∓ correspond to the points at past/future infinity, respectively, of the k-th copy of the system (k = 0, ..., n − 1) in R_n (R_1 is just the cylinder). The normalisation factor of the field Υ does not matter because it cancels out in the ratio (5); moreover, F_Υ^{(1)}(x) = 1, as it should be because of the normalisation of the involved density matrices. Through a conformal mapping w(z) (Eq. (6)) [23], the Riemann surface R_n is transformed into a single cylinder. At this point, exploiting the transformation of the field Υ under a conformal mapping, one relates the ratio (5) to the correlation functions of Υ on the plane. For this reason, afterward we focus on those low-lying states described by primary operators of the CFT. This assumption is not fundamental but simplifies the treatment, due to the simple transformation law of such operators, i.e.

Υ(z, z̄) → (dw/dz)^h (dw̄/dz̄)^{h̄} Υ(w, w̄),   (7)

with (h, h̄) the conformal weights of Υ. Hence, for primary operators, one can easily express F_Υ^{(n)}(x) in terms of correlation functions on the cylinder, evaluated at the points w_k^± corresponding to z_k^± through the map w(z). Translational invariance (w → w + r with r ∈ ℝ) and parity (w → −w) of the cylinder imply relations among these points and correlators which, among other things, guarantee the symmetry ℓ → L − ℓ of the Rényi entropies.

The Luttinger liquid CFT

In the following, we will explicitly work out the symmetry resolved entropies of the Luttinger liquid or, equivalently (via bosonisation), of a free massless compact boson. The compactification radius of the boson is related to the Luttinger parameter K. The Luttinger liquid's universality class describes a large number of critical one-dimensional models, including free and interacting spin chains, quantum gases, fermionic hopping models, etc. (see, e.g., [25]). The central charge is c = 1. Denoting by ϕ a real bosonic field, the euclidean action (in bosonic form) is

S = (1/8π) ∫ dz dz̄ ∂ϕ ∂̄ϕ.   (11)

The field can be decomposed in holomorphic and antiholomorphic components, ϕ(z, z̄) = φ(z) + φ̄(z̄).
As examples, the set of primary fields of the theory includes the holomorphic vertex operators

V_β(z) = e^{iβφ(z)},   (12)

and the derivative operator

i∂φ(z).   (13)

The scaling functions (4) of the moments of ρ_Υ for the excited states generated by the insertion of these primary operators as in Eq. (3) have been obtained in [23], following the procedure outlined in the previous subsection. The final result for the vertex operator is F_{V_β}^{(n)}(x) = 1, implying that all Rényi entropies of these excited states are the same as in the ground state. For the derivative operator, F_{i∂φ}^{(n)}(x) is instead nontrivial and can be written as a 2n × 2n determinant [23]; its analytical continuation has been obtained in Refs. [26,27] (Eq. (14)). Other primary states are the antiholomorphic versions of these operators (and combinations thereof), to which similar results apply. Also some results for non-primary operators and boundary theories are known, see e.g. [28-30].

Symmetry resolved entanglement

We now consider a quantum system with an internal U(1) symmetry and a bipartition in two spatial subsystems, A and B. Moreover, we assume that the quantum state with density matrix ρ lies in a representation of the symmetry: if Q is the generator of the U(1) symmetry, we require that the state is an eigenvector of Q, with an eigenvalue which identifies the underlying representation. For a U(1) charge (more generally, for any additive charge), the commutator [ρ, Q] = 0 implies [ρ_A, Q_A] = 0, by simply tracing out the subsystem B. Hence, ρ_A has a block-diagonal structure, with each block corresponding to an eigenvalue q of Q_A. One can thus relate a conditioned density matrix ρ_A(q) to any eigenvalue q; ρ_A(q) is obtained by projecting ρ_A onto the eigenspace of Q_A with fixed q, as induced by the projector Π_q, i.e.

ρ_A(q) = Π_q ρ_A Π_q / tr(Π_q ρ_A).   (15)

The denominator is introduced to force the normalisation tr ρ_A(q) = 1. Consequently, we can define a symmetry-resolved entanglement entropy S(q) and Rényi entropies S_n(q) for each sector where Q_A = q; this is the amount of entanglement shared by A and B in each symmetry sector. The symmetry resolved Rényi entropies are

S_n(q) = (1/(1−n)) log tr[ρ_A(q)^n],   (16)

with von Neumann limit

S(q) = −tr[ρ_A(q) log ρ_A(q)] = lim_{n→1} S_n(q).   (17)

Notice that, in this language, the probability distribution of the charge is

p(q) = tr(Π_q ρ_A).   (18)

Taking the average of S(q) with respect to the charge q (i.e. multiplying both sides of Eq. (17) by p(q) and summing over q), one obtains

S = Σ_q p(q) S(q) − Σ_q p(q) log p(q),   (19)

where we introduced ⟨S(q)⟩_p ≡ Σ_q p(q) S(q), and S is the total entropy in Eq. (2). Equation (19) shows that the total entropy is larger than the averaged symmetry resolved entropy (equivalently, their weighted sum), and the difference is the Shannon entropy related to the probability distribution of Q_A, i.e. −Σ_q p(q) log p(q) = −⟨log p(q)⟩_p. The two terms in (19) are usually referred to as configurational and fluctuation entanglement, respectively [21]. Note that the configurational entropy is also related to the operationally accessible entanglement entropy [31-33]. In general, the calculation of the symmetry resolved entropies requires the knowledge of the spectrum of ρ_A and its resolution in Q_A. However, this is a rather difficult task, especially for an analytic derivation. The main idea put forward in Ref. [10] (see also [12]) is that the same result can be achieved by focusing on the computation of the charged moments

Z_n(α) ≡ tr(ρ_A^n e^{iαQ_A}).   (20)

In fact, the Fourier transform of the charged moment Z_n(α) with respect to α gives tr(Π_q ρ_A^n), and is thus directly related to S_n(q) through (16).
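The equivalence between the projector route of Eqs. (15)-(16) and the Fourier transform of the charged moments (20) can be made concrete in a toy numerical example. The sketch below assumes two fermionic modes in A, so that Q_A has integer eigenvalues, and an arbitrary ρ_A commuting with Q_A; both the model and the grid of α values are illustrative assumptions.

```python
# Two routes to tr(Pi_q rho_A^n): direct projection vs Fourier transform of Z_n(alpha).
import numpy as np

# Q_A diagonal in the occupation basis |00>, |01>, |10>, |11>.
Q = np.diag([0.0, 1.0, 1.0, 2.0])
rho = np.diag([0.1, 0.3, 0.4, 0.2])       # commutes with Q by construction
n = 2

def sector_moment(q):
    # Route 1: projector onto the sector Q_A = q, cf. Eqs. (15)-(16).
    P = np.diag((np.diag(Q) == q).astype(float))
    return np.trace(P @ np.linalg.matrix_power(rho, n)).real

# Route 2: Fourier transform of Z_n(alpha) = tr(rho^n e^{i alpha Q}), cf. Eq. (20).
alphas = 2 * np.pi * np.arange(64) / 64 - np.pi
Zn = np.array([np.trace(np.linalg.matrix_power(rho, n) @
                        np.diag(np.exp(1j * a * np.diag(Q)))) for a in alphas])

for q in (0, 1, 2):
    ft = np.mean(np.exp(-1j * q * alphas) * Zn).real   # discrete version of the integral
    print(q, sector_moment(q), ft)                     # the two routes agree
```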
Similar charged moments have already been considered in the context of free field theories [56-58], in holographic settings [59,60], as well as in the study of entanglement in mixed states [61,62].

Eq. (19) is valid only for the von Neumann entropy, and it is not possible to write down an analogous formula for the Rényi entropies in terms of the probability p(q). This no-go result could be at first disappointing, but it can be circumvented by defining

p_n(q) ≡ tr(Π_q ρ_A^n) / tr(ρ_A^n).   (21)

While for n = 1, p_1(q) is just p(q), the physical probability distribution of the charge Q_A, for n ≠ 1 there is no direct meaning of the probabilities p_n(q), although they are normalised as Σ_q p_n(q) = 1. However, these probabilities p_n(q) are useful since they allow us to write S_n(q) as

S_n(q) = S_n + (1/(1−n)) log [p_n(q) / p(q)^n],   (22)

in which the entire q-dependence is in the second term, S_n being the total Rényi entropy of Eq. (1). The limit for n → 1 of the above is

S(q) = S + log p(q) − ∂_n p_n(q)|_{n=1} / p(q).   (23)

The average of (23) over p(q) gives back Eq. (19), after using Σ_q ∂_n p_n(q)|_{n=1} = 0 (which follows from the derivative with respect to n of the normalisation of p_n(q), i.e. Σ_q p_n(q) = 1). It is also useful to average Eq. (22) over p(q), to obtain the two equivalent forms

S_n = ⟨S_n(q)⟩_p − (1/(1−n)) ⟨log [p_n(q)/p(q)^n]⟩_p = ⟨S_n(q)⟩_p − ⟨log p(q)⟩_p − (1/(1−n)) ⟨log [p_n(q)/p(q)]⟩_p.   (24)

In the last expression, the first term is the averaged symmetry resolved Rényi entropy, i.e. a configurational Rényi entropy analogous to the von Neumann one in Eq. (19); the second term is just the fluctuation von Neumann entropy, identical to the one in Eq. (19); the third term is instead new, and it is the only one related to the probability p_n(q), which makes it impossible to write S_n only in terms of p(q). A similar expression can also be written as an average over p_n(q) instead of p(q) (in the first line in Eq. (24) only the probability for the average changes). Eq. (24) is different from the Rényi fluctuation entropy considered in Ref. [34]. The Fourier transform of the generalised probability p_n(q) in Eq. (21) is

p_n(α) ≡ Σ_q p_n(q) e^{iqα} = tr(ρ_A^n e^{iαQ_A}) / tr(ρ_A^n),   (25)

and it is a (normalised) moment generating function for p_n(q).

Replica method and CFT

The moments tr(ρ_A^n) have a geometrical interpretation for any (1+1) QFT in terms of a partition function over a Riemann surface R_n; such a geometrical approach leads to universal results for the ground states of (1+1) CFTs. Following Ref. [10], we give a geometrical meaning also to tr(ρ_A^n e^{iαQ_A}). To this aim, let us introduce a local operator V_α(x, τ = 0), which implements the U(1) symmetry, acting as a phase shift of e^{iα} in the spatial subregion [x, +∞) (see [10,14] for details). If A is the segment [u, v], one can thus identify the insertion of e^{iαQ_A} with the insertion of the pair V_α(u, 0) and V_{−α}(v, 0) at the endpoints of A (Eq. (26)). When ρ is the ground state of a QFT, tr(ρ_A^n e^{iαQ_A}) can be seen as a partition function over a Riemann surface with twisted boundary conditions (introducing a phase factor e^{iα} between the first and the last sheet along A) or, equivalently, as a correlation function ⟨V_α(u, 0) V_{−α}(v, 0)⟩_{R_n} over the Riemann surface R_n with periodic boundary conditions. In the ground state of a CFT, if one specialises to an infinitely extended system, and when V_α is a primary operator, the scaling of tr(ρ_A^n e^{iαQ_A}) is determined by the value of c, the central charge of the underlying theory, and (h_α, h̄_α), the conformal weights of V_α, through [10]

tr(ρ_A^n e^{iαQ_A}) ∝ ℓ^{−(c/6)(n − 1/n) − (2/n)(h_α + h̄_α)},   (27)

having denoted by ℓ ≡ v − u the length of the region A. Similar conclusions apply to finite systems of length L through the replacement [3]

ℓ → (L/π) sin(πℓ/L),   (28)

following from the conformal map from the plane to the cylinder. The previous results apply to a generic U(1) charge. Hereafter, we specialise to the U(1) symmetry of the Luttinger liquid or compact boson defined by the action (11).
In this case, the conserved current is proportional to ∂_x ϕ, and hence the charge operator in the interval A is

Q_A = (1/2π) ∫_A dx ∂_x ϕ.   (29)

Hence, by simple inspection of Eq. (26), the local operator V_α is implemented by the vertex operator

V_α(z, z̄) = e^{i(α/2π) ϕ(z, z̄)}   (30)

(it contains both the holomorphic and the antiholomorphic sectors). In this case, the Fourier transform of p_n(q) (cf. p_n(α) in Eq. (25)) is Gaussian,

p_n(α) ∝ ℓ^{−(2K/n)(α/2π)²},   (31)

and, as a consequence, also p_n(q) itself is Gaussian. Therefore, in this CFT, p_n(q) is fully characterised by its variance. Let us now conclude this section by discussing the consequences of our findings for a microscopic model that, at large scale, displays conformal invariance and is in the Luttinger liquid universality class (such as, for example, the XXZ spin chain, the one-dimensional Bose gas and many more). For these models, conformal invariance fixes only the universal part of the scaling form of the distribution p_n(q), but it does not predict other non-universal contributions, which may play some role also in the limit ℓ → ∞. For concreteness, let us focus on the variance Δq_n², calculated as an average over the distribution p_n(q), which for large ℓ scales as

Δq_n² = (K/(nπ²)) log ℓ + b_n,   (32)

as follows straightforwardly from Eq. (31). In Eq. (32), the multiplicative factor of the logarithm is fixed by CFT and is universal; instead, the additive constant b_n depends on the details of the model (it is fixed by the non-universal amplitude not specified in Eq. (31)). For example, in the XX chain (whose underlying CFT is a Luttinger liquid with K = 1), the exact value of b_n has been derived exploiting the Fisher-Hartwig conjecture [14]. The value q̄ also is not fixed by CFT and requires a microscopic computation. Note also that, for finite size systems, the replacement (28) is equivalent to

log ℓ → log [(L/π) sin(πℓ/L)].   (33)

Hence, at order O(1), the probability p_n(q) is still Gaussian, and so, in terms of the variance, we can write it as

p_n(q) ≃ (2π Δq_n²)^{−1/2} e^{−Δq²/(2Δq_n²)},   (34)

where Δq² ≡ (q − q̄)². Notice that in a lattice microscopic model (such as a spin chain), the integration over α does not run over the entire real axis but only over the interval α ∈ [−π, π]. However, because of the Gaussian form of p_n(α) with the variance (32), this change of the domain of integration only provides subleading corrections to (34). The probability distribution (34) is all we need to determine the scaling of the symmetry resolved entanglement S_n(q), which, in the physical regime with Δq of order 1, turns out to be

S_n(q) = S_n − (1/2) log [(2K/π) log(ℓ/δ_n)] + (log n)/(2(1−n)) + O(Δq²/(log ℓ)²).   (35)

Let us critically discuss this form. The leading term of S_n(q), of order log ℓ, is equal to the total entropy S_n (hence there are no contributions at the leading order from the second term in Eq. (22)). The first subleading term behaves like −(1/2) log log ℓ + O(ℓ⁰) and also does not depend on q. The fact that the leading terms in S_n(q) do not depend on q has been dubbed equipartition of entanglement [12]. The other subleading terms can be written as a formal expansion in (log ℓ)^{−1}. The first one, in (log ℓ)^{−1}, is independent of q and hence can be conveniently absorbed in the non-universal scale δ_n of the log log term, as we did in Eq. (35), closely following Ref. [14]. The first term breaking the equipartition of entanglement appears at order Δq²/(log ℓ)², and its amplitude is non-universal. We mention that all non-universal constants in Eq. (35) have been exactly calculated in [14] for the tight-binding model, and they are important to correctly reproduce numerical results [14].
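A small numerical illustration of the equipartition structure of Eq. (35): assuming the Gaussian form (34) with the variance given by the CFT term of Eq. (32) plus an illustrative O(1) constant b (a stand-in for the non-universal b_n), the q-dependent part of S_n(q) computed via Eq. (22) shrinks as (log ℓ)^{-2}. The values of K, b and ℓ below are assumptions made only for the example.

```python
# Equipartition and its breaking from the Gaussian distribution (34).
import numpy as np

def p_n(q, n, ell, K=1.0, b=0.2):
    # Gaussian (34); variance = CFT term of (32) plus an illustrative O(1) constant b.
    var = K * np.log(ell) / (n * np.pi**2) + b
    return np.exp(-q**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def dSn(q, n, ell):
    # q-dependent part of S_n(q), cf. Eq. (22): log[p_n(q)/p(q)^n] / (1 - n)
    return (np.log(p_n(q, n, ell)) - n * np.log(p_n(q, 1, ell))) / (1 - n)

for ell in (1e2, 1e4, 1e8):
    # the q-dependence (equipartition breaking) shrinks like (log ell)^-2
    print(ell, dSn(1, 2, ell) - dSn(0, 2, ell), dSn(2, 2, ell) - dSn(0, 2, ell))
```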
Symmetry resolution of excited states

The main goal of this work is to obtain universal results for the symmetry resolved entanglement entropy in low-lying excited states of CFT, in particular for Luttinger liquids. To this aim, we must combine the techniques for the symmetry resolved entanglement of Sec. 3 with the CFT description of excited states of Sec. 2. We will combine these two concepts in this section, showing that, for all low-lying excited states obtained by the action of a primary field as in Eq. (3), we can derive full analytic predictions for the universal scaling functions of the charged moments, and from there draw conclusions for the symmetry resolved entanglement.

We start by defining the universal functions of interest for the symmetry resolved entanglement of excited states. For the total entanglement, the ratio of moments (4) is universal and can be computed in CFT without any input from the microscopic model [22]. In the same spirit, we can define the α-dependent ratio for the charged moments (we recall x = ℓ/L)

F_Υ^{(n)}(α, x) ≡ tr(ρ_Υ^n e^{iαQ_A}) / tr(ρ_I^n e^{iαQ_A}),   (36)

which is also universal and independent of any microscopic details. Notice that at α = 0 we have F_Υ^{(n)}(0, x) = F_Υ^{(n)}(x). The latter observation suggests to define another ratio,

f_n(α, x) ≡ F_Υ^{(n)}(α, x) / F_Υ^{(n)}(0, x),   (37)

which is also universal (it is the ratio of universal functions), but has also the nice property f_n(0, x) = 1 identically. Note that f_n(α, x) is nothing but the ratio of the generalised moment generating functions p_n(α) (cf. Eq. (25)) associated with the excited state |Υ⟩ and with the ground state |0⟩, respectively,

f_n(α, x) = p_n^Υ(α) / p_n^{GS}(α).   (38)

Obviously, in these universal functions, all the non-universal factors, e.g. coming from the variance of the p_n's (cf. Eq. (32)), cancel. The moments entering the definition of f_n(α, x) in Eq. (37) may all be expressed as correlation functions of Υ and V_α on the n-sheeted Riemann surface. Compared to the correlations defining F_Υ^{(n)}(x) in Eq. (5), we only need to insert V_α on an arbitrary sheet (we choose the 0-th one) at the branch points of the Riemann surface (i.e. u_0, v_0). Using the same conventions as in Eq. (5) for the insertions of Υ and Υ^† (located at {z_k^∓}, i.e. the past/future infinity, respectively, of the k-th copy), we have

f_n(α, x) = ⟨V_α(u_0) V_{−α}(v_0) ∏_k Υ(z_k^−) Υ^†(z_k^+)⟩_{R_n} / [⟨V_α(u_0) V_{−α}(v_0)⟩_{R_n} ⟨∏_k Υ(z_k^−) Υ^†(z_k^+)⟩_{R_n}].   (39)

The only correlation that has not yet been computed is the one in the numerator of (39), which involves both V_α and Υ, and evidently is the most complicated one. A pictorial path-integral representation of this correlation is given in Fig. 1. [Figure 1: Path-integral representation of the mixed correlation of V_α and Υ. This is the only new correlation arising in the calculation of the function f_n(α, x), cf. Eq. (39), related to the (Fourier transform of the) symmetry resolved Rényi entropy in the excited state corresponding to the operator Υ.] Notice that in Eq. (39) there is no dependence on the normalisation of Υ and V_α, as it should be. All the correlation functions in (39) are mapped by the conformal transformation (6) to the cylinder. Moreover, in this mapping, all powers of (dz/dw), coming from the transformation law of the primary operators, cancel out. Hence, f_n(α, x) may be rewritten as the analogous ratio of correlators on the cylinder, evaluated at the images of the insertion points under the map w(z) (Eq. (40)).

It is now useful to define the excess-cumulant generating function

g_n(α, x) ≡ log f_n(α, x).   (41)

In fact, denoting with κ_k^Υ and κ_k^{GS} the k-th cumulant in the state Υ and in the ground state, respectively, we straightforwardly have

(−i)^k ∂_α^k g_n(α, x)|_{α=0} = κ_k^Υ − κ_k^{GS}.   (42)

Hence, the first derivative of g_n(α, x) at α = 0 is the shift of the expectation value of Q_A in going from the ground state to the excited state; the second derivative is the excess of the variance of Q_A, and so on for all other derivatives.
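Eq. (42) is straightforward to implement symbolically. The sympy sketch below extracts the cumulant excesses as derivatives of g_n = log f_n at α = 0, using for illustration the simple vertex-state result f_n(α, x) = e^{iαβx} derived in the next subsection; any other f_n can be substituted.

```python
# Cumulant excesses from the excess-cumulant generating function, Eqs. (41)-(42).
import sympy as sp

alpha, beta, x = sp.symbols('alpha beta x', real=True)
f_n = sp.exp(sp.I * alpha * beta * x)      # vertex-state example; substitute any f_n
g_n = sp.log(f_n)                          # Eq. (41)

def cumulant_excess(k):
    # kappa_k^Upsilon - kappa_k^GS = (-i)^k d^k g_n / d alpha^k at alpha = 0, Eq. (42)
    return sp.simplify((-sp.I)**k * sp.diff(g_n, alpha, k).subs(alpha, 0))

print([cumulant_excess(k) for k in (1, 2, 3)])   # -> [beta*x, 0, 0]
```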
Hence, while all cumulants in general have non-universal contributions, the difference between a cumulant in the excited state and the same one in the ground state is always universal. Eq. (40) can be employed to calculate the charged moments of a primary excited state of an arbitrary CFT with a U(1) symmetry (indeed, it can be used for the resolution of an arbitrary symmetry, even non-abelian, see e.g. Ref. [10]). In the following section we specialise to the case of the Luttinger liquid CFT, introduced in Section 2.1.

The Luttinger liquid CFT

In the compact boson, there are two kinds of primary operators: the vertex and the derivative operators. In the following, we work out the function f_n(α, x) for these two cases. We recall that for a Luttinger liquid the operator V_α is a vertex operator (cf. Eq. (30)), and so the calculation of f_n(α, x) just requires either the computation of multipoint correlations of vertices, or of vertices and derivatives.

Vertex operator

The correlation functions of an arbitrary number of vertex operators V_{α_j}(z) are known by elementary methods [24]; on the cylinder, they factorise into a product over pairs of insertion points (Eq. (43)), subject to the neutrality condition Σ_j α_j = 0. This factorisation simplifies considerably the calculation of f_n(α, x) for the excitation induced by Υ = e^{iβφ}. Indeed, plugging (43) into (40) and removing the common terms in the numerator and the denominator, we easily get Eq. (44). We regularise our calculation by making the insertions corresponding to the state at w = ±iΛ and taking the limit Λ → ∞ only at the end. The first two factors of (44), related to the points {w_k^−} at past infinity, give Eq. (45); the other two factors in (44) provide the same result with the replacements x → −x and β → −β in Eq. (45). Eventually, multiplying the two, we have the very simple final result

f_n(α, x) = e^{iαβx}.   (46)

It immediately follows that, when the excited state is induced by the (holomorphic) vertex operator, the only effect is a shift of the mean charge (the cumulant generating function is g_n(α, x) = iαβx). In fact, while the average q̄ is not predictable by CFT, its shift from the ground state to a vertex state is universal. The fluctuations (and all the other cumulants), instead, are the same as those of the ground state. The resulting probability function p(q) is shown in Fig. 2, together with the numerical results for a free fermion model that will be described in the following (cf. Sec. 5.3). [Figure 2: Probability p(q) in the ground state and in the vertex state. The excitation corresponds to a particle created over the Fermi sea in the lattice model. The continuous lines are Gaussian probability distributions with variance given by Δq² = (1/π²) log[sin(k_F) (L/π) sin(πℓ/L)] + (1/π²)(1 + γ_E + log 2) (the non-universal O(L⁰) contribution can be found in [35,36]). The mean value of particles is q̄ = 100.5 in the excited state, while it is q̄ = 100 in the ground state.] For the symmetry resolved Rényi entropies, Eq. (46) implies that S_n^{V_β}(q − q̄_β) = S_n^{GS}(q − q̄_{GS}) (where q̄_β = q̄_{GS} + βx and q̄_{GS} are the mean values of Q_A in the vertex and ground state, respectively). In particular, since equipartition holds for the leading CFT terms, it remains valid for these excited states.

Derivative operator

In this subsection we consider the other primary operator of the compact boson, namely Υ = i∂φ. For the function f_n(α, x) generated by the derivative, we need a general expression for the mixed correlation of two vertex operators and an arbitrary number of derivative operators (Eq. (47)), evaluated at β = −α. A very useful and standard trick to calculate this kind of correlations is to exploit the representation of i∂φ as the derivative of a vertex operator with infinitesimal charge (Eq. (48)), which allows us to rewrite the desired correlation (47) in terms of derivatives of the correlation function of vertex operators in Eq. (43), as in Eq. (49).
For any given n ∈ ℕ, Eq. (49) can be explicitly evaluated. However, in view of the analytic continuation to non-integer n, we are looking for a general expression as a function of n, which is not easily read off from Eq. (49). We temporarily fix K = 1 in order to have more compact formulas during the course of the calculation; in the final result it is enough to replace α with α√K to get the result for generic K. In order to understand the general structure of this correlator, it is instructive to look first at the simplest case n = 1, which is deduced from the four-point function of the vertex operators; after taking the derivatives and the limits ε_i → 0, it reads as in Eq. (50). We now make the following observations:

• the contribution involving ζ_1 and ζ_2 factorises;
• the contributions which involve ζ_1 and z_i come from (1/ε_i) ∂_{z_i} ⟨V_α(ζ_1) V_{ε_i}(z_i)⟩ (and similarly for ζ_2, with α → β);
• the term involving z_i and z_j (z_i ≠ z_j) is the propagator ⟨(i∂φ)(z_i) (i∂φ)(z_j)⟩;
• every z_i is connected either to another z_j or to a ζ_j.

It is clear that these observations, made at n = 1, are actually true for any n and follow directly from the factorisation of the vertex correlation function (43) and the product rule of differentiation. Thus, for general n, summing up all the ways in which the points can be connected with the rules above, we obtain the desired correlation function. Moreover, we are interested in the case ζ_1 = i∞, ζ_2 = −i∞ and β = −α, when the contributions connecting the ζ_i and the z_i simplify as in Eq. (51). Putting all these combinatorial pieces together, it is easy to realise that the desired correlation function can be written in terms of the characteristic polynomial P_M(λ) of a matrix M whose elements are given in Eq. (53), as in Eq. (52). From a rigorous point of view, Eq. (52) may also be proven by induction, but this is not very instructive. Notice that for α = 0 it reduces to the result for the entanglement entropies in the derivative state obtained in Refs. [22,23]. Although the expression for P_M(λ) is direct and simply implemented for any finite integer n, it is desirable to write down a more explicit form that eventually can be analytically continued. Indeed, such an explicit expression can be obtained by generalising the calculation of the same characteristic polynomial at λ = 0, i.e. P_M(0) = det(M), presented in Ref. [27]. The calculation is cumbersome but straightforward; hence the details of the derivation are reported in Appendix A. Combining the results in the appendix for P_M(iα) with the other correlation functions entering f_n(α, x), cf. Eq. (40), f_n reduces to a polynomial of degree 2n in α for any integer n (Eq. (54)). This expression is easily analytically continued to arbitrary non-integer values of n (Eq. (55)), where one just uses repeatedly the analytic continuation of the factorial, Γ(n + 1) = n!. Eq. (55) provides all the connected moments of the generalised probability p_n^{i∂φ}(q), as in Eq. (42): first, the shift of the mean value vanishes, ⟨q⟩_n^{i∂φ} − ⟨q⟩_n^{GS} = 0; then, the excess of variance is given by Eq. (56), where the first line is valid for integer n and the second one is the analytic continuation in terms of the digamma function ψ(z) (the logarithmic derivative of the Γ function). Note that, as already stressed for a general state (cf. Eq. (42)), the variance excess is universal for any n. More generally, all the differences of cumulants between the excited state and the ground state are universal (cf. Eq. (42)). When n = 1, we have the very compact result of Eq. (57).
[Figure 3: Left panel: the excess of variance as a function of x for various n. The behaviour as a function of n is rather peculiar, since the various curves cross, signalling a behaviour that is non-uniform and non-monotonic in n and x. Notice that the curves shrink quickly as n increases and, in fact, the limit for n → ∞ is a discontinuous function equal to 0 for all x, except for x = 1/2, where the limit is 1/4. This discontinuous function is expected to lead to very strong finite-size effects. Right panel: the derivative of the excess of variance with respect to n at n = 1. This function, as we shall see, enters the symmetry resolved von Neumann entropy. It is non-monotonic in the interval x ∈ [0, 1/2] and it changes sign, a rather unusual shape which leads to non-uniform finite-size corrections.]

From charged moments to symmetry resolution

From the knowledge of the universal function f_n(α, x), we straightforwardly get the generalised moment generating function p_n^{i∂φ}(α, x) = p_n^{GS}(α, x) f_n(α, x), cf. Eq. (38). However, the computation of the symmetry resolved entanglement entropies S_n(q) requires the knowledge of its Fourier transform, p_n^{i∂φ}(q). Again, for conciseness of the formulas, we set K = 1 hereafter. For different K, the rescaling α → √K α leads to p_n(Δq) → p_n(Δq/√K)/√K.

It is instructive to explore first what happens for n = 1. From Eq. (54) we read the generating function at n = 1, Eq. (58), where σ_0² = Δq²_{GS} at n = 1 and σ_1 is given in Eq. (57). This generating function is even, so all odd cumulants are zero. Instead, the excess of even cumulants κ_k^{i∂φ} with k ≥ 4, given in Eq. (59), is non-zero, universal, and of order one. We recall that in the ground state, within CFT, only the variance is non-zero, because the probability p_1^{GS}(q) is Gaussian. However, any microscopic model has O(1) non-universal cumulants (see e.g. [35,36]); the meaning of Eq. (59) is that κ_{2k}^{i∂φ} − κ_{2k}^{GS} is universal, although it is a difference of two O(1) terms. The non-Gaussian nature of p_1^{i∂φ}(q) and the universality of the difference of higher cumulants is an important observation that, to the best of our knowledge, has not been made in the past. Unfortunately, since the variance of the distribution is dominated by the ground-state value σ_0², which is proportional to log ℓ, all these nice O(1) universal non-Gaussian effects are subleading corrections to the ground-state Gaussian behaviour (this is highlighted by the behaviour of the kurtosis, scaling as ∝ (log ℓ)^{−2}). Taking the Fourier transform of Eq. (58), we get the probability p_1^{i∂φ}(q) (Eq. (60)), which clearly is not Gaussian. This is shown graphically in Fig. 4 (together with the data for a real system, discussed in the next subsection, to report realistic values for the non-universal constants). [Figure 4: While for the ground state the probability distribution is Gaussian, for the excited state this is no longer the case.] Anyhow, the leading deviation from Gaussianity is proportional to (σ_1²/σ_0²)² ∝ (log ℓ)^{−2}: although this is only a subleading correction to the scaling for large ℓ, it decays extremely slowly with ℓ, and its effects are rather strong and visible also in Fig. 4.

Let us now move to a generic n. When n ∈ ℕ, we have shown that f_n(α, x) is an even polynomial in α (cf. Eq. (54)). We then just need the Fourier transform of α^{2k} e^{−σ²α²/2},

∫ (dα/2π) α^{2k} e^{−σ²α²/2} e^{−iqα} = (−1)^k (2σ²)^{−k} H_{2k}(q/(√2 σ)) e^{−q²/(2σ²)} / (√(2π) σ),   (61)

where H_{2k}(x) is the 2k-th Hermite polynomial. Thus, as long as n is an integer, p_n^{i∂φ}(q) is the sum of a finite number of terms of the form of the Fourier transforms in Eq. (61), and all the universal even higher cumulants can be calculated, although their full analytic expression is unwieldy for n > 1.
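The identity (61) can be checked numerically; the script below compares a direct quadrature of the left-hand side with the Hermite-polynomial expression on the right-hand side, for illustrative values of σ, k and q (H_{2k} is the physicists' Hermite polynomial, evaluated via scipy.special.eval_hermite).

```python
# Numerical check of the Fourier-transform identity (61).
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_hermite

sigma, k, q = 1.3, 2, 0.7          # illustrative values

def integrand(a):
    # even part of alpha^(2k) exp(-sigma^2 alpha^2 / 2) exp(-i q alpha)
    return a**(2 * k) * np.exp(-sigma**2 * a**2 / 2) * np.cos(q * a)

lhs = quad(integrand, -np.inf, np.inf)[0] / (2 * np.pi)
gauss = np.exp(-q**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
rhs = (-1)**k * (2 * sigma**2)**(-k) * eval_hermite(2 * k, q / (np.sqrt(2) * sigma)) * gauss
print(lhs, rhs)                     # agree to quadrature accuracy
```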
For fixed non-integer n, we can numerically perform the Fourier transform of p_n(α) to extract the probability p_n(q) and, from this, the symmetry resolved entanglement of our interest, but we do not get closed explicit expressions for it. The main drawback of this way of proceeding is that we do not have an analytic formula on which to perform the analytic continuation for the von Neumann entropy. A possible approximation allowing to handle the problem analytically is to keep only the first order in α² in the expansion of f_n(α), i.e. to approximate the generating function as

p_n^{i∂φ}(α) ≃ e^{−(Δq²_{GS,n} + d_n) α²/2},   (62)

where d_n ≡ Δq²_{i∂φ,n} − Δq²_{GS,n}. This approximation has the advantage that it is well defined for any n, even non-integer. It is also motivated by the fact that the neglected contributions ∝ α^{2k}, after Fourier transform, provide terms in the symmetry resolved entropies which are suppressed as higher powers of (log ℓ)^{−1}. Unfortunately, such log-corrections are, for generic n, too slow to be ignored. However, close to n = 1 this quadratic approximation works very well, since exactly at n = 1 it becomes exact, see Eq. (58). Within this approximation, we obtain an analytic result for the symmetry resolved entanglement entropies, Eq. (63). The resulting expression is rather cumbersome; in the physical regime with Δq of order 1, it contains a q-independent shift ΔS whose precise form, reported in Eq. (64), is not very illuminating. We stress once again that the approximation (62) is not very effective for n > 1 at the values of ℓ that are usually accessed by numerical calculations, but works very well at n = 1, as we shall see. A very important aspect of Eq. (63) is the presence of a universal term that breaks the equipartition of entanglement at order (log ℓ)^{−2}. Hence, the term breaking equipartition is of the same order as the non-universal cutoff term in the variance, cf. Eq. (35). However, the latter may be subtracted by considering differences with the ground state values.

Numerical tests for free fermions

We now provide numerical tests of the universal CFT results of the previous subsections, using free-fermion techniques [37-39]. We consider the tight-binding model, i.e. a 1D chain of free fermions described by the hamiltonian

H = −(1/2) Σ_j (c_j^† c_{j+1} + c_{j+1}^† c_j) + h Σ_j c_j^† c_j,   (65)

with c_j^†, c_j a set of lattice fermionic ladder operators, satisfying the anticommutation relations {c_i, c_j^†} = δ_{ij}, and h the chemical potential. It is well known that, by a Jordan-Wigner transformation, this model is mapped to the XX spin chain [40] and that (the Jordan-Wigner transformation being local within a block) the fermion entanglement is the same as the spin one [41,42]. For this reason, we will also refer to the Hamiltonian (65) as the XX spin chain. The ground state of the Hamiltonian (65) is a Fermi sea with Fermi momentum k_F = arccos|h|. The U(1) symmetry is related to the conservation of the number of fermions, N = Σ_j c_j^† c_j. We are interested in the spatial bipartition of the system where A is given by ℓ contiguous lattice sites. The RDM is Gaussian and it can be written as [39]

ρ_A = e^{−H_A} / tr(e^{−H_A}), with H_A = Σ_{ij} c_i^† [log((1 − C_A)/C_A)]_{ij} c_j,   (66)

where the ℓ × ℓ matrix C_A ≡ ⟨c_i^† c_j⟩ (with i, j ∈ A) is the correlation matrix restricted to A. For an infinite chain (L = ∞), in the ground state, C_A has the elements

(C_A)_{ij} = sin[k_F(i − j)] / [π(i − j)].   (67)

However, in our case we are interested in finite L, when the excited states have a finite excess of entropy (which instead vanishes in the thermodynamic limit).
In this case, it holds

(C_A)_{ij} = sin[πN(i − j)/L] / [L sin(π(i − j)/L)],   (68)

with N the number of fermions in the ground state. Using (66) together with (68), a system with 2^ℓ degrees of freedom can be studied through the numerical diagonalisation of an ℓ × ℓ matrix, a computationally undemanding task, especially when compared to the exact diagonalisation of the entire Hamiltonian. In general, the Wick theorem allows us to apply this method not only to the ground state, but to all excited states (in the Fock basis) which are Gaussian. The low-lying states are excitations of particles and holes above or below the Fermi sea, and correspond to the primary operators of the Luttinger liquid via the state-operator correspondence. We briefly recall the bosonisation dictionary in Fig. 5 (for further details see, e.g., [22]). [Figure 5: Bosonisation dictionary for the low-energy excitations of the free-fermion chain with Hamiltonian (65). In this notation, Ψ_R (Ψ_R^†) is the annihilation (creation) operator of the continuum theory at the right Fermi momentum (on the lattice, it corresponds to c_{k_F−π/L} (c_{k_F+π/L}^†)).] The two states we consider are (i) the vertex operator e^{iφ}, which corresponds to a particle excitation at the (right) Fermi point (cf. Fig. 5), with correlation matrix

(C_A^{e^{iφ}})_{ij} = (C_A)_{ij} + (1/L) e^{i(k_F + π/L)(i − j)},   (69)

and (ii) the derivative operator i∂φ(z), which corresponds to a (right) particle-hole excitation (cf. Fig. 5), with correlation matrix

(C_A^{i∂φ})_{ij} = (C_A)_{ij} + (1/L) [e^{i(k_F + π/L)(i − j)} − e^{i(k_F − π/L)(i − j)}].   (70)

Notice that both Eqs. (69) and (70) reduce to C_A^{GS} in the thermodynamic limit L → ∞, as they should.

Symmetry resolved moments and their generating function

In a general state, any local operator within A can be written in terms of ρ_A and hence, thanks to Eq. (66), in terms of C_A. In particular, the entanglement spectrum is only a function of the spectrum of C_A, which we denote by {ν_k}. For example, the total Rényi entropies are written as [39]

S_n = (1/(1−n)) Σ_k log[ν_k^n + (1 − ν_k)^n].   (71)

Similarly, the charged moments are [10]

tr(ρ_A^n e^{iαQ_A}) = ∏_k [ν_k^n e^{iα} + (1 − ν_k)^n],   (72)

where we recall that Q_A = Σ_{j∈A} c_j^† c_j. When calculating moments and cumulants of the probabilities p_n(q, x), it is not necessary to calculate first the probability using Eq. (72) and, from it, the moments. It is more effective to write the moments directly in terms of the eigenvalues ν_k, calculating the derivatives with respect to α of the generating function (72) written as a sum over ν_k (this is already routinely done for n = 1, e.g., in Refs. [35,36]). As an example, the variance of p_n is

Δq_n² = Σ_k ν_k^n (1 − ν_k)^n / [ν_k^n + (1 − ν_k)^n]².   (73)

Similar formulas for higher moments are straightforwardly written down.

We start our numerical analysis of the free fermion chain from the variance in the excited state: we focus on the excess of variance between an excited state and the ground state because, as we have shown (cf. Eq. (42)), it is universal and does not depend on any microscopic detail. For the vertex operator, the CFT prediction (46) implies that all moments, for any n, are the same as in the ground state. For the free fermion chain, this property trivially follows from the fact that the excited state remains a compact Fermi sea, just with one particle more at the right Fermi point [23]. In fact, the operator e^{iφ} corresponds to a particle excitation in the right sector of the Fermi sea, and the meaning of Eq. (46) is that one finds the exceeding particle in A with a probability ℓ/L. Therefore, we now focus on the excess of variance of the derivative operator. [Figure 7: Corrections to the scaling for the excess of variance d_n = Δq²_{i∂φ,n} − Δq²_{GS,n} of the particle-hole state with respect to the ground state. We report the numerical data minus the leading CFT prediction, plotted against the expected scaling of the corrections, ℓ^{−2/n}, for n = 1 (left) and 1.5 (right), for three values of the ratio x = ℓ/L. The straight lines are guides to the eye with the expected asymptotic behaviour.]
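Before turning to the data, here is a minimal numpy sketch of the lattice computation just described: it builds the restricted correlation matrix as a sum over occupied momenta and evaluates the charged moments and the variance from its eigenvalues via Eqs. (72)-(73). The symmetric momentum grid, the near-half filling and the system sizes are illustrative choices, and differ slightly from the offset grid implicit in Eqs. (69)-(70); the vertex state would instead append one extra occupied mode.

```python
# Charged moments of free fermions from the restricted correlation matrix.
import numpy as np

L, ell = 100, 50
sites = np.arange(ell)
modes = np.arange(-L // 2, L // 2)              # momenta k_m = 2*pi*m/L
occ_gs = modes[np.abs(modes) <= L // 4]         # roughly half filling

def corr_matrix(occupied):
    k = 2 * np.pi * occupied / L
    phases = np.exp(1j * np.outer(sites, k))    # e^{i k j}
    return (phases @ phases.conj().T) / L       # (C_A)_jl = (1/L) sum_k e^{ik(j-l)}

def eigs(C):
    return np.clip(np.linalg.eigvalsh(C), 1e-14, 1 - 1e-14)

def charged_moment(nu, n, alpha):
    # Eq. (72): tr(rho_A^n e^{i alpha Q_A}) = prod_k [nu_k^n e^{i alpha} + (1-nu_k)^n]
    return np.prod(nu**n * np.exp(1j * alpha) + (1 - nu)**n)

def variance(nu, n):
    # Eq. (73)
    w = nu**n / (nu**n + (1 - nu)**n)
    return np.sum(w * (1 - w))

# particle-hole excitation: move the topmost occupied mode up by one step
occ_ph = occ_gs.copy()
occ_ph[occ_ph == occ_gs.max()] += 1

nu_gs, nu_ph = eigs(corr_matrix(occ_gs)), eigs(corr_matrix(occ_ph))
for n in (1, 2):
    d_n = variance(nu_ph, n) - variance(nu_gs, n)   # excess of variance
    print(n, d_n, charged_moment(nu_ph, n, 0.5))
```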
The numerically calculated data for n = 1, 1.5, 2, 2.5, and for several values of L in the range from L = 50 to L = 400, are reported as a function of x = ℓ/L in Figure 6. The comparison between the CFT prediction (56) and the numerics is shown in the same figure. It is clear that, when increasing the system size, the points get closer to the analytical curve, valid in the thermodynamic limit, in spite of the presence of oscillating corrections to the scaling, with amplitudes that clearly decrease with system size. These oscillations get larger for larger values of the Rényi index n, and they are absent at n = 1, when, for L as small as 100, the numerical data are perfectly on top of the CFT prediction. Such oscillations do not come unexpectedly: they are just the well-known unusual corrections to the scaling. In the ground state, they have been fully characterised both in CFT [44,45] and in microscopic models [46-52] for the total Rényi entropies (and are known to be absent, instead, in the case n = 1, the von Neumann entropy). They are present not only in the ground state, but in excited states as well [23,53]; indeed, they are related to the structure of the conical singularities in CFT [44] (i.e. to the Riemann surface R_n), are not affected by possible operator insertions, and so are independent of the state (the amplitude, however, depends in a complicated and yet unknown manner on the state itself). In Refs. [14,15], through the (generalised) Fisher-Hartwig conjecture, it has been shown that for free fermions such corrections for the charged moments scale like L^{−(2/n)(1−α/π)}, replacing the well-known decay L^{−2/n} at α = 0; hence they become larger as α moves away from zero (and the CFT argument of Ref. [44] is easily modified to predict such a new decay). Anyhow, since the variance is defined at α = 0, the corrections in Figure 6 should decay as L^{−2/n}. This scaling is explicitly tested in Figure 7, where we show that the difference between the numerical data and the CFT prediction for the excess of variance indeed decays as ℓ^{−2/n} (at fixed x we can replace L with ℓ).

We now move to the analysis of the charged moments, or generalised cumulant generating functions, p_n(α, x). Again, for the vertex operator they are trivially equal to the ground state ones, apart from a phase (see Eq. (46) and Fig. 2). Hence, we focus here on the non-trivial case of the derivative operator. The numerical results for the function f_n(α, x) for n = 1, 1.5, 2, 2.5 are reported in Fig. 8 (for n = 1, these are just the data for the generating function of the full counting statistics of the charge, which, surprisingly, has not yet been considered in the literature). [Figure 8: Numerical data for f_n(α, x) for the particle-hole excitation in the XX chain (symbols), compared with the CFT prediction for the derivative operator, cf. Eqs. (54) and (55). We report results for n = 1, 1.5, 2, 2.5 and x = 1/4, 1/3, 1/2, showing data for several values of ℓ up to 1000.] The agreement between the CFT prediction (55) and the numerical data is excellent at small α, while it gets worse for larger values of α and n.
This is not surprising; as already discussed, in the ground state the corrections to the scaling decay as L^{−(2/n)(1−α/π)} [14,15], becoming larger as α moves away from zero; the same remains true for excited states, since the insertion of operators does not alter the structure of the Riemann surface (as the flux instead does [10]). In fact, even in the thermodynamic limit (ℓ, L → ∞, with x kept fixed), one expects convergence only in the region α ∈ [−π, π]; on a lattice with lattice spacing a (which we set to 1), by definition (cf. Eq. (72)) f_n(α, x) is periodic with period 2π/a, but this cannot be captured by the CFT, which works in the limit a → 0; the entire f_n(α, x) for any α ∈ ℝ can be reconstructed by periodically continuing it outside the domain [−π, π] (see [15] for details). Anyhow, this effect does not affect the behaviour of the cumulants of the charge Q_A, which are obtained as derivatives with respect to α evaluated at α = 0. Finally, we recall that for n → ∞, f_n(α, x) becomes discontinuous, inducing large finite-size effects for large n.

We can finally discuss the generating functions p_n(α, x) themselves. Although p_n(α) is just the product of f_n(α, x) and the ground state distribution p_n^{GS}(α, x), it is still worth comparing the numerical data with the CFT, for a twofold reason: (i) the CFT prediction for p_n(α, x) displays an explicit dependence on ℓ through p_n^{GS}(α, x), and in particular through its variance; (ii) since p_n(α, x) decays as a Gaussian as α moves away from 0, the large deviations observed for f_n(α, x) in Fig. 8 may get suppressed by multiplying it with the ground-state Gaussian distribution. The data for p_n(α, x) are reported in Fig. 9 for n = 1, 2, 3, 4. [Figure 9: Numerical data for the generating function p_n(α) for the particle-hole excitation in the XX chain (symbols). The full lines are the CFT predictions for the derivative operator, Eq. (38) with (54); for the ground-state variance we use the exact results from Ref. [14]. Here we consider n = 1, 2, 3, 4 (from top to bottom) and x = 1/2, 1/3, 1/4 (from left to right), reporting several values of ℓ up to 1000; the CFT prediction also depends on ℓ through the variance of the ground state (the curves at different ℓ follow the same colour code as the data). Again, the agreement is very good for small α, but it worsens as α gets closer to ±π and as n gets larger.] We clearly observe that the matching between the CFT predictions and the numerical data is improved compared to the one for f_n(α, x), as a consequence of the multiplication by the Gaussian. Anyhow, for large n and α, clear deviations are still evident, as expected.

Symmetry resolved entropies

In this section we compute the symmetry resolved entanglement. As a first step, we must compute the probability distributions p_n(q). The Fourier transform of (72) gives (up to the normalisation factor) the probabilities p_n(q). An efficient way to implement such a Fourier transform is to write tr(ρ_A^n e^{iαQ_A}) explicitly as a polynomial in e^{iα}: the contribution of the sector with Q_A = q is given by the coefficient of the e^{iαq} term. For n ≠ 1, one can then obtain S_n(q) using Eq. (22), computing separately S_n, p_n(q) and p(q). [Figure 10: Numerical data for the symmetry resolved entanglement entropies S_n(q) for the particle-hole excitation in the XX chain (symbols), as a function of ℓ. The full lines are the CFT predictions for the derivative operator. For the ground-state S_n(q) we use the exact results from Ref. [14]. Here we consider n = 1, 2, 3, 4 (from top to bottom) and x = 1/2, 1/3, 1/4 (from left to right). In each panel we report three curves, with Δq ≡ q − q̄ = 0, 1, 2.]
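The polynomial trick just described can be implemented in a few lines: multiplying the per-mode factors (1 − ν_k)^n + ν_k^n z, with z = e^{iα}, yields a polynomial whose coefficient of z^q is tr(Π_q ρ_A^n), with no numerical integration. In the sketch below, the eigenvalues ν_k are toy values; in practice they would come from the correlation matrix, as in the previous sketch.

```python
# Sector-resolved moments tr(Pi_q rho_A^n) via polynomial expansion in z = e^{i alpha}.
import numpy as np

def sector_moments(nu, n):
    coeffs = np.array([1.0])                     # the constant polynomial "1"
    for v in nu:
        factor = np.array([(1 - v)**n, v**n])    # (1 - nu_k)^n + nu_k^n * z
        coeffs = np.convolve(coeffs, factor)     # polynomial product
    return coeffs                                # coeffs[q] = tr(Pi_q rho_A^n)

nu = np.array([0.9, 0.7, 0.2, 0.05])             # toy correlation-matrix eigenvalues
m1 = sector_moments(nu, 1)                       # the probabilities p(q)
m2 = sector_moments(nu, 2)
S2q = -np.log(m2 / m1**2)                        # Eq. (16) at n = 2, via Eq. (22)
print(m1, m1.sum())                              # p(q) sums to 1
print(S2q)
```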
The case n = 1 is singular; nevertheless, the Fourier transform of −tr(ρ_A log ρ_A e^{iαQ_A}) provides an efficient way to compute S p(q) − ∂_n p_n(q)|_{n=1}, and therefore S(q) (using Eq. (23)). We do not discuss here the intermediate results for p_n(q), since they depend on too many variables (x, q, n, ℓ), which are difficult to put together in a clear plot. Hence, we just discuss the symmetry resolved entropies. The numerical data for S_n(q) as a function of ℓ, for a few values of x = 1/2, 1/3, 1/4 and n = 1, 2, 3, 4, are reported in Fig. 10. We focus on the most probable values of q, with Δq ≡ q − q̄ = 0, 1, 2. In the plots, the CFT predictions for n = 2, 3, 4 are obtained as numerical Fourier transforms of the exact p_n(α) shown in Fig. 9. This is not possible for n = 1; in this case we employ the quadratic approximation (62), which can be analytically continued; within this approximation, the von Neumann entropy is just the limit for n → 1 of Eq. (63). We see that the agreement of the CFT predictions with the numerical data in Fig. 10 is very good for all the values of the parameters we considered.

Discussion & Outlook

In this manuscript, we fully characterised the symmetry resolved entanglement of excited states of two-dimensional CFTs generated by primary operators. The first main result is Eq. (40) for the scaling function of the charged moments in a general theory with a U(1) symmetry. This expression has then been explicitly evaluated in the free compact boson theory (Luttinger liquid) for the vertex and the derivative operators. While for the vertex the outcome is trivial, for the derivative the final result (55) is highly non-obvious, with many interesting physical features discussed in the text. In particular, we found that all the differences of cumulants between the excited states and the ground state are universal, i.e. they do not depend on the microscopic details and are solely fixed by conformal invariance. From the Fourier transform of these charged moments, we extract the symmetry resolved entanglement Rényi entropies, stressing their universal aspects, such as a term breaking entanglement equipartition at order (log ℓ)^{−2} within CFT. We tested our analytic predictions against exact numerical calculations in the XX spin chain, finding a perfect agreement. Incidentally, our results for n = 1 are the full counting statistics (FCS) of the charge operator within an interval in these excited states of the CFT. To the best of our knowledge, these findings for the FCS and for the related probability are also new, and generalise the results for the ground state [63,64].

While here we focused on low-lying excited states induced by primary operators in CFT, the same method can be applied to any excited state. In this direction, it would be interesting to study excited states generated, for instance, by descendant operators. Results are already available for the total entanglement and Rényi entropies [28-30], and therefore they can be generalised to the symmetry resolved ones. However, working out the explicit expressions, as usual, may become quite cumbersome. Another natural development would be to study the symmetry resolved entanglement for excited states that are in the middle of the many-body spectrum.
Discussion & Outlook

In this manuscript, we fully characterised the symmetry resolved entanglement of excited states of two-dimensional CFTs generated by primary operators. The first main result is Eq. (40) for the scaling function of the charged moments in a general theory with a U(1) symmetry. This expression has then been explicitly evaluated in the free compact boson theory (Luttinger liquid) for the vertex and the derivative operators. While for the vertex the outcome is trivial, for the derivative the final result (55) is highly non-obvious, with many interesting physical features discussed in the text. In particular, we found that all the differences of cumulants between the excited states and the ground state are universal, i.e. they do not depend on the microscopic details and are solely fixed by conformal invariance. From the Fourier transform of these charged moments, we extract the symmetry resolved entanglement Rényi entropies, stressing their universal aspects, such as a term breaking entanglement equipartition at order $(\log \ell)^{-2}$ within CFT. We tested our analytic predictions against exact numerical calculations in the XX spin chain, finding a perfect agreement. Incidentally, our results for n = 1 are the full counting statistics (FCS) of the charge operator within an interval in these excited states of the CFT. To the best of our knowledge, these findings for the FCS and for the related probability are also new and generalise the results for the ground state [63,64].

While here we focused on low-lying excited states induced by primary operators in CFT, the same method can be applied to any excited state. In this direction, it would be interesting to study excited states generated, for instance, by descendant operators. Results are already available for the total entanglement and Rényi entropies [28][29][30], and therefore they can be generalised to the symmetry resolved ones. However, working out the explicit expressions, as usual, may become quite cumbersome. Another natural development would be to study the symmetry resolved entanglement for excited states that are in the middle of the many-body spectrum. These are characterised by a volume law [65][66][67][68] and their physics is closely related to the (generalised) eigenstate thermalisation hypothesis [69][70][71][72]. Our results can also be extended to work out other entanglement-related quantities in their symmetry resolved fashion. For example, a natural extension would be to consider the relative entropy or the trace distance (both measuring distances between density matrices) by combining the results in [10] with those in Refs. [73][74][75][76], where similar replica tricks for such quantities have been introduced. Different symmetry resolved entanglement measures, such as the logarithmic negativity in the ground state, have been worked out as well [11], again starting from the proper replica trick [77]. In principle, also for the negativity and its symmetry resolution, one might wonder how to adapt the framework to excited states. However, in this case some technical issues arise: one should deal with correlation functions on more complicated Riemann surfaces, which cannot be mapped to the complex plane. Still, the problem could in principle be approached through the techniques of Ref. [78] or approximation methods, such as the ones of Refs. [79,80], based on the operator product expansion and recursive formulas for conformal blocks.

In order to match the notations of [27], we rewrite the matrix M in (53) in block form in terms of two matrices A and B. The characteristic polynomial then takes the form

$P_M(\lambda) = \det(M - \lambda) = \frac{1}{4^n}\,\det\!\left[(A - 2\lambda)^2 + B^T B\right].$

A direct calculation shows that $B^T B = \alpha \cdot I$ with $\alpha = \frac{n^2}{\sin^2(\pi x)}$. Moreover, the expansion of the Newton polynomial relates the coefficients of $P_M(\lambda)$ to the traces of the powers of A, which are

$\mathrm{tr}\,A^{2k} = 2(-1)^k \sum_{p=1}^{n/2} (n - 2p + 1)^{2k}.$
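The passage from the traces $\mathrm{tr}\,A^{2k}$ to the characteristic polynomial relies on Newton's identities, $e_k = \frac{1}{k}\sum_{i=1}^{k}(-1)^{i-1} e_{k-i}\,p_i$ with $p_i = \mathrm{tr}\,A^i$. A generic numerical sketch of that reconstruction (not tied to the specific A of this appendix):

```python
import numpy as np

def charpoly_from_power_sums(p):
    """Monic characteristic polynomial coefficients (descending powers of
    lambda) from power sums p[i-1] = tr(A^i), via Newton's identities."""
    n = len(p)
    e = [1.0]  # elementary symmetric polynomials e_0, e_1, ..., e_n
    for k in range(1, n + 1):
        s = sum((-1) ** (i - 1) * e[k - i] * p[i - 1] for i in range(1, k + 1))
        e.append(s / k)
    # det(lambda I - A) = lambda^n - e_1 lambda^{n-1} + e_2 lambda^{n-2} - ...
    return np.array([(-1) ** k * e[k] for k in range(n + 1)])

# Sanity check against numpy's characteristic polynomial of a random matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
p = [np.trace(np.linalg.matrix_power(A, i)) for i in range(1, 7)]
print(np.allclose(charpoly_from_power_sums(p), np.poly(A)))  # True
```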
Evaluation of Chitosan Based Polymeric Matrices for Sustained Stomach Specific Delivery of Propranolol Hydrochloride

The objective of the present investigation was to explore the potential of Chitosan based polymeric matrices as carriers for sustained stomach specific delivery of the model drug Propranolol Hydrochloride. Briefly, single unit hydrodynamically balanced (HBS) capsule formulations were prepared by encapsulating in hard gelatin capsules intimately mixed physical mixtures of drug and cationic low molecular weight Chitosan (LMCH) in combination with either anionic medium viscosity sodium alginate (MSA) or sodium carboxymethylcellulose (CMCNa). The effect of incorporation of nonionic polymers, namely, tamarind seed gum (TSG) and microcrystalline cellulose (MCCP), was also investigated. It was observed that HBS formulations remained buoyant for up to 6 h in 0.1 M HCl when the LMCH : anionic/nonionic polymer ratio was at least 4 : 1. It was also observed that LMCH formed a polyelectrolyte complex (PEC) with MSA (4 : 1.5 ratio) and CMCNa (4 : 1 ratio) in situ during the gelation of HBS formulations in 0.1 M HCl. The retardation in drug release was attributed to the PEC formation between LMCH and MSA/CMCNa. Incorporation of MCCP (rapid gel formation) and TSG (plug formation) was found to be innovative. From the data, it is suggested that Chitosan based polymeric matrices may constitute an excellent carrier for stomach specific drug delivery.

Introduction

It has always been difficult to engineer polymers or polymeric compositions for stomach specific drug delivery [1][2][3]. The engineered polymeric compositions should have attributes that are pertinent to a high level of gastric retention, generally 5-6 h [4,5]; release the drug at zero order, that is, at a constant rate [6,7]; and degrade in vivo to smaller fragments, which can then be excreted from the body. Their degradation products must be nontoxic and must not create an inflammatory response; finally, they should degrade within a reasonable period of time [1,2]. Although polymers of various origins are available and used in drug delivery systems, natural polysaccharides find much application because of their favorable characteristics such as abundant availability, low cost, nontoxicity, noncarcinogenicity, biodegradability and, more importantly, biocompatibility. Of these, Chitosan has received a great deal of attention due to its excellent biocompatibility, biodegradability, and nontoxicity. Chitosan fulfills all the polymeric attributes that are pertinent to a high level of retention at applied and targeted sites via mucoadhesive bonds. The mucoadhesive property of Chitosan is due to the electrostatic interaction of the protonated amino groups in Chitosan with negatively charged sialic acid residues in mucin (the glycoprotein that composes the mucus). This interaction takes place very close to the mucosal surface and thus possesses the potential to confer significant gastroretention to the hydrogel. Additionally, the hydroxyl and amino groups may interact with mucus via hydrogen bonding. The linearity of Chitosan molecules also ensures sufficient flexibility for interpenetration. Further, it also possesses cell-binding activity due to its cationic polyelectrolyte structure and to the negative charge of the cell surface [8][9][10][11]. Although this material has already been extensively investigated in the design of different types of drug delivery systems, it is still less explored for stomach specific drug delivery systems. The objective of the present
investigation is to explore the potential of this wonderful material in combination with anionic/nonionic natural polysaccharides like sodium alginate, sodium carboxymethylcellulose, microcrystalline cellulose, and tamarind seed gum in the fabrication of stomach specific single unit HBS capsule formulations using Propranolol Hydrochloride as the model drug. The proposed combinations of cationic (Chitosan) and anionic polymers (sodium alginate/carboxymethylcellulose) are expected to form a low density hydrogel that remains buoyant on the acidic dissolution medium (0.1 M HCl) and sustains the release of the highly hydrophilic model drug by in situ polyelectrolyte complex formation.

Preparation of HBS Capsules Containing Propranolol HCl (PHCL). Single unit capsules were prepared by physically blending PHCL with the required quantity of polymers as mentioned in Table 1, using a double cone blender for 15 min, followed by encapsulation into hard gelatin capsules [12].

In Vitro Evaluation of HBS Capsule Formulations

(1) In Vitro Buoyancy Studies [12,13]. Prepared HBS capsules were immersed in 0.1 M hydrochloric acid (pH 1.2, 37 ± 0.5 °C) in a USP paddle type apparatus at 50 rpm. The time for which the capsules remained buoyant was observed.

(2) Determination of Drug Contents of HBS Capsules [12,13]. PHCL contents were determined by emptying 10 HBS capsules from each formulation as completely as possible. A powder equivalent to the average weight was added to 100 mL 0.1 M HCl (pH 1.2, 37 ± 0.5 °C), followed by stirring for one hour at 500 rpm. The solution was filtered through a 0.45 μm membrane filter and diluted suitably, and the absorbance of the resultant solution was measured spectrophotometrically at 290 nm.

(3) In Vitro Drug Release Studies. In vitro release of PHCL from the HBS capsule formulations was performed in a USP dissolution apparatus type II at 50 rpm. Evaluation of drug release was performed using 900 mL 0.1 M HCl (pH 1.2) at 37 ± 0.5 °C. At predetermined intervals, a one mL aliquot was withdrawn and replenished with an equal volume of fresh dissolution medium. Withdrawn samples were analyzed spectrophotometrically at 290 nm.

Drug Release Mechanism. Different kinetic models such as zero order, first order, and square root (Higuchi) can be applied to the interpretation of drug release kinetics. A zero-order release refers to a uniform or nearly uniform rate of release of a drug from the solid dosage form after coming in contact with an aqueous environment, independent of the drug concentration in the dosage form during a given time period. Dosage forms with zero-order release generally provide maximum therapeutic value with minimal side effects. For many extended release formulations, the rate of drug release initially increases rapidly, followed by a decreased rate of drug release. This type of drug release is categorized as first-order release. Such dosage forms may not produce uniform concentration levels of the drug in the systemic circulation for a prolonged period of time. The Higuchi release equation predicts that the drug release is caused primarily by a diffusion mechanism [14]:

$Q = k_H \sqrt{t}$, (1)

where $Q$ is the amount of the drug released in time $t$ and $k_H$ is the release constant of the equation. The data were also subjected to the Korsmeyer-Peppas power law [15] as in (2).
The Korsmeyer-Peppas model provides an insight into the type of drug release mechanism taking place in swellable polymeric matrices:

$M_t/M_\infty = k t^n$, (2)

where $M_t/M_\infty$ is the fraction of drug released in time $t$, $k$ is the structural and geometric constant, and $n$, the release exponent, is estimated from a linear regression fit of the logarithmic release data. Practically, one has to use the first 60% of a release curve to determine the slope obtained from (2), regardless of the geometric shape of the delivery device. A good fit to the Korsmeyer-Peppas equation indicates the combined effect of diffusion and relaxation mechanisms in the release.

Statistical Analysis. The differences in average data were compared by simple one-way analysis of variance or Student's t-test (SigmaPlot 11).

Result and Discussion

Hydrodynamically balanced (HBS) capsule formulations are the simplest gastroretentive dosage forms. These systems are usually composed of hard gelatin capsules filled with a mixture of gel-forming polymeric substances and an active pharmaceutical ingredient. After immersion in simulated stomach fluid/acidic dissolution medium (pH 1.2) (in vitro) or swallowing (in vivo), a shell of swollen hydrogel is formed. It controls the release rate of the drug, and it maintains the appropriate integrity of the HBS and the low apparent density of the system, ensuring flotation. Such systems are best suited for drugs having better solubility in an acidic environment and for drugs having a specific site of absorption in the upper part of the small intestine [16].

In Vitro Buoyancy Studies. In vitro buoyancy studies were carried out in 0.1 M HCl (pH 1.2) maintained at 37 ± 0.5 °C using a USP dissolution apparatus type II at 50 rpm. For efficient buoyancy, swelling of the polymer is vital. LMCH contains –NH2 groups bound to the polymer chains. In the presence of the acidic gelation medium, the polymer chains in LMCH absorb dissolution medium, and the binding of H+ causes the polymer to swell (–NH3+). The air entrapped in the swollen polymeric network is expected to keep the density below unity, which ultimately confers buoyancy to the hydrogel for an extended period of time [17]. It was observed that the HBS capsule formulations prepared with PHCL and LMCH alone sank within 1 h. The poor buoyancy could be attributed to the weak gel network formed due to the presence of highly soluble PHCL (225 mg/mL in 0.1 M HCl at 20 °C) in the LMCH based formulation, which could not hold drug particles in the gel network. Therefore, to prolong the buoyancy, the auxiliary polymers (MSA, CMCNa, and TSG) were incorporated into the HBS capsule formulations. These auxiliary polymers are expected to counter the rapid erosion of the gel layer, thereby maintaining the integrity of the swollen hydrogel. All the formulations exhibited immediate buoyancy with no lag time (Table 2). HBS capsule formulations J1-J6 remained buoyant for up to 6 h, whereas formulations J7-J9 remained buoyant for 5 h (J7) and 4 h (J8 and J9), respectively.
It was also observed that the LMCH : auxiliary polymer ratio is critical for buoyancy. The formulations remained buoyant for a prolonged period when the LMCH : auxiliary polymer ratio was 4 : 1 (LMCH : MSA; LMCH : CMCNa; LMCH : MSA + TSG; LMCH : CMCNa + TSG). The addition of auxiliary polymer(s) resulted in an improvement in the swelling of the LMCH based formulations, which in turn resulted in an increase in bulk volume. The purpose of incorporating insoluble MCCP into the HBS formulation was to increase the porosity of the polymer matrix in order to improve the hydration and subsequent gel formation. It was observed that the whole polymeric matrix was hydrated within 15 min (J3 and J7), compared to 60 min in the case of formulation J1. This fast hydration resulted in rapid gelling, but there was no improvement in buoyancy (J3), and in the case of formulation J7 the buoyancy even decreased to 5 h. This could be attributed to the formation of a swollen hydrogel whose density is greater than unity, due to excessive imbibition of the aqueous acidic medium. Reversing the LMCH : MSA ratio (1 : 4, J8) or completely replacing LMCH with MSA (J9) resulted in decreased in vitro buoyancy. The observed behavior (J8 and J9) could be attributed to the poor swelling of MSA in the acidic dissolution medium, coupled with the high diffusional driving force exerted by highly hydrophilic PHCL, which resulted in rapid erosion of the gel layer leading to loss of gel integrity.

Determination of Drug Contents of HBS Capsule Formulations. The drug content determination test is done to ensure that each HBS capsule formulation contains an equal amount of drug. For this purpose, the encapsulated contents of 10 HBS capsules from each formulation were emptied as completely as possible. The contents so removed were then put into 100 mL 0.1 M HCl (pH 1.2, 37 ± 0.5 °C) and stirred for one hour at 500 rpm. The solution was filtered through a 0.45 μm membrane filter and diluted suitably, and the absorbance of the resultant solution was measured spectrophotometrically at 290 nm. Drug contents of the various formulations are given in Table 3. All capsule formulations were found to contain PHCL contents within limits [13].

In Vitro Drug Release Studies.
In the present study, we chose Chitosan because of its ability to release loaded drugs slowly in the stomach, since gel formation by cationic Chitosan is pronounced at acidic pH, which results in marked retardant effects on drug release [9,12]. During the preliminary formulation development studies, we investigated the feasibility of LMCH alone as a carrier for stomach specific delivery of the highly hydrophilic model drug. However, polymeric matrices composed of LMCH (200 mg) and PHCL (50 mg) could not float for more than one hour and released almost 80% of PHCL (burst release) before sinking in the acidic dissolution medium. This could be explained as follows: being cationic in nature, the swelling of Chitosan in the acidic gelation medium will be a more entropy-favored process, and as the number of ions within the hydrogel structure increases, more and more osmotic and electrostatic forces are created within the hydrogel structure [18]. This leads to increased dissolution medium (0.1 M HCl) uptake and forces a typical hydrogel to behave thermodynamically like a liquid as it occupies more space. Moreover, Propranolol is highly hydrophilic, which further reduces the strength of the aqueous gel layer due to the high diffusional driving force and the consequently increased erosion. As a result, the Chitosan hydrogel lost its integrity and became distorted, leading to burst release.

Considering the above observation, in the present study, to improve the buoyancy and to address the problem of burst release, we decided to combine LMCH with MSA or CMCNa. It is expected that such a polymeric composition not only maintains the integrity of the hydrogel, and thus buoyancy, but also forms an in situ polyelectrolyte complex (PEC) between LMCH and MSA or CMCNa. This PEC formation is expected to retard the dilution of the outer gel layer of the swellable and erodible hydrogel, thereby retarding the diffusion of PHCL. It has been reported that the PECs formed between a polycation (e.g., LMCH) and a polyanion (e.g., MSA or CMCNa) exhibit a very high degree of ordering and crystal-like properties, have quite compact structures, and are little affected by pH variation of the dissolution medium [19]. Keeping this in view, it is expected that if these complexes are formed in situ, it might be possible to overcome the initial burst release of PHCL.

From formulation J1, about 65.2% of PHCL (Figure 1) was released within the first hour; after that, the remaining drug was released in a slower manner. This observation is opposite to our expectation of retarded drug release. This could be explained as follows: there must be sufficient polymer content in a matrix to form a uniform barrier. This barrier protects the drug from immediately releasing into the dissolution medium. In this case, it is possible that a uniform gel layer might not have formed to retard the PHCL release. Another reason could be delayed gel formation. It was observed that when formulation J1 was placed in the dissolution medium, due to imbibition of the acidic dissolution medium, the hard gelatin capsule shell disrupted (15 min) and a gel layer around the polymer matrix was formed (~30 min). It seems that during the first 30 min most of the PHCL particles located at the surface of the polymer matrix dissolved and were released rapidly.
Therefore, in formulation J2, it was decided to study the effect of an increased concentration of MSA (LMCH : MSA ratio 4 : 1.5) in the polymer matrix, keeping the LMCH concentration constant. In this case, significant retardation in drug release was observed (p < 0.01, J1 and J2), with only 30% of PHCL (Figure 1) released within the first hour. This could be attributed to the quick formation of a hydrogel in which a polyelectrolyte complex (PEC) might have formed in situ. The relatively quick gel formation could be attributed to the association/dissociation/binding of various ions to the polymer chains within the hydrogel structure exposed to the acidic dissolution medium [20].

To confirm PEC formation (J2) between the oppositely charged polymers, Differential Scanning Calorimetry (DSC) on dried gelled polymeric compositions was carried out. Briefly, a formulation composition corresponding to blank J2 was accurately weighed and put into bags made of dialysis membrane (1000 molecular weight cutoff, Sigma Aldrich). The bags were then heat sealed on both sides and exposed to the acidic dissolution medium using a USP type II dissolution apparatus. After 30 min of exposure to the dissolution medium (0.1 M HCl, pH 1.2), the contents of the bags were removed and dried in an oven overnight at 60 °C, and DSC thermograms (TA Instruments, USA, Model: SDT 2960) were recorded (50 to 400 °C at a heating rate of 10 °C/min). Nitrogen was employed as the blanket gas. The characteristic peaks (endotherm and exotherm) were recorded. The DSC thermograms of LMCH and MSA were recorded for comparison. The DSC thermogram obtained from the dried gel sample (blank J2) revealed an endothermic peak at 141 °C and an exothermic peak at 235 °C (Figure 2(c)). Both of these peaks were missing in the individual thermograms of the polymers (LMCH and MSA). The first, broad endothermic peak at 141 °C could be attributed to the glass transition temperature of the LMCH-MSA polyelectrolyte complex. The second, exothermic peak at 235 °C is very weak and could be attributed to the slow degradation of the polyelectrolyte complex. The formed LMCH-MSA polyelectrolyte complex exhibited its capability of modulating the initial burst as well as the subsequent sustained release of the PHCL.

The PEC formation between LMCH and MSA was also confirmed by Fourier Transform Infrared (FTIR) spectroscopy. To deduce PEC formation, the FTIR (Shimadzu, model 8400S) spectra of LMCH, MSA, and the dried gel sample were recorded and compared. The samples were prepared in KBr disks (2 mg sample in 200 mg KBr). The scanning range was 400-4000 cm−1 and the resolution was 2 cm−1. The FTIR spectrum (Figure 3(a)) of MSA showed major absorption bands at 1610 and 1402 cm−1 due to the asymmetric and symmetric stretching of the carboxyl group. The IR absorption band at 3434 cm−1 is attributed to O-H stretching. The FTIR spectrum of the dried gel sample (Figure 3(c)) exhibited shifts in absorption bands together with the appearance of a new IR absorption band. The absorption bands due to amides I, II, and III in LMCH were shifted to 1654, 1454, and 1309 cm−1, indicating a change in the environment, and there was a new band at 1730 cm−1. Further, the FTIR spectrum also showed a shift of the O-H stretching vibration from 3418 cm−1 to 3444.40 cm−1, attributed to the free -OH groups of both polymers existing in the PEC structure. All these observations could be suggestive of PEC formation between LMCH and MSA.
From formulation J3, only 24.74% of PHCL (Figure 1) was released within the first hour. This is interesting: we have already shown that below 75 mg of MSA, PEC formation with LMCH was not observed, which raises the question of how the PHCL release could nevertheless be retarded significantly (p < 0.042, J1 and J3). Formulation J3 was composed of LMCH, MSA, and MCCP (200 + 40 + 10 mg). The purpose of incorporating MCCP was to increase the porosity of the polymer matrix in order to improve its hydration. It was observed that the whole polymeric matrix was hydrated within 15 min, compared to 60 min in the case of formulation J1. This fast hydration resulted in rapid gelling and thus slow diffusion of PHCL.

In the case of formulation J4, an attempt was made to study the release of PHCL from the LMCH-CMCNa PEC gel. For this purpose, MSA was replaced with an equal amount of CMCNa, that is, 50 mg. The PHCL release (Figure 1) was found to be significantly retarded (39.24% at the end of the first hour) compared to formulation J1 (p < 0.00477, J1 and J4). Here also, in order to confirm in situ PEC formation, DSC studies on dried gelled samples [LMCH + CMCNa (4 : 1, J4 blank), prepared in the same manner as the LMCH + MSA dried gel] and the individual polymers were carried out.

The DSC thermogram of pure CMCNa (Figure 4(a)) showed an endothermic peak at 144.33 °C corresponding to the glass transition temperature of the polymer. The DSC thermogram of the LMCH + CMCNa dried gel (Figure 4(b)) showed an endothermic peak at 150 °C and an exothermic peak at 219 °C. Both of these peaks are missing in the individual thermograms of LMCH and CMCNa. The first, broad endothermic peak at 150 °C could be attributed to the glass transition temperature of the LMCH-CMCNa polyelectrolyte complex. The second, exothermic peak at 219 °C is very weak and could be attributed to the slow degradation of the polyelectrolyte complex.
The PEC formation between LMCH and CMCNa was also confirmed by FTIR spectroscopy studies. Here also, the FTIR spectra of LMCH, CMCNa, and the LMCH + CMCNa dried gel were recorded and compared.

In formulation J5, some amount of MSA was replaced with tamarind seed polysaccharide (TSG). Tamarind seed polysaccharide is a nonionic, neutral, branched polysaccharide comprising a cellulose-like backbone. It is dispersible in warm water to form a highly viscous gel (up to 2800 cps in 3% solution) as a mucilaginous solution with a broad pH tolerance and adhesivity. It possesses mucomimetic, mucoadhesive, and pseudoplastic properties. It is a multifunctional polymer, which plays the role of release retardant, modifier, and carrier for novel drug delivery systems for oral, buccal, colonic, ocular systems and so forth [21]. It was expected that the incorporation of TSG into the LMCH-MSA matrix would increase the viscosity of the gelled polymer matrix, which in turn would offer more resistance to the diffusion of PHCL through the gelled polymer matrix. It was observed that the addition of TSG resulted in the formation of a polymer matrix in the form of a plug rather than a random network of gelled mass. However, contrary to our expectation, about 51% of PHCL (Figure 1) was released within the first hour; this could be attributed to the time period spent on plug formation. When the HBS formulations (J5) were placed in 0.1 M HCl, the dissolution medium began to penetrate into the polymer matrix through the disrupted capsule shells. The total time spent on plug formation ranged from 18 to 20 min, and during this period most of the highly water soluble drug was leached out of the system. Once the plug formation was complete, the PHCL release was significantly retarded: the next 50% of PHCL was released over about 5 h (9, 11, 8, 11, and 9% at the end of the 2nd, 3rd, 4th, 5th, and 6th hour, respectively). In formulation J6, plug formation was also observed, but the release retardant effect was not statistically different compared to J5 (p < 0.2599, J5 and J6). In the case of formulation J7, the effect of incorporation of MCCP was again studied, this time with the LMCH-CMCNa polymer matrix. As expected, polymer hydration took place rapidly and about 37% of the drug was released within the first hour (Figure 1). It was also observed that the release profiles compared to formulation J4 were

Table 2: In vitro floating characteristics of HBS formulations.

Table 3: PHCL contents in various HBS formulations.
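Returning to the release-kinetics models of the Drug Release Mechanism subsection, the following Python sketch shows how cumulative release data would be fitted to the Korsmeyer-Peppas power law (2), using only the first 60% of release as prescribed there, and to the Higuchi law (1). The time/release values are illustrative placeholders, not data from this study.

```python
import numpy as np
from scipy.stats import linregress

# Illustrative data: time (h) and cumulative fraction of drug released.
t = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
F = np.array([0.18, 0.30, 0.39, 0.50, 0.58, 0.69, 0.78])

# Korsmeyer-Peppas: M_t/M_inf = k * t^n  ->  log F = log k + n log t.
# Use only the first 60% of the release curve, per the text.
mask = F <= 0.60
fit = linregress(np.log(t[mask]), np.log(F[mask]))
n_exp, k = fit.slope, np.exp(fit.intercept)
print(f"Korsmeyer-Peppas: n = {n_exp:.2f}, k = {k:.3f}, R^2 = {fit.rvalue**2:.3f}")

# Higuchi: F = k_H * sqrt(t), least squares forced through the origin.
x = np.sqrt(t)
k_H = (F @ x) / (x @ x)
print(f"Higuchi: k_H = {k_H:.3f} per sqrt(h)")
```

The fitted exponent n is then read against the usual Peppas thresholds (Fickian versus anomalous transport) to classify the release mechanism.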
Engineering BioBrick vectors from BioBrick parts Background The underlying goal of synthetic biology is to make the process of engineering biological systems easier. Recent work has focused on defining and developing standard biological parts. The technical standard that has gained the most traction in the synthetic biology community is the BioBrick standard for physical composition of genetic parts. Parts that conform to the BioBrick assembly standard are BioBrick standard biological parts. To date, over 2,000 BioBrick parts have been contributed to, and are available from, the Registry of Standard Biological Parts. Results Here we extended the same advantages of BioBrick standard biological parts to the plasmid-based vectors that are used to provide and propagate BioBrick parts. We developed a process for engineering BioBrick vectors from BioBrick parts. We designed a new set of BioBrick parts that encode many useful vector functions. We combined the new parts to make a BioBrick base vector that facilitates BioBrick vector construction. We demonstrated the utility of the process by constructing seven new BioBrick vectors. We also successfully used the resulting vectors to assemble and propagate other BioBrick standard biological parts. Conclusion We extended the principles of part reuse and standardization to BioBrick vectors. As a result, myriad new BioBrick vectors can be readily produced from all existing and newly designed BioBrick parts. We invite the synthetic biology community to (1) use the process to make and share new BioBrick vectors; (2) expand the current collection of BioBrick vector parts; and (3) characterize and improve the available collection of BioBrick vector parts. Background The fundamental goal of synthetic biology is to make the process of engineering biology easier. Drawing upon lessons from the invention and development of other fields of engineering, we have been working to produce methods and tools that support the design and construction of genetic systems from standardized biological parts. As developed, collections of standard biological parts will allow biological engineers to assemble many engineered organisms rapidly [1]. For example, individual parts or combinations of parts that encode defined functions (devices) can be independently tested and characterized in order to improve the likelihood that higher-order systems constructed from such devices work as intended (Canton, Labno, and Endy, submitted) [2,3]. As a second example, parts or devices that do not function as expected can be identified, repaired, or replaced readily [4,5]. We define a biological part to be a natural nucleic acid sequence that encodes a definable biological function, and a standard biological part to be a biological part that has been refined in order to conform to one or more defined technical standards. Very little work has been done to standardize the components or processes underlying genetic engineering [6]. For example, in 1996, Rebatchouk et al. developed and implemented a general cloning strategy for assembly of nucleic acid fragments [7]. However, the Rebatchouk et al. standard for physical composition of biological parts failed to gain widespread acceptance by the biological research community. As a second example, in 1999, Arkin and Endy proposed an initial list of useful standard biological parts but such a collection has not yet been fully realized [8]. In 2003, Knight proposed the BioBrick standard for physical composition of biological parts [9]. 
Parts that conform to the BioBrick assembly standard are BioBrick standard biological parts. In contrast to the previous two examples, the BioBrick physical composition standard has been used by multiple groups (Canton, Labno, and Endy, submitted) [10][11][12], and adoption of the standard is growing. For example, each summer, hundreds of students develop and use BioBrick standard biological parts to engineer biological systems of their own design as a part of the International Genetically Engineered Machines competition [13]. Additional technical standards defining BioBrick parts are set via an open standards setting process led by The BioBricks Foundation [14]. The key innovation of the BioBrick assembly standard is that a biological engineer can assemble any two BioBrick parts, and the resulting composite object is itself a BioBrick part that can be combined with any other BioBrick parts. The idempotent physical composition standard underlying BioBrick parts has two fundamental advantages. First, the BioBrick assembly standard enables the distributed production of a collection of compatible biological parts [15]. Two engineers in different parts of the world who have never interacted can each design a part that conforms to the BioBrick assembly standard, and those two parts will be physically composable via the standard. Second, since engineers carry out the exact same operation every time that they want to combine two BioBrick parts, the assembly process is amenable to optimization and automation, in contrast to more traditional ad hoc molecular cloning approaches. The Registry of Standard Biological Parts (Registry) exemplifies the advantage offered by a physical composition standard such as the BioBrick assembly standard [15]. The Registry currently maintains a collection of over 2,000 BioBrick standard biological parts. Every part in the Registry has a BioBrick part number that serves as the unique identifier of the part (for example, BBa_I51020). The Registry maintains information about each part including its sequence, function, and, if available, user experiences. DNA encoding each BioBrick standard biological part is stored and propagated in Escherichia coli plasmid-based vectors [16][17][18][19]. Biological engineers can obtain parts from the Registry and assemble them using the BioBrick assembly standard in order to construct many-component synthetic biological systems. All BioBrick parts are currently maintained on a set of plasmids that includes pSB1A3-P1010, pSB3K3-P1010, and pSB4A3-P1010 (see Naming of BioBrick vectors in Methods). However, these BioBrick vectors are ad hoc designs that were cobbled together from common cloning plasmids such as pUC19 [20][21][22]. As a result, whenever a new vector is needed for use with BioBrick parts, a biological engineer must design and assemble the new BioBrick vector from scratch. Several plasmid-based cloning systems that support the manipulation, propagation, and expression of DNA fragments have been developed [20][21][22][23][24][25][26][27][28][29]. The Gateway® recombinational cloning system and associated vectors are arguably the closest example of a vector standard in biological research [30,31]. For example, several genome-wide collections of open reading frames (ORFeomes) have been compiled using the Gateway® cloning system [32][33][34]. The Gateway® system has even been extended to allow assembly of multiple DNA fragments [35,36].
However, the Gateway® system generally requires customized assembly strategies for each new system and therefore does not provide the advantages afforded by the BioBrick standard (above). Thus, we sought to extend the advantages of BioBrick standard biological parts to the vectors that propagate BioBrick parts. To do this, we developed a new process for engineering BioBrick vectors. The process leverages existing and newly designed BioBrick parts for the ready construction of many BioBrick vectors. To demonstrate the utility of the new process, we constructed seven new BioBrick vectors from the base vector. We also successfully used the new vectors to assemble BioBrick standard biological parts. Starting from the base vector, new vectors can be built using plasmid replication origins and antibiotic resistance markers that conform to the BioBrick standard for physical composition. Thus, the base vector enables the ready reuse of vector parts available from the Registry of Standard Biological Parts. Use of the base vector to construct BioBrick vectors ensures standardization and uniformity in any resulting BioBrick vectors. For convenience, the base vector includes both a high copy replication origin and an ampicillin resistance marker, so the base vector itself is capable of autonomous plasmid replication for easy DNA propagation and purification [37]. All BioBrick vectors derived from the BioBrick base vector have five key features. First, BioBrick vectors include a complete BioBrick cloning site to support the propagation and assembly of BioBrick standard biological parts [9]. Second, BioBrick vectors contain a positive selection marker in the cloning site to ameliorate one of the most common problems during assembly of BioBrick parts: contamination of the ligation reaction with uncut plasmid DNA [38]. Any cells transformed with the BioBrick vector produce the toxic protein CcdB and do not grow [39][40][41]. Cloning a BioBrick part into the cloning site of the vector removes the toxic ccdB gene. Third, BioBrick vectors contain a high copy origin in the cloning site to facilitate increased yields from plasmid DNA purification [42,43]. Again, cloning a BioBrick part into the cloning site removes the high copy origin in the cloning site, thereby restoring replication control to the vector origin. Fourth, BioBrick vectors include transcriptional terminators and translational stop codons flanking the cloning site to insulate the proper maintenance and propagation of the vector from any possibly disruptive function encoded by inserted BioBrick parts [44][45][46][47]. Fifth, BioBrick vectors include verification primer annealing sites sufficiently distant from the cloning site to check the length and sequence of the cloned BioBrick part. The primer annealing sites are identical to those found in commonly used BioBrick vectors, such as pSB1A3-P1010, to support backwards compatibility.

Constructing new BioBrick vectors using the BioBrick base vector

Constructing new BioBrick vectors starting from the BioBrick base vector requires just two assembly steps (Figure 2). The replication origin and antibiotic resistance marker should each be BioBrick standard parts. To construct a BioBrick vector, assemble the origin and antibiotic resistance marker via BioBrick standard assembly (first assembly step). Then, digest the resulting composite part with restriction enzymes XbaI and SpeI, and digest the BioBrick base vector with NheI to excise the ampicillin resistance marker.
Next, ligate the composite origin and resistance marker to the linearized base vector (second assembly step). XbaI, SpeI, and NheI all generate compatible DNA ends that, when ligated with a DNA end from one of the other enzymes, produce a non-palindromic sequence that cannot be cut by any of the three enzymes. Thus, proper assembly of the vector eliminates any BioBrick enzyme sites and ensures that the resulting vector adheres to the BioBrick physical composition standard. Finally, transform the ligation product into a strain tolerant of ccdB expression, such as E. coli strain DB3.1 [48,49].

Assembling BioBrick parts using a new BioBrick vector

BioBrick vectors support assembly of new BioBrick standard parts. The new vectors are compatible with prefix or postfix insertions of BioBrick parts as originally described [9]. Alternatively, the new vectors also support three antibiotic based assembly (3A assembly; Figure 3; Shetty, Rettberg, and Knight, in preparation) [56].

Figure 1: The BioBrick base vector (BBa_I51020). Schematic diagram of BBa_I51020: a BioBrick base vector designed to facilitate construction of new BioBrick vectors. Parts from the collection listed in Figure 5 were used to construct BBa_I51020.

3A assembly is a method for assembling one part (the prefix part) upstream or 5' to a second part (the suffix part) in the BioBrick cloning site of a BioBrick vector (the destination vector). 3A assembly favors correct assembly of the prefix and suffix BioBrick parts in the destination vector through a combination of positive and negative selection. Briefly, 3A assembly works as follows: Digest the prefix part with EcoRI and SpeI, the suffix part with XbaI and PstI, and the destination vector with EcoRI and PstI. Then, ligate the two parts and destination vector and transform into competent E. coli. Plate the transformed cells on LB agar plates supplemented with the antibiotic corresponding to the destination vector resistance marker. Most of the resulting colonies should contain the composite BioBrick part cloned into the destination vector. To confirm that our new BioBrick vectors function as expected, we assembled new BioBrick standard biological parts using four of the vectors that we constructed. To demonstrate that the composite BioBrick parts were correctly assembled using our new vectors, we performed a colony PCR amplification of the assembled parts and determined that the PCR product length was correct (Figure 4). Each part was also verified to be correct via sequencing with primers that anneal to the verification primer binding sites (BBa_G00100 and BBa_G00102).

Discussion

We developed a new process for engineering BioBrick vectors from BioBrick parts. The process now makes possible the ready construction of many new BioBrick vectors using the growing collection of BioBrick parts available from the Registry of Standard Biological Parts. Moreover, new BioBrick vectors can be constructed from the BioBrick base vector in just two assembly steps. Finally, any BioBrick vectors derived from the BioBrick base vector have five key features designed to facilitate the cloning, assembly, and propagation of BioBrick parts. We used the process to construct seven new BioBrick vectors and used the vectors to assemble new BioBrick parts.
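The compatible-ends claim in the vector-construction procedure above is easy to check in silico. The sketch below (illustrative Python, not from the paper) ligates every pairing of XbaI-, SpeI-, and NheI-cut ends, all of which leave the same 5'-CTAG overhang, and tests whether any of the three recognition sites survives in the junction; only same-enzyme ligations regenerate a cuttable site, which is why mixed junctions yield a stable, non-recuttable scar and keep assembled vectors free of BioBrick sites.

```python
# Recognition sites; each enzyme cuts after the first base (T^CTAGA, A^CTAGT,
# G^CTAGC), leaving the same 5'-CTAG overhang, so all ends are ligatable.
SITES = {"XbaI": "TCTAGA", "SpeI": "ACTAGT", "NheI": "GCTAGC"}

def revcomp(seq: str) -> str:
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def junction(left: str, right: str) -> str:
    """Top strand formed by ligating a <left>-cut upstream end to a
    <right>-cut downstream end: first base of the left site, the shared
    CTAG overhang, then the last base of the right site."""
    return SITES[left][0] + "CTAG" + SITES[right][-1]

for left in SITES:
    for right in SITES:
        j = junction(left, right)
        recut = [e for e, s in SITES.items() if s in j or s in revcomp(j)]
        print(f"{left} end + {right} end -> {j}; recut by: {recut or 'none'}")
```

Running this shows, for example, that a SpeI end ligated to an XbaI end gives ACTAGA, which none of the three enzymes recognizes, whereas XbaI + XbaI regenerates TCTAGA.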
Design of new BioBrick vector parts

To adhere to the BioBrick standard for physical composition, BioBrick vector parts need only be free of the BioBrick restriction enzyme sites. However, we chose to design anew all BioBrick vector parts (Figure 5), so that we could completely specify their DNA sequences. We compiled a list of potentially useful endonuclease sites for removal from all new BioBrick vector parts (Table 1). We targeted each group of endonuclease sites for removal for a different reason. We targeted recognition sites of enzymes that produce cohesive ends compatible with the BioBrick enzymes because such enzymes often prove useful in constructing new variants of BioBrick vectors. We targeted offset cutter sites because they may be useful in alternative restriction enzyme-based assembly methods [57]. We targeted homing endonuclease sites because they are commonly used in genome engineering [58]. We targeted some nicking endonuclease sites because they can be useful for specialized cloning applications [59]. Finally, we targeted several additional restriction endonuclease sites to keep them available for use by new standards for physical composition. Our list of endonuclease sites constitutes a set of target sequences that should be considered for removal from any newly synthesized BioBrick part, if possible. The target sequence set will change as the synthetic biology community develops new standards for physical composition of BioBrick parts. Some of the targeted endonuclease sites were naturally absent from the DNA sequences encoding our new vector parts. For any remaining sites, we removed the recognition sequences from the BioBrick vector parts by introducing point mutations. However, the functions of the pSC101 and pUC19-derived plasmid replication origins were sensitive to the introduced mutations, so the replication origins used in this work are not free of all targeted endonuclease sites (see Methods). Similarly, issues during synthesis led to an unnecessary redesign of the ccdB positive selection marker, so it too is not free of all targeted endonuclease sites.

Construction of BioBrick base vector

To realize our designs for new BioBrick vectors, we contracted for DNA synthesis of the four antibiotic resistance markers, the pSC101 replication origin, and the entire BioBrick base vector. However, synthesis of the BioBrick base vector was problematic (see Methods). The issues that arose during synthesis are briefly discussed here, because they are relevant to anyone interested in synthesizing new BioBrick parts. Difficulties during synthesis stemmed from the inclusion of both a ccdB positive selection marker that is toxic to most E. coli strains and a synthetic replication origin that proved incapable of supporting replication of the BioBrick base vector. Commercial DNA synthesis processes currently rely on cloning, assembly, and propagation of synthesized DNA in E. coli. In general, for parts whose functions are incompatible with growth and replication of E. coli, the processes of DNA design and DNA synthesis cannot be easily decoupled. Improvements in commercial DNA synthesis are needed that free the process from dependence on in vivo DNA propagation and replication.

Conclusion

The goal of synthetic biology is to make the process of design and construction of many-component, engineered biological systems easier. In support of this goal, a technical standard for the physical composition of biological parts was developed [9]. Here, we extended the same principles of part reusability and standardization of physical composition to the vectors that are used to assemble and propagate BioBrick parts.
Design of BioBrick vector parts and the BioBrick base vector

We designed all BioBrick vector parts and the BioBrick base vector using Vector NTI® Suite 7 for Mac OS X by Invitrogen Life Science Software in Carlsbad, CA. We removed endonuclease recognition sites from the designed parts either manually or using GeneDesign vβ2.1 Rev 5/26/06 [60].

Construction of BioBrick vector parts

We contracted for DNA synthesis of the four antibiotic resistance markers and the pSC101 replication origin to the DNA synthesis company Codon Devices, Inc. in Cambridge, MA. The four antibiotic resistance markers (BBa_P1002-P1005) were easily synthesized as designed. Testing confirmed that the four markers conferred resistance to the corresponding antibiotics. Synthesis of the pSC101 origin was also straightforward. However, testing revealed that our design for the pSC101 origin (BBa_I50040) was nonfunctional as a replication origin. We successfully reconstructed a functional pSC101 replication origin (BBa_I50042) via PCR of an existing plasmid. Thus, we presume that one or more of the point mutations introduced to eliminate endonuclease sites were deleterious to the plasmid replication function of the designed origin. We did not attempt to synthesize a corrected version of the origin; instead, we amplified the pSC101 and p15A replication origins from existing plasmids by PCR. The PCR conditions were an initial denaturation step of 95°C for 15 min followed by 40 cycles of 94°C for 30 seconds, 56°C for 30 seconds, and 68°C for 2.5 minutes. Finally, the reactions were incubated at 68°C for 20 minutes. We then added 20 units of DpnI restriction enzyme to each reaction to digest the template DNA. The reactions were incubated for 2 hours at 37°C and then heat-inactivated for 20 minutes at 80°C.

Figure 5: New BioBrick vector parts. The Registry part number, function, and graphical notation of each constructed BioBrick vector part are listed. The part collection includes (1) BBa_G00000: BioBrick cloning site prefix including the EcoRI (E) and XbaI (X) restriction enzyme sites; (2) BBa_G00001: BioBrick cloning site suffix including the SpeI (S) and PstI (P) restriction enzyme sites which, together with the BioBrick prefix, forms a BioBrick cloning site for compatibility with all BioBrick standard biological parts; (3) BBa_P1016: positive selection marker ccdB to improve yield of insert-containing clones during part assemblies; (4) BBa_I50022: pUC19-derived high copy replication origin within the BioBrick cloning site that allows for easy plasmid DNA purification of the base vector and any derived vectors; (5) BBa_B0042: a short DNA sequence that has translational stop codons in all six reading frames to prevent translation into or out of the BioBrick cloning site; (6) BBa_B0053-B0055 and BBa_B0062: forward and reverse transcriptional terminators flanking the BioBrick cloning site to prevent transcription into or out of the BioBrick cloning site; (7) BBa_G00100 and BBa_G00102: sequence verification primer annealing sites for primers VF2 and VR; (8) BBa_B0045: NheI (N) restriction site for insertion of the desired replication origin and resistance marker to construct the vector of interest; (9) BBa_P1006: ampicillin resistance selection marker to facilitate propagation of the base vector; (10) BBa_P1002-P1005: four antibiotic resistance markers; and (11) BBa_I50042 and BBa_I50032: pSC101 and p15A replication origins. Each part is used either as a component of the BioBrick base vector BBa_I51020 (1)-(9) or to construct new BioBrick vectors (10)-(11).
We purified both reactions using a MinElute PCR Purification kit according to the manufacturer's directions (QIAGEN, Germany). The pSC101 and p15A origin PCR products were used directly for assembly of the BioBrick vectors. Construction of BioBrick base vector We also contracted for synthesis of the entire BioBrick base vector. However, we encountered two issues during synthesis of the base vector. First, troubleshooting efforts during synthesis compromised the design of the base vector: failed attempts to clone the base vector into an E. coli strain intolerant of expression of the toxic protein CcdB led to an unnecessary redesign of the ccdB positive selection marker in the BioBrick base vector (from BBa_P1011 to BBa_P1016 [Genbank:EU496090]). Second, faulty part design adversely impacted the synthesis process: our pUC19-based replication origin design was similarly nonfunctional, so the base vector could not be propagated as specified. Yet, synthesized DNA for the BioBrick base vector was nevertheless provided. We eventually determined that the provided DNA was actually a fusion of two slightly different copies of the base vector: one with the designed, nonfunctional version of the pUC19 origin (BBa_I50020) and one with a functional version of the pUC19 origin (BBa_I50022 [Genbank:EU496091]). To obtain a single, corrected version of the BioBrick base vector, we performed a restriction digest of the provided base vector DNA with EcoRI. We then re-ligated 1 μL of a tenfold dilution of the linearized base vector DNA. For detailed reaction conditions, see Assembly of BioBrick parts using the new BioBrick vectors. We transformed the religated BioBrick base vector into E. coli strain DB3.1 via electroporation and plated the transformed cells on LB agar plates supplemented with 100 μg/mL ampicillin to obtain the corrected BioBrick base vector BBa_I51020 [48,61,62]. Correct construction of the BioBrick base vector was verified by DNA sequencing by the MIT Biopolymers Laboratory. Assembly of BioBrick vectors We assembled the new BioBrick vectors as described (Figure 2). For detailed reaction conditions, see Assembly of BioBrick parts using the new BioBrick vectors. However, we used the synthesized BioBrick base vector BBa_I51019 instead of the corrected BioBrick base vector BBa_I51020, since, at the time, we had not yet identified the issue with the provided synthesized DNA. As a result, we obtained a mixture of new vectors. Four of the constructed vectors have a functional version of the pUC19 origin (BBa_I50022) in the BioBrick cloning site and propagate at high copy (vectors with BBa_I52002: pSB4A5, pSB4K5, pSB4C5, and pSB3K5). The other three vectors have a nonfunctional version of the pUC19 origin (BBa_I50020) in the BioBrick cloning site and propagate at low copy (vectors with BBa_I52001: pSB4T5, pSB3C5, and pSB3T5). We chose to describe all seven vectors here for two reasons. First, all seven new BioBrick vectors can be used for the propagation and assembly of BioBrick parts; the vectors pSB4T5, pSB3C5, and pSB3T5 are just slightly less convenient for plasmid DNA purification. Second, the difficulties that we encountered during construction of the BioBrick base vector are illustrative of the current interdependence of DNA design and DNA synthesis (see Discussion). Assembly of BioBrick parts using the new BioBrick vectors We assembled BioBrick composite parts as described (Figure 3). 
We performed all restriction digests by mixing 0.5-1 μg DNA, 1X NEBuffer 2, 100 μg/mL Bovine Serum Albumin, and 1 μL of each needed restriction enzyme in a 50 μL total volume. Restriction digest reactions were incubated for at least 2 hours at 37°C and then heat-inactivated for 20 minutes at 80°C. We then dephosphorylated the destination vector into which the parts were assembled. (When assembling BioBrick vectors, we dephosphorylated the composite origin and resistance marker to prevent circularization of this DNA fragment.) We performed dephosphorylation reactions by adding 5 units Antarctic Phosphatase and 1X Antarctic Phosphatase Reaction Buffer in a total volume of 60 μL to the heat-inactivated restriction digest reaction. We incubated dephosphorylation reactions for 1 hour at 37°C and inactivated the phosphatase by heating to 65°C for 5 minutes. We purified all reactions using a MinElute PCR Purification kit according to the manufacturer's directions (QIAGEN). We performed all ligation steps by mixing 2-4 μL of each purified, linearized DNA, 1X T4 DNA Ligase Reaction Buffer, and 200 units T4 DNA Ligase in a 10 μL total volume. We incubated the ligation reactions for 20 minutes at room temperature. We transformed all assembled BioBrick parts into E. coli strain TOP10 via chemical transformation [63][64][65]. (We transformed the assembled BioBrick vectors into E. coli strain DB3.1 via electroporation [48,61,62].) Transformed cells were plated on LB agar plates supplemented with 100 μg/mL ampicillin, 50 μg/mL kanamycin, 35 μg/mL chloramphenicol, or 15 μg/mL tetracycline as appropriate. We identified clones with correct construction of BioBrick parts by growth on the plates supplemented with the correct antibiotic, lack of growth on plates supplemented with other antibiotics, length verification by colony PCR (see next section), and DNA sequencing by the MIT Biopolymers Laboratory.

Verification of correct BioBrick part assembly via colony PCR

To demonstrate the correct assembly of BioBrick parts using the new BioBrick vectors, we performed a colony PCR using primers that anneal to the verification primer binding sites. We picked one colony and diluted it into 100 μL water. Then we mixed 9 μL PCR SuperMix High Fidelity, 6.25 pmoles VF2 primer (5'-TGC CAC CTG ACG TCT AAG AA-3'), 6.25 pmoles VR primer (5'-ATT ACC GCC TTT GAG TGA GC-3'), and 1 μL colony suspension. The PCR conditions were as described previously but using an annealing temperature of 62°C and an elongation time of 3.5 minutes. We diluted the reactions fourfold with water and then performed an agarose gel electrophoresis of 20 μL of each diluted reaction using a 0.8% E-Gel®. We also electrophoresed 1 μg of 2-log DNA ladder (New England Biolabs, Inc., Ipswich, MA) to verify the length of each PCR product. The gel was imaged with 302 nm transilluminating ultraviolet light using an ethidium bromide emission filter and an exposure time of 614 milliseconds. Materials for all PCR and agarose gel electrophoresis steps in this work were purchased from the Invitrogen Corporation in Carlsbad, CA unless otherwise specified. Reagents for all restriction digest, dephosphorylation, and ligation reactions were purchased from New England Biolabs, Inc., Ipswich, MA. All PCR and temperature-controlled incubation steps were done in a DNA Engine Peltier Thermal Cycler (PTC-200) or DNA Engine OPTICON™ from MJ Research, Inc. (now Bio-Rad Laboratories, Inc., Hercules, CA).

Table 2: BioBrick vector names take the form pSB#X#.
The first number indicates the identity of the origin of replication. The number, corresponding replication origin, expected plasmid copy number, and typical purpose of that origin are listed [38]. To expand the list to include additional replication origins, document additions at the Registry of Standard Biological Parts [66].

Naming of BioBrick vectors

BioBrick vector names take the form pSB#X#. The letters pSB are an acronym for plasmid Synthetic Biology. The first number denotes the origin of replication (Table 2). The letter X identifies the antibiotic resistance marker(s) present in the vector (Table 3). Vectors with multiple resistance markers have multiple, successive letters. Finally, the last number in the vector name is a version number to differentiate between the various implementations of the pSB series of vectors.

Competing interests

The author(s) declare that they have no competing interests.

Table 3: BioBrick vector names take the form pSB#X#. The letter X indicates the antibiotic to which the vector confers resistance. The letter code and corresponding antibiotic resistance marker are listed. The absence of a letter indicates that no antibiotic resistance marker is present. Multiple resistance markers in a vector are indicated by successive codes in alphabetical order, e.g., AK, StT, AC, and AKT. To expand the list to include additional antibiotic resistance markers, document additions at the Registry of Standard Biological Parts [66].
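As a small illustration of this naming scheme, here is a hypothetical Python helper that decodes pSB#X# names. The origin and marker tables below are assumptions based on the vectors mentioned in this paper and common Registry conventions (Tables 2 and 3 themselves are not reproduced in the text), so treat the mappings as examples rather than authoritative.

```python
import re

# Assumed mappings; the authoritative lists live in Tables 2 and 3 and at the
# Registry of Standard Biological Parts.
ORIGINS = {"1": "pUC19-derived (high copy)",
           "3": "p15A (medium copy)",
           "4": "pSC101 (low copy)"}
MARKERS = {"A": "ampicillin", "C": "chloramphenicol", "K": "kanamycin",
           "T": "tetracycline", "St": "streptomycin (assumed)"}

def parse_vector_name(name: str) -> dict:
    m = re.fullmatch(r"pSB(\d)([A-Za-z]+)(\d)", name)
    if not m:
        raise ValueError(f"not a pSB#X# name: {name}")
    origin, letters, version = m.groups()
    codes, i = [], 0
    while i < len(letters):
        # prefer two-letter codes such as 'St' over single letters
        code = letters[i:i + 2] if letters[i:i + 2] in MARKERS else letters[i]
        codes.append(code)
        i += len(code)
    return {"origin": ORIGINS.get(origin, f"origin code {origin}"),
            "resistance": [MARKERS.get(c, f"marker code {c}") for c in codes],
            "version": version}

print(parse_vector_name("pSB4K5"))  # pSC101 origin, kanamycin, version 5
print(parse_vector_name("pSB1A3"))  # high copy origin, ampicillin, version 3
```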
Functional Diversification of Oyster Big Defensins Generates Antimicrobial Specificity and Synergy against Members of the Microbiota Big defensins are two-domain antimicrobial peptides (AMPs) that have highly diversified in mollusks. Cg-BigDefs are expressed by immune cells in the oyster Crassostrea gigas, and their expression is dampened during the Pacific Oyster Mortality Syndrome (POMS), which evolves toward fatal bacteremia. We evaluated whether Cg-BigDefs contribute to the control of oyster-associated microbial communities. Two Cg-BigDefs that are representative of molecular diversity within the peptide family, namely Cg-BigDef1 and Cg-BigDef5, were characterized by gene cloning and synthesized by solid-phase peptide synthesis and native chemical ligation. Synthetic peptides were tested for antibacterial activity against a collection of culturable bacteria belonging to the oyster microbiota, characterized by 16S sequencing and MALDI Biotyping. We first tested the potential of Cg-BigDefs to control the oyster microbiota by injecting synthetic Cg-BigDef1 into oyster tissues and analyzing microbiota dynamics over 24 h by 16S metabarcoding. Cg-BigDef1 induced a significant shift in oyster microbiota β-diversity after 6 h and 24 h, prompting us to investigate antimicrobial activities in vitro against members of the oyster microbiota. Both Cg-BigDef1 and Cg-BigDef5 were active at a high salt concentration (400 mM NaCl) and showed broad spectra of activity against bacteria associated with C. gigas pathologies. Antimicrobial specificity was observed for both molecules at an intra- and inter-genera level. Remarkably, antimicrobial spectra of Cg-BigDef1 and Cg-BigDef5 were complementary, and peptides acted synergistically. Overall, we found that primary sequence diversification of Cg-BigDefs has generated specificity and synergy and extended the spectrum of activity of this peptide family. AMPs encompass a highly diverse array of molecules widespread in multicellular organisms, which were initially described for their direct antimicrobial activities against pathogens [7,8]. AMPs are multifunctional: they are involved in the early establishment and shaping of bacterial microbiota; they maintain tolerance to beneficial microbes and greatly affect community composition in the guts, epithelia, and mucosal surfaces of mammals through direct and indirect activities against commensal bacteria [9,10]. In other animal branches as well, AMPs play an important role in host-microbiota interactions [5,11]. In arthropods such as insects and crustaceans, AMPs regulate microbiota composition [6,12,13]. In cnidarians, they are crucial in shaping microbial colonization during Hydra development [14]. Several families of AMPs have been identified in Crassostrea gigas oysters (mollusks) and characterized in terms of expression, structure, and function [15][16][17]. AMP families in oysters have widely diversified; they are expressed at low concentrations by immune cells, hemocytes, and epithelia [15]. Recent studies have highlighted the role of host-microbiota interactions in oyster health [18,19]. The structure of the oyster microbiota is modified under stressful conditions promoting the development of opportunistic infections [18]. The resulting dysbiosis can be associated with significant mortality. In particular, C. gigas suffers from a polymicrobial disease called the Pacific Oyster Mortality Syndrome (POMS), which is triggered by infection with the OsHV-1 µVar virus and affects the oyster's immune cells. 
Interestingly, hemocyte infection has been associated with attenuation of AMP expression with a loss of barrier function leading to dysbiosis and fatal bacteremia [20,21]. Oyster big defensins are among the peptide families whose expression is altered during POMS, suggesting that they may contribute to the control of oyster microbial communities [21]. This hypothesis is further supported by the recent finding that a big defensin mediates microbial shaping in another bivalve mollusk, the scallop Argopecten purpuratus [22]. Knowledge of big defensins has significantly increased over the past decade, particularly with the growing availability of next-generation sequencing data. Phylogenetic analyses have shown that big defensins are a family of two-domain AMPs that expanded in mollusks as a result of independent lineage-specific tandem gene duplications, followed by rapid molecular diversification [23,24]. Canonical big defensins harbor an N-terminal hydrophobic domain specific to the peptide family and a C-terminal domain that resembles β-defensins [24,25]. Big defensins have diversified in the oyster C. gigas, with up to seven distinct sequences described [24]. Among them, Cg-BigDef1-3 and Cg-BigDef5-6 form two phylogenetically distinct groups [24]. Cg-BigDefs are expressed by oyster hemocytes [23]. To date, functional data have only been acquired on Cg-BigDef1. This was made possible by developing the chemical synthesis of Cg-BigDef1 [26,27]. Synthetic Cg-BigDef1 showed a broad range of antibacterial activities against both Gram-positive and Gram-negative bacteria from clinical and environmental collections [27]. A remarkable feature of its mechanism of action was its ability to self-assemble into nanonets that trap and kill bacteria [27]. In this paper, we first tested the in vivo ability of Cg-BigDefs to control the commensal oyster microbiota by monitoring microbiota composition in oysters injected with Cg-BigDef1. Second, we searched whether primary sequence diversification among oyster big defensins translates into functional diversification. To answer this second question, we cloned the genomic sequences and chemically synthesized Cg-BigDef1 and Cg-BigDef5, which are representative of sequence diversity. Antimicrobial activity spectra of Cg-BigDefs were determined in vitro against a collection of culturable bacteria belonging to the oyster microbiota. Our data support a role for Cg-BigDefs in the regulation of oyster microbiota composition and show that the sequence diversity between Cg-BigDef1 and Cg-BigDef5 generates antimicrobial specificity and synergy against members of the oyster microbiota, including bacteria associated with significant pathologies.

In Vivo Activity of Cg-BigDef1 on Oyster Commensal Microbiota

We tested the effect of Cg-BigDef1 on oyster commensal microbiota by injecting the synthetic peptide into the adductor muscle (5 µM Cg-BigDef1 relative to oyster flesh volume) of anesthetized oysters. An injection of sterile artificial seawater (ASW), i.e., the solvent used for solubilizing synthetic Cg-BigDef1, was used as a control (Figure 1A). Since substantial inter-individual variations were observed in oyster microbiota composition [28] and oyster genetics influences microbiota composition [29], we used a pathogen-free oyster family of full siblings for our experiments (i.e., oysters with limited environmental and genetic variation; see Materials and Methods). Microbiota composition was monitored in whole tissue extracts by 16S metabarcoding.
We first verified that anesthesia had no significant effect on the homeostasis of oyster commensal microbiota. To this end, we compared the microbiota of eight non-treated control oysters (NTC, i.e., not anesthetized, not injected with ASW) and eight anesthetized control oysters (AC, i.e., oysters kept dry for 12 h and anesthetized for 2 h). This comparison was performed at time 0, before oysters were injected with Cg-BigDef1 or sterile artificial seawater, used as a control. We then examined the effect of Cg-BigDef1 on oyster commensal microbiota by comparing the microbiota of eight oysters injected with Cg-BigDef1 or ASW (control) at three time points after injection (0, 6, and 24 h) (Figure 1A). To compare microbiota composition over time and conditions, we generated a global dataset from a total of 7,320,778 raw reads obtained by Illumina MiSeq sequencing of the 64 oysters analyzed. Sufficient sequencing depth was confirmed by analyses of rarefaction curves of species richness (Supplementary Figure S1). We retained 6,371,737 sequences corresponding to 632 Amplicon Sequence Variants (ASVs) for further analyses after filtering, chimera removal, clustering by dbOTU3, and rare ASV filtration. Anesthesia had no significant effect on oyster microbiota. Indeed, AC oysters did not differ from NTC control oysters in terms of α-diversity (measured here by the observed richness and Shannon H indices) or β-diversity (measured by the Bray-Curtis dissimilarity matrix estimates) (Supplementary Figure S2, Table S1), nor in the relative abundance of the 10 most abundant genera in AC and NTC animals (Supplementary Figure S2, Table S2). Moreover, oysters injected with Cg-BigDef1 did not differ from oysters injected with ASW in terms of α-diversity, as estimated with the observed richness and Shannon H indices (Supplementary Figure S3). By contrast, Cg-BigDef1 altered the oyster microbiota in terms of β-diversity. This was determined using a final matrix of 632 ASVs distributed among the 48 oyster microbiota samples and the three kinetic points after normalization/rarefaction and removal of low-abundance ASVs (less than four reads in at least four individuals). Differences are depicted by principal coordinate analysis (PCoA) based on the Bray-Curtis dissimilarity matrix for T6 and T24 (Figure 1B). Subsequent statistical analyses demonstrated that at T0 (i.e., 10 min after injection with Cg-BigDef1 or ASW), oyster microbiota did not differ between conditions (p = 0.779). The effect of Cg-BigDef1 became visible from T6 (p = 0.00064) to T24 (p = 0.00488) (PERMANOVA based on 100,000 permutations) (Tables S2 and S3), in agreement with the antimicrobial activity of Cg-BigDef1 measurable in vitro within 24 h [27]. No significant differences were observed among the 10 most abundant genera between Cg-BigDef1- and ASW-injected oysters (Figure 1C, Supplementary Figure S4). Significant differences were only observed at the ASV level. Overall, at T6, differential abundance analysis identified 156 ASVs that were significantly enriched or depleted in Cg-BigDef1-treated oysters. Among these, 47 ASVs were affiliated with 44 known genera (Figure 1D). Similar results were obtained at T24 (Supplementary Figure S5). While differences were observed in microbiota composition, the total abundance of the microbiota did not vary significantly upon Cg-BigDef1 treatment, as determined by 16S quantitative PCR (Supplementary Figure S6).

(Figure 1 legend, panel D: ASVs differentially abundant between oysters injected with ASW and Cg-BigDef1 at T6. Each circle represents an ASV showing a significant log2FoldChange (adjusted p value < 0.01) between experimental conditions. A positive log2FoldChange means enrichment in Cg-BigDef1-injected oysters and a negative log2FoldChange means enrichment in ASW-injected oysters. Taxa are denoted by their attributed genus followed by the first four characters of the ASV barcode attributed by SAMBA. ASVs without genus annotation are not represented in the figure.)
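To make the β-diversity readout above concrete, here is a minimal sketch of how a Bray-Curtis dissimilarity matrix is computed from an ASV count table. Everything below (matrix shape, sample layout, values) is an invented toy example, not the study's data; the published analysis used QIIME 2, Phyloseq and vegan (adonis2 for PERMANOVA) rather than this code.

```python
import numpy as np

def bray_curtis(x, y):
    # Bray-Curtis dissimilarity between two ASV count vectors:
    # sum of absolute differences over the total count.
    return np.abs(x - y).sum() / (x + y).sum()

def dissimilarity_matrix(counts):
    # Pairwise Bray-Curtis matrix for a (samples x ASVs) count table,
    # the input of the PCoA/PERMANOVA steps described above.
    n = counts.shape[0]
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d[i, j] = d[j, i] = bray_curtis(counts[i], counts[j])
    return d

# Toy table: 4 samples (2 ASW-injected, 2 Cg-BigDef1-injected), 6 ASVs.
rng = np.random.default_rng(1)
counts = rng.integers(0, 200, size=(4, 6)).astype(float)
print(dissimilarity_matrix(counts).round(3))
```

PERMANOVA then asks whether between-group distances in this matrix exceed within-group distances more often than expected under random permutation of the ASW/Cg-BigDef1 labels.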
Establishment of a Collection of Culturable Bacteria from C. gigas Microbiota

To further investigate the role of Cg-BigDefs in controlling the oyster microbiota, we built a collection of culturable bacterial strains representing 21 genera associated with healthy and diseased C. gigas [30,31]. Among them, we included genera repeatedly associated with oyster diseases, such as Arcobacter, Aeromonas, Marinomonas, Marinobacterium, Pseudoalteromonas, Psychrobacter, Sulfitobacter, Tenacibaculum, and Vibrio [21,30,32,33]. We obtained 16S rDNA sequences (V3-V4 loop) for 46 bacteria isolated from oysters with known health status (healthy or diseased). In addition, we purchased three type strains of the genera of interest that were needed as a reference for the MALDI database (one Pseudoalteromonas and two Alteromonas). All 16S sequences exhibited ≥ 95% identity with a known type-strain sequence included in the analysis (Figure 2). For 41 of the 50 strains in the collection, we acquired molecular mass fingerprints by MALDI Biotyping. Strains with taxonomic assignation by 16S phylogeny but no match in MALDI databases were used to enrich the MALDI databases of marine bacteria (https://doi.org/10.12770/261d7864-a44c-43ab-b0c6-57fdaf7360ac, accessed on 14 October 2022).

Gene Cloning and Chemical Synthesis of Cg-BigDef1 and Cg-BigDef5

To explore the impact of Cg-BigDefs sequence diversification on the control of oyster microbiota, we focused on Cg-BigDef1 and Cg-BigDef5, which belong to two phylogenetically distinct groups within this peptide family, as shown previously in [24]. We first cloned the gene encoding Cg-BigDef5 (Cg-bigdef5 gene; GenBank: OP191676). Two distinct exons were found to encode the two putative domains of the molecule (Figure 3A), as previously found in Cg-bigdef1 [27]. The first exon of Cg-bigdef5 encodes the predicted signal peptide (or predomain, 23 residues), the prodomain (13 residues), and the N-terminal domain of the mature Cg-BigDef5 (42 residues). The second exon encodes a short linker (3 residues) and the C-terminal β-defensin-like domain (42 residues), with the canonical spacing of cysteines for big defensins [Cys-Xaa(4-14)-Cys-Xaa(3)-Cys-Xaa(13-14)-Cys-Xaa(4-7)-Cys-Cys] [27] (Figure 3B).
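As a quick arithmetic check of the domain bookkeeping just described (the C-terminal glycine removal by amidation is detailed in the next paragraph; the domain lengths are those quoted above):

```python
# Residue bookkeeping for Cg-BigDef5 as described above.
predomain = 23   # signal peptide
prodomain = 13
n_term = 42      # N-terminal hydrophobic domain
linker = 3
c_term = 42      # C-terminal beta-defensin-like domain

precursor = predomain + prodomain + n_term + linker + c_term
# Mature peptide: preprodomain removed, C-terminal Gly lost upon amidation.
mature = n_term + linker + c_term - 1
print(precursor, mature)  # 123, 86: matches the 86-residue mature Cg-BigDef5
```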
After posttranslational modifications (which include removal of the preprodomain, oxidation of the three disulfide bridges, glutamine to pyroglutamic acid conversion, and C-terminal amidation by removal of a glycine residue), the calculated molecular weight (MW) of Cg-BigDef1 was 10,692 Da (93 amino acids). The calculated MW for Cg-BigDef5 was 9977 Da (86 amino acids) after the removal of the preprodomain, disulfide bridge oxidation, and C-terminal amidation by removal of a glycine residue (Figure 3C). Overall, the two mature peptides show 62.8% identity (54/86 identical residues), with a calculated positive net charge of +6 and +7 at pH = 7.4 for Cg-BigDef1 and Cg-BigDef5, respectively. The N-terminal domain of both peptides is hydrophobic and positively charged in the region preceding the linker due to repeats of basic residues such as arginine in Cg-BigDef1 and lysine in Cg-BigDef5. One remarkable difference between the two big defensins is the length of the linker that connects the two domains, with 10 amino acid residues in Cg-BigDef1 and only three amino acid residues in Cg-BigDef5. Cg-BigDef1 and Cg-BigDef5 were synthesized using a combination of solid-phase peptide synthesis, native chemical ligation, and oxidative folding, as previously described for Cg-BigDef1 [26,27] (see Supplementary Figure S7 for HPLC and mass spectrometry characterization). Synthetic Cg-BigDef1 (1-93) corresponds to mature Cg-BigDef1 (Figure 3C) [27]. Synthetic Cg-BigDef5 (1-86) corresponds to mature Cg-BigDef5 (Figure 3C) with a substitution of Met14 by norleucine (Nle) (this study). Detailed optimization of Cg-BigDef5 (1-86) synthesis and NMR structure determination will be described elsewhere.

Specificity, Synergy, and Complementary Broad-Spectrum Activity of Cg-BigDef1 and Cg-BigDef5 against Bacteria from the Oyster Microbiota

We used synthetic Cg-BigDef1 and Cg-BigDef5 to study their antibacterial activities against bacteria from the microbiota of C. gigas, including strains relevant to oyster infections. All assays were performed under physiological conditions, i.e., at a high salt concentration (400 mM NaCl). Cg-BigDef1 showed antibacterial activity against 11/26 strains from the C. gigas microbiota (Table 1). Cg-BigDef5 tended to be less active than Cg-BigDef1. Still, it showed antibacterial activity against 9/26 tested strains from the C. gigas oyster microbiota (Table 1). The highest activity was recorded against Marinomonas sp. 14.063, with a MIC of 0.6 µM. Cg-BigDef5 was also active against Alteromonas sp. 15. Overall, Cg-BigDef1 and Cg-BigDef5 were both active at a salt concentration (400 mM NaCl) pertinent to marine bacteria. They showed strain specificity and complementary activity spectra against the marine strains of the oyster microbiota collection: five strains were susceptible to both peptides, whereas six and four strains were only susceptible to Cg-BigDef1 and Cg-BigDef5, respectively. Only Cg-BigDef1 was active against Marinomonas sp. 15.5827 and Pseudoalteromonas sp. 15.5805.
These data show that Cg-BigDef sequence diversity extends the activity spectrum at the inter-genera level. The example of Marinomonas sp. strains, which are susceptible to different Cg-BigDefs, highlights an undiscovered specificity of Cg-BigDefs and illustrates that their sequence diversity extends their activity spectrum at an intra-genus level as well. It is important to note that all the strains mentioned here have a significant role in oyster health: they have been isolated from OsHV-1-infected oysters (https://doi.org/10.12770/0d529567-92fd-4dcd-9d9c-70e98ab6f772, accessed on 14 October 2022), and several of them belong to a set of conserved genera that proliferate during OsHV-1-induced dysbiosis [30]. Finally, we tested the synergy of Cg-BigDef1 and Cg-BigDef5 by the checkerboard microtiter assay against a Gram-positive and a Gram-negative strain displaying the lowest MICs for both peptides. The two big defensins acted synergistically against both strains. Indeed, synergy was recorded against the Gram-negative Alteromonas sp. 15.5805, with a fractional inhibitory concentration (FIC) index value of 1 (Table 1). Strong synergy was observed against the Gram-positive Bacillus sp. 15.5814, with an FIC value of 0.35 (Table 1). To summarize, Cg-BigDef1 and Cg-BigDef5 exhibit a broad activity spectrum. They show strain specificity, as well as complementary activities. Together they inhibit the growth of 15/26 strains tested. Finally, they act synergistically against both Gram-positive and Gram-negative bacteria. Altogether, these data show that sequence diversification of Cg-BigDefs has generated antimicrobial specificity and extended the activity spectrum of the peptide family against marine bacteria from the oyster microbiota, including strains associated with oyster pathologies.

Discussion

We found that oyster big defensins (Cg-BigDefs), a family of AMPs that have widely diversified in mollusks, can alter oyster microbiota composition in vivo as a result of direct antimicrobial activity against members of the oyster microbiota. Remarkably, we observed that sequence diversification had generated antimicrobial specificity as well as synergy between Cg-BigDef1 and Cg-BigDef5, thereby extending the activity spectrum of the peptide family and increasing its potency. Until now, it was largely unknown whether AMPs could shape the microbiota of mollusks, while this had been demonstrated in other animal phyla, particularly mammals [4], cnidarians [14], and insects [6]. Furthermore, when available, antimicrobial data have been largely acquired on microorganisms unrelated to molluscan health [27,36,37]. The lack of knowledge on immune/microbiota interactions in mollusks is due to several methodological obstacles and knowledge gaps. Among them, it is worth mentioning (i) the only recent description of molluscan microbiomes, accelerated by facilitated access to next-generation sequencing (for oysters, see [18,19,31]); (ii) the lack of well-characterized culturable microbiota; and (iii) difficulties in producing a sufficient amount of high-quality AMPs and in developing efficient and reliable tools for gene knock-in, knock-out, and knock-down in several molluscan species. These difficulties were circumvented in this work by the chemical synthesis of pure big defensins from C. gigas according to our previously described procedure [26] and the construction of a collection of culturable bacteria isolated from oysters with known health status (identification by 16S phylogeny and MALDI Biotyping).
With such tools, we showed that Cg-BigDefs have broad-spectrum activities against bacterial strains from the oyster microbiota, including strains associated with major infectious diseases in oysters. In line with these observations, in vivo, Cg-BigDef1 induced significant changes in oyster microbiota β-diversity. This is consistent with in vivo results recently obtained by Schmitt and collaborators in the scallop Argopecten purpuratus [22]. The authors showed that the big defensin ApBD1 and the bactericidal/permeability-increasing protein ApLBP/BPI1 have the potential to shape the hemolymph microbiota of the scallop, particularly by regulating the proliferation of γ-proteobacteria. Our present work shows that changes in microbiota composition observed in vivo are linked to direct antimicrobial activities of Cg-BigDefs against bacteria belonging to the microbiota. Similar to human α-defensin HD-5 in the mouse gut [38], Cg-BigDef1 did not alter the overall bacterial load in oysters. Moreover, Cg-BigDef1 had no negative effects on oyster microbiota diversity, probably due to the specificity of the peptides, which, as host-defense effectors, have evolved to acquire antimicrobial activity against given bacterial strains without disrupting the entire oyster microbiota. Supporting this hypothesis, microbiota alterations were mainly visible at the ASV taxonomic level, indicating high specificity. For instance, upon treatment with Cg-BigDef1, we observed a reduced amount of Vibrio (γ-proteobacteria), which was in agreement with the in vitro activity of ApBD1 in the scallop [22]. Changes were visible in whole-tissue microbiota. However, microbiota composition was shown to vary significantly between oyster tissues (hemolymph, gut, gills, mantle) [28]. Therefore, it is likely that more contrasting effects of Cg-BigDefs occur on specific tissue microbiota, particularly in the hemolymph, which carries the Cg-BigDef-producing cells, the hemocytes [23]. With mounting evidence on the regulatory role of AMPs in host-microbiota interactions across animal phyla, including mollusks ([22], this study), one key question to be addressed in the future is how this affects the functions the microbiota serve in their host tissues. A striking feature of the evolutionary history of big defensins is their extensive diversification in some molluscan species, particularly the oyster C. gigas and the mussels Mytilus galloprovincialis and Dreissena rostriformis, while they did not diversify in other species (e.g., the scallop A. purpuratus) [24]. The functional consequences of this diversification have remained unexplored. Our present results demonstrate that sequence diversification has generated specificity and synergy among Cg-BigDefs, as evidenced by two members of the Cg-BigDef family, Cg-BigDef1 and Cg-BigDef5. Antibacterial specificity was observed from the bacterial genus down to the strain level within a given genus. Depending on the bacterial strain, Cg-BigDefs were bactericidal or simply inhibitory, with contrasting MICs, from 40 nM to 10 µM. This suggests that distinct mechanisms of action can underpin Cg-BigDef activities against the diversity of bacteria encountered in the oyster microbiota. Activities in the nanomolar range are consistent with receptor-mediated activities [36,39], while activities in the micromolar range are typically reported for membrane-active AMPs [40].
In oyster defensins (Cg-Defs), which have also diversified in oysters, we previously observed that sequence variation altered the potency of the peptides without affecting their range of activity [35]. Here, we also showed that sequence diversification was key to generating antimicrobial synergy between two members of the Cg-BigDef family. Similarly, sequence diversification generated synergy in two other families of oyster AMPs, the defensins Cg-Defs and the proline-rich peptides Cg-Prps [35]. Although not studied in the present article, synergy also occurred between AMP families, as observed between the bactericidal/permeability-increasing protein Cg-BPI, Cg-Prps, and Cg-Defs in the oyster C. gigas and between Attacins and Diptericins in the insect D. melanogaster [41]. Thus, the in vivo effects of Cg-BigDefs on the shaping of oyster microbiota are likely to extend well beyond the observations in this paper, where we tested the effects of only one member of the Cg-BigDef family. This was also the case in the A. purpuratus scallop study. However, in scallops, unlike other molluscan species (oysters and mussels), big defensins have not diversified, and the activity of ApBD1 recapitulates that of the entire AMP family. Overall, we have highlighted an important role for sequence diversification in increasing the antimicrobial potential of oyster Cg-BigDefs, by generating both antimicrobial specificity and synergy, an observation that extends at least to two additional peptide families, Cg-Defs and Cg-Prps. We can hypothesize that some species of bivalve mollusks, such as oysters, have diversified their repertoire of AMPs to increase their adaptive potential while constantly exposed to diversified microbial communities. While sequence diversification was shown to be a major asset in terms of antimicrobial defenses, we still do not know how antimicrobial specificity is generated. We have shown that changes in primary structure between Cg-BigDef1 and Cg-BigDef5 (62.8% sequence identity) produced antibacterial specificity. Cg-BigDef1 and Cg-BigDef5 have similar biophysical parameters in terms of size (86-93 amino acids) and positive net charge at neutral pH (+6 to +7 at pH = 7.4, i.e., oyster physiological pH), with conserved domains, as recently determined by NMR ([34]; Figure 4, top panel). The position and pairing of cysteines are also similar. A major difference observed between Cg-BigDef1 and Cg-BigDef5 was the length and primary sequence of the linker region connecting the N-terminal hydrophobic domain and the C-terminal β-defensin-like domain. Whereas the five-residue linker of Cg-BigDef5 is exposed to the solvent, the ten-residue linker of Cg-BigDef1 plays a key role in the 3D compaction of the protein, being buried at the interface of the two domains and locking their relative orientation [27]. In Cg-BigDef1 and Cg-BigDef5, the orientation of the N- and C-terminal domains differs by around 100° (dihedral angle between the β-sheet of the N-terminal domain and the last strand of the β-sheet of the C-terminal domain; Figure 4, top panel), leaving the α-helix of the C-terminal domain outside the interaction interface between the two domains. We also looked at surface properties, as the surface charge is considered critical for the interactions of AMPs with bacteria [42]. However, no clear quantitative difference can be observed between Cg-BigDef1 and Cg-BigDef5.
Both are highly cationic, and positive charge repartition is shared by the N-terminal and C-terminal domains, yet the positive surface of each molecule is on opposite sides (Figure 4, middle panel). Since the salt concentration in the oyster is very high (similar to seawater), charges may be shielded and might not be the primary type of interaction that is important for bacterial interaction. Instead, the hydrophobicity of the big defensin surface could play a major role in how the molecules approach their target. As seen in Figure 4 (bottom), Cg-BigDef1 displays a more hydrophobic C-terminal domain than Cg-BigDef5, whereas Cg-BigDef5 displays a more hydrophobic N-terminal domain than Cg-BigDef1. We are still unable to explain which structural determinants play a role in the specificity of Cg-BigDefs. For instance, we do not know whether residues exposed in the linker (i.e., the most diversified residues) play a role in the interaction with microbes. Another unresolved issue is the stability of Cg-BigDef activity at high salt concentrations. Indeed, both Cg-BigDef1 and Cg-BigDef5 were active against a wide range of marine bacteria at 400 mM NaCl, in agreement with previous findings for Cg-BigDef1 [27]. This stability of the antimicrobial activity is unique and essential for the peptides to participate as direct effectors in oyster antimicrobial defense. While many questions are still open on the structure-activity relationships of big defensins and their domains, the molecular tools are now available to unveil the consequences of sequence variation in the interactions of Cg-BigDef domains with the bacteria and/or in nanonet assembly. The same applies to bacteria from the oyster microbiota, with a collection of culturable bacteria that will be highly useful for testing functional hypotheses.

(Figure 4 legend: Middle: electrostatic potential on the accessible surface of the proteins, with red representing negative charges and blue representing positive charges. Bottom: hydrophobicity potential on the accessible surface of the proteins, with light blue representing hydrophilic properties and brown representing hydrophobic properties. Both electrostatic and hydrophobic potentials were determined using ChimeraX software [42,43].)

Conclusions

We have demonstrated that sequence diversification in Cg-BigDefs has helped to improve oyster defense against pathogens and to control oyster-associated bacterial communities. Indeed, we highlighted an undiscovered specificity and synergy between Cg-BigDefs, which broadened their activity spectrum. These results pave the way for future studies on the mechanism of action of big defensins, which may vary depending on bacterial targets.

Oysters

Oysters with limited genetic diversity were obtained as follows. Genitor oysters were collected in 2015 from the Le Dellec area in Brest bay, which is devoid of shellfish farming. The first generation of full siblings, named F14, was produced as described in [21].
From this family, two oysters were used to generate a second generation of full siblings, referred to as F14V (Decicomp project ANR-19-CE20-004). Offspring were kept at the Ifremer hatchery in Argenton (France) up to day 40. Then, they were grown at the Ifremer station in Bouin (France) until they were 10 months old. Isolation of bacteria from oyster flesh and antibacterial assays were performed in Zobell medium at 20 °C. Zobell medium is composed of artificial seawater (ASW) [44] supplemented with 0.4% bactopeptone and 10% yeast extract, pH 7.8. Bacteria were isolated from the flesh of live oysters affected by the Pacific Oyster Mortality Syndrome (susceptible families F11, F14, and F15 from the Decipher project ANR-14-CE19-0023) [21]. Additional bacteria isolated from diseased oysters were provided by the French National Reference Laboratory (Ifremer, La Tremblade, France). Finally, bacteria isolated from healthy commercial or wild oysters were included.

Molecular Phylogeny Based on 16S rRNA

Taxonomic assignment down to the genus level was performed for each strain by molecular phylogeny based on the sequence of the V3-V4 region of 16S rRNA obtained by Sanger sequencing. In order to consolidate the phylogenetic tree, at least one GenBank reference sequence corresponding to a type (T) strain per genus of interest was added from the NCBI database (Table S4). The 95 sequences were trimmed at 405 bp (V3-V4 loop) using BioEdit and aligned using ClustalW. The phylogenetic tree was constructed by the Maximum Likelihood method with the Kimura 2-parameter model [45] using MEGA X software [46] and annotated using iTOL software [47]. The branches are supported by the bootstrap method with 500 iterations.

Identification of Bacterial Isolates by Matrix-Assisted Laser Desorption Ionization Mass Spectrometry (MALDI Biotyping)

MALDI Biotyping was used to confirm 16S taxonomic assignments or, when the libraries required it, to enrich them with new bacteria absent from MALDI libraries (common for marine strains). For these purposes, a protocol coupling inactivation with 75% ethanol and extraction with 70% formic acid was performed based on the MALDI Biotyper® protocol (Bruker Daltonics, Bremen, Germany). Briefly, from each plate, one isolated colony was suspended in MilliQ water in 1.5 mL Eppendorf tubes. Ethanol (100%) was added to the suspension, and the tubes were centrifuged twice (13,000 rpm, 2 min). Subsequently, 10 µL of a 70% formic acid solution was added to the pellet. In order to complete the extraction, 10 µL of pure acetonitrile was added. One microlitre of each extract was deposited three times (technical replicates) on a MALDI target (Bruker Daltonics, Bremen, Germany), air-dried, and coated with 1 µL of fresh alpha-cyano-4-hydroxycinnamic acid matrix at saturating concentration in a solution of 50% ACN and 2.5% TFA (Bruker Daltonics, Bremen, Germany). The MALDI MS spectra of these spots were acquired with an Autoflex III Smartbeam MALDI-TOF MS, recording masses ranging from 2000 to 20,000 Da using standard parameters (flexControl 3.4, Bruker Daltonics, Bremen, Germany), and interrogated against the existing databases. For bacterial species not present in the existing MALDI Biotyper® reference mass spectra libraries, a reference spectrum was created and entered in our local database, as follows: the bacterial extract was spotted 8 times, each spot analyzed three times, for a total of twenty-four recorded spectra per bacterial strain.
After manual checking, the twenty best spectra were transformed into an average spectrum by the MBT Compass Explorer software. The BTS (bacterial test standard) serves as a calibrator and contains Escherichia coli extract. The reference libraries used for the analysis are the official Bruker MALDI Biotyper® spectral library (MBT reference library, https://www.bruker.com/en/products-and-solutions/microbiology-and-diagnostics/microbial-identification/maldi-biotyper-library-ruo.html, accessed on 10 October 2022) and the freely available EnviBase, exclusively dedicated to the identification of potentially pathogenic Vibrio in marine mollusks (seanoe.org, accessed on 10 October 2022) [48].

Molecular Cloning and Sequence Data Analysis

The Cg-BigDef5 gene was PCR-amplified using specific primers (Fw: 5′-AATCAAGTCAACATGAACAG-3′; Rv: 5′-TTATCCTAGATTTCTAGGTC-3′) based on a transcript sequence previously found in publicly available databases [24], cloned into a pGEM-T Easy vector (Promega), and then sequenced using the Sanger dideoxy methodology (Applied Biosystems 3500 Series Genetic Analyzer). Exon-intron boundaries were defined by the alignment of cDNA and genomic sequences. Nucleotide sequences were manually inspected and translated using the ExPASy Translate Tool (http://web.expasy.org/translate/, accessed on 1 September 2022). Prediction of signal peptides and other posttranslational processing was carried out using the ProP 1.0 server (https://services.healthtech.dtu.dk/service.php?ProP-1.0, accessed on 1 September 2022), while the theoretical isoelectric point (pI) and molecular weight (MW) of the mature peptides were calculated using the ExPASy ProtParam Tool (http://web.expasy.org/protparam/, accessed on 1 September 2022). Multiple alignments of amino acid sequences were generated using MUSCLE with default parameters (https://www.ebi.ac.uk/Tools/msa/muscle/, accessed on 1 September 2022).

Peptide Synthesis and Net Charge Calculation

Cg-BigDef1 was synthesized as already described, using a combination of solid-phase chemical synthesis and native chemical ligation (NCL) followed by a thermodynamically controlled oxidative folding step [26,27,49]. Cg-BigDef5 was obtained following a similar synthetic scheme (see Supplementary Figure S7 and [34] for optimization and details, as well as for 3D structure determination by NMR). Peptide net charges at pH = 7.4 (oyster physiological pH) were predicted using the IPC protein pKa dataset [50]; toy versions of this calculation and of the MIC read-out described next are sketched at the end of this subsection. MIC and MBC values were determined as previously described [35]. Briefly, big defensin stock solutions were serially diluted in sterile MilliQ water. A total of 10 µL of peptide was incubated with 90 µL of bacterial suspension, brought to the exponential growth phase and adjusted to A600 = 0.001 in Zobell medium at 20 °C. Bacteria were grown under shaking in a sterile, non-pyrogenic polystyrene 96-well plate (Falcon). Growth was monitored at 600 nm on a TECAN spectrophotometer with one measurement per hour over 24 h. MIC values are expressed as the lowest concentration tested (µM) that results in 100% growth inhibition. For the determination of MBCs, after a 24 h incubation, 100 µL of each well was plated on Zobell agar medium at 20 °C. MBC values are expressed as the lowest concentration tested (µM) for which no colonies could be counted on a Petri dish.
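Two of the computations described in this subsection can be sketched in a few lines of Python: a Henderson-Hasselbalch net-charge estimate and the MIC read-out from endpoint absorbance. This is a hypothetical illustration: the pKa values below are generic textbook defaults rather than the IPC protein pKa dataset used in the paper, and the sequence and plate data are invented.

```python
# Generic side-chain / terminus pKa values (NOT the IPC dataset used above).
PKA_POS = {"K": 10.5, "R": 12.5, "H": 6.0}
PKA_NEG = {"D": 3.9, "E": 4.1, "Y": 10.1}   # disulfide-bonded Cys ignored
PKA_NTERM, PKA_CTERM = 9.0, 2.4

def net_charge(seq, ph=7.4, amidated_cterm=True):
    # Henderson-Hasselbalch sum over ionizable groups.
    q = 1.0 / (1.0 + 10 ** (ph - PKA_NTERM))          # free N-terminus
    if not amidated_cterm:                            # amidation removes this charge
        q -= 1.0 / (1.0 + 10 ** (PKA_CTERM - ph))
    for aa in seq:
        if aa in PKA_POS:
            q += 1.0 / (1.0 + 10 ** (ph - PKA_POS[aa]))
        elif aa in PKA_NEG:
            q -= 1.0 / (1.0 + 10 ** (PKA_NEG[aa] - ph))
    return q

def mic(concentrations_uM, final_a600, no_growth=0.01):
    # Lowest tested concentration with 100% growth inhibition after 24 h.
    for c, a in sorted(zip(concentrations_uM, final_a600)):
        if a <= no_growth:
            return c
    return None

print(round(net_charge("GRKKRHDEY"), 2))  # toy sequence, not a real Cg-BigDef
print(mic([0.3, 0.6, 1.25, 2.5, 5.0], [0.45, 0.40, 0.12, 0.005, 0.002]))  # 2.5
```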
Synergies between Cg-BigDef1 and Cg-BigDef5 were measured as previously described using the checkerboard microtiter assay, which highlights a potential reduction of the MIC of each peptide when used in combination. In this assay, 2-fold serial dilutions of one peptide are tested against 2-fold serial dilutions of the other peptide. Results are expressed by calculating fractional inhibitory concentration (FIC) index values [35]; a minimal version of this computation is sketched below, at the end of the Methods.

Microbiota Modifications Induced by Cg-BigDef1 In Vivo

A biparental family of juvenile C. gigas oysters (family F14-V, 10 months old, average wet weight of flesh 200 ± 27 mg) was used in the in vivo assays. All oysters were maintained under controlled biosecurity conditions to ensure their specific pathogen-free status. For anesthesia, oysters were kept for 12 h outside seawater tanks and anesthetized two hours before the experiment in seawater containing 50 g/L MgCl2 [51]. Control animals (n = 8) were collected before (NTC, non-treated controls) and after the entire anesthesia procedure (AC, anesthesia controls). Before injection into oysters, Cg-BigDef1 was dissolved in sterile ASW at a concentration of 20 µM. The concentration was verified as described above for MIC and MBC determination. Injection of Cg-BigDef1 (50 µL) was performed right after anesthesia into the oyster adductor muscle to reach a final concentration of 5 µM of Cg-BigDef1 in oyster flesh. An injection of 50 µL sterile ASW was used as a control treatment. Oysters (n = 8 per condition) were sampled 10 min (T0), 6 h (T6), and 24 h (T24) after injection. For oyster sampling, shells were removed, and flesh was recovered and snap-frozen in liquid nitrogen. Individual oysters were ground in liquid nitrogen in 50 mL stainless steel bowls with 20-mm-diameter grinding balls (Retsch MM400 mill) and stored at −80 °C until DNA extraction. DNA extraction was performed as described in [21] using the Nucleospin tissue kit (Macherey-Nagel, Düren, Germany). DNA concentration and purity were checked with a NanoDrop One (Thermo Fisher Scientific, Waltham, MA, USA). Sequencing data were processed using the SAMBA pipeline v3.0.1. The SAMBA workflow, developed by the SeBiMER (Ifremer's Bioinformatics Core Facility), is an open-source modular workflow to process eDNA metabarcoding data. SAMBA is developed using the NextFlow workflow manager [53]. All bioinformatics processes are mainly based on the use of the next-generation microbiome bioinformatics platform QIIME 2 [54] (version 2020.2) and the approach of grouping sequences in ASVs (Amplicon Sequence Variants) using DADA2 (v1.14, [55]). Taxonomic assignment of ASVs was performed using a Bayesian classifier trained with the Silva database v.138 using the QIIME feature classifier [56]. Statistical analyses were also performed with R (R Core Team, 2020) using the R packages Phyloseq v1.38.0 [57] and Vegan v2.6-2 [53]. For α-diversity, we used the full data set to analyze differences in regularity (calculated as H/ln(S), where H is the Shannon-Wiener index and S is species richness) and species richness (total number of species) using the SAMBA pipeline and ANOVA. For β-diversity, the ASV matrix of all 64 libraries was preliminarily normalized. Briefly, after verification of the rarefaction curves produced with the ggrare function [57], libraries were sub-sampled to 45,361 reads using the rarefy_even_depth function.
The normalized ASV matrix was then filtered for low-abundance ASVs to limit the prevalence of putative artifacts due to sequencing errors. For this purpose, only ASVs with at least four reads in at least four samples were retained. We then retained samples associated with the ASW and Cg-BigDef1 experimental conditions at T0, T6, and T24. The variation in microbiota composition was then investigated using principal coordinate analyses (PCoA) based on Bray-Curtis distances at each kinetic point. Putative differences between groups were assessed by statistical analyses (Permutational Multivariate Analysis of Variance, PERMANOVA) using the adonis2 function implemented in vegan [58]. The mean relative abundance of the 10 most abundant bacterial genera in the oyster microbiota was also estimated. Results were graphically represented by a heatmap. We used the STAMP software [59] to generate extended error bar plots. Statistical differences were assessed by Welch's t-test with the Benjamini-Hochberg procedure, which controls the false discovery rate (FDR). Finally, we used DESeq2 v1.36.0 [60] to identify ASVs whose abundance varied significantly in oysters injected with Cg-BigDef1 or ASW (control) for the last kinetic point (i.e., T24). Differential abundance was analyzed using a negative binomial method implemented in the DESeq2 package, as recommended by [57]. For this latter analysis, we only considered ASVs with an adjusted p value < 0.01. Note that ASVs lacking genus annotation and qualified as "unknown" were not considered for result interpretation.

Quantification of Total 16S Bacterial DNA

Total 16S bacterial DNA was quantified by quantitative PCR (qPCR). All amplification reactions were analyzed using a Roche LightCycler 480 Real-Time thermocycler (qPHD-Montpellier GenomiX platform, Montpellier University, Montpellier, France). The total qPCR reaction volume was 1.5 µL and consisted of 0.5 µL DNA (30 ng·µL−1) and 0.75 µL LightCycler 480 SYBR Green I Master mix (Roche) containing 0.5 µM PCR primer (Eurogentec SA). Primers used for total bacteria were 341F 5′-CCTACGGGNGGCWGCAG-3′ and 805R 5′-GACTACHVGGGTATCTAATCC-3′, which target the 16S variable V3-V4 loops [52]. A Labcyte Acoustic Automated Liquid Handling Platform (ECHO) was used for pipetting into the 384-well plate (Roche). A LightCycler® 480 Instrument (Roche) was used for qPCR with the following program: enzyme activation at 95 °C for 10 min, followed by 40 cycles of denaturation (95 °C, 10 s), hybridization (60 °C, 20 s) and elongation (72 °C, 25 s). A melting curve analysis of the amplicon was then performed to verify the specificity of the amplification. Relative quantification of 16S bacterial DNA copies was calculated by the 2^(−ΔΔCq) method [61], using the mean of the measured cycle threshold values of a reference gene (Cg-EF1α, elongation factor 1α; GenBank: AB122066) as a calibrator. Minimal sketches of the FIC and 2^(−ΔΔCq) computations are given below.
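As announced above, here are minimal sketches of the FIC-index and 2^(−ΔΔCq) computations. The numeric values are invented for illustration, and cut-off conventions for interpreting FIC indices vary between authors.

```python
def fic_index(mic_a_combo, mic_a_alone, mic_b_combo, mic_b_alone):
    # Checkerboard assay read-out:
    # FIC = MIC_A(in combination)/MIC_A(alone) + MIC_B(in combination)/MIC_B(alone).
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def rel_quantity(cq_16s, cq_ef1a, cq_16s_cal, cq_ef1a_cal):
    # 2^-(ΔΔCq): ΔCq = Cq(16S) - Cq(Cg-EF1α); ΔΔCq = ΔCq(sample) - ΔCq(calibrator).
    ddcq = (cq_16s - cq_ef1a) - (cq_16s_cal - cq_ef1a_cal)
    return 2.0 ** (-ddcq)

print(round(fic_index(0.15, 1.2, 0.08, 0.6), 3))       # 0.258: synergy under most cut-offs
print(round(rel_quantity(18.2, 21.0, 19.0, 21.1), 2))  # ~1.62-fold vs calibrator
```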
Supplementary Materials: The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/md20120745/s1. Table S1: Permutational multivariate analysis of variance (PERMANOVA) table of oyster microbiota at the ASV level for experimental anesthesia; Table S2: Permutational multivariate analysis of variance (PERMANOVA) table of oyster microbiota at the ASV level comparing experimental injections (i.e., oysters injected with ASW or with Cg-BigDef1); Table S3: Results of ad hoc pairwise PERMANOVA testing for differences in oyster microbiota at the ASV level between experimental conditions (i.e., injected with ASW or injected with Cg-BigDef1); Table S4: List of standard strains used as reference sequences in the 16S phylogenetic analysis; Figure S1: Species richness rarefaction curves; Figure S2: Lack of effect of anesthesia on the oyster microbiome; Figure S3: No changes in oyster microbiota α-diversity after injection of Cg-BigDef1; Figure S4: Differences in oyster microbiota between ASW and Cg-BigDef1 conditions for the top 10 genera; Figure S5: ASVs differentially represented at T24 in oysters injected with Cg-BigDef1 or ASW; Figure S6: No changes in total bacterial load in oysters injected with Cg-BigDef1; Figure S7: Chemical synthesis of Cg-BigDef5 used in this study.
2D Smagorinsky type large eddy models as limits of stochastic PDEs

We prove that a version of the Smagorinsky large eddy model for a 2D fluid in vorticity form is the scaling limit of suitable stochastic models for large scales, where the influence of small turbulent eddies is modeled by a transport type noise.

Introduction

Recently, a new stochastic approach has been developed in [13,14,18,12,16,10,6] to explain the Boussinesq hypothesis that "turbulent fluctuations are dissipative on large scales" [5]. The idea, better explained below in Section 2, is that the large scales satisfy a Navier-Stokes type equation with a stochastic transport term corresponding to the action of small scales. In a suitable scaling limit, we get a deterministic Navier-Stokes equation with an additional dissipative term. The turbulent viscosity is directly related to the noise (namely small-scale) covariance. All the quoted works are related to dimension 2, with the exception of [16], which deals with a 2D-3C model with some three-dimensional features, including a stretching term of small scales over large ones and the possibility of an AKA (anisotropic kinetic alpha) effect in the limit equation. For other approaches to justify the Boussinesq hypothesis and turbulent viscosity based on Eulerian formulations of fluid dynamical systems see for instance [3,23,30]. There are also different models based on filtering the systems at the Lagrangian level rather than the Eulerian one; we refer to [20,21,7,8] for rigorous analysis and some discussions on the topic. The previous works on the stochastic approach are, however, limited to the case of a linear limit dissipation term, namely a turbulent viscosity independent of the solution. Smagorinsky type models are excluded from the previous analysis, and it was not clear for some time how to incorporate them into this new theory. In this paper we solve this problem. This provides new insight into these models and their motivations. Since our techniques are, at present, well developed for the vorticity equation, while they suffer certain difficulties for the velocity equation, we present the results for vorticity type equations (however, as stated in [29, Section 5], the performances of vorticity-velocity models are sometimes superior to those of velocity-pressure ones). We choose the following form, discussed for instance in [9]:

∂_t ω_L + u_L · ∇ω_L = ν Δω_L + div(g′(ω_L) ∇ω_L)   (1)

(written in this way so that div(g′(ω_L) ∇ω_L) = Δg(ω_L)), with the additional conditions ω_L = ∇^⊥ · u_L, div u_L = 0 and the initial condition ω_L|_{t=0} = ω_L^0. Here, L stands for the large scale components of fluid vorticity and velocity, see the next section for more discussions; the fields are assumed to be periodic, on a torus. The function g(r) is subject to quite general assumptions, which include that it is non-decreasing, so that g′ is nonnegative. The particular case treated in [9] (see also [25,29,11]) is

g′(r) = (C_s Δ)² |r|   (2)

where Δ is a subgrid characteristic length-scale and C_s is a non-dimensional constant which has to be calibrated; its value may vary with the type of the flow and the Reynolds number. However, similarly to the Smagorinsky model in velocity form, it may be useful to cover more general nonlinearities, see for instance [3, Section 3.3.2]. We prove that this Smagorinsky type model is the limit of the large-scale stochastic model

dω_L + u_L · ∇ω_L dt = ν Δω_L dt + Σ_{k} θ_k (σ_k · ∇f(ω_L)) ∘ dW^k   (3)

with the same side conditions as in (1). The limit is taken along a suitable sequence of small-scale noise, namely we assume (roughly speaking) that the σ_k are of smaller and smaller scale (an assumption of scale separation).
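The rewriting used in (1) is just the chain rule; for completeness, here is the one-line check (assuming g is C¹, so that ∇g(ω_L) = g′(ω_L)∇ω_L):

```latex
\nabla g(\omega_L) = g'(\omega_L)\,\nabla\omega_L
\quad\Longrightarrow\quad
\operatorname{div}\big(g'(\omega_L)\,\nabla\omega_L\big)
  = \operatorname{div}\big(\nabla g(\omega_L)\big)
  = \Delta g(\omega_L).
```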
The notations and assumptions (like the fact that {W^k}_k are independent Brownian motions and ∘ is the Stratonovich multiplication operation) will be explained in the technical sections. The paper is organized as follows. In Section 2 we describe the heuristic ideas behind the stochastic model. In Section 3 we state our results and introduce some mathematical tools. In Section 4 we show the existence of martingale solutions of the problem (3) above. Lastly, in Section 5 we show our main result about the convergence of martingale solutions of our stochastic models to a measure concentrated on the unique weak solution of the Smagorinsky model (1); see Theorem 5 below for the rigorous statement.

The heuristic idea

The idea described in this section is similar to the one given in [12,16], but we repeat it and particularise it to the models studied here, for completeness and to help the intuition behind the model. Consider a 2D Newtonian viscous fluid in a torus, described in vorticity form by the equations

∂_t ω + u · ∇ω = ν Δω,   ω = ∇^⊥ · u,   div u = 0,

where ω is the vorticity field and u the velocity field. Assume that the initial vorticity ω_0 is the sum of a large scale component ω_L^0 plus a small-scale component ω_S^0. Then, at least on a short time interval [0, τ], it is reasonable to expect that the system

∂_t ω_L + (u_L + u_S) · ∇ω_L = ν Δω_L,
∂_t ω_S + (u_L + u_S) · ∇ω_S = ν Δω_S,

represents quite well the evolution of the different vortex structures, as for instance in the small vortex-blob limit to point vortices treated by [26]. The system above is equivalent to the original one, by addition. The next step is considering only the equation for the large scales, isolating the term which is not closed, namely depends on the small scales:

∂_t ω_L + u_L · ∇ω_L + u_S · ∇ω_L = ν Δω_L.

Here u_L, with div u_L = 0, has the property ∇^⊥ · u_L = ω_L (namely u_L is reconstructed from ω_L by the Biot-Savart law). The field u_S should correspond to ω_S by the Biot-Savart law, but we now introduce a stochastic closure assumption. We replace u_S(t, x) by a white-in-time noise, with suitable space dependence,

u_S(t, x) dt ⇝ χ(t, x) Σ_k σ_k(x) ∘ dW^k_t,

where {σ_k}_k are suitable divergence free vector fields, and χ(t, x) is a scalar stochastic process which will be linked to the large scales, in order to model the idea that the turbulent small scales are more active where the large scales have more intense variations (e.g., larger shear); {W^k}_k are independent scalar Brownian motions. In the replacement, Stratonovich integrals are used, in accordance with the Wong-Zakai principle (see rigorous results in [10]). Therefore the equation for large scales, now closed and stochastic, takes the form

dω_L + u_L · ∇ω_L dt + Σ_k χ (σ_k · ∇ω_L) ∘ dW^k = ν Δω_L dt.

Previous works developed this idea in the case when χ = 1, see e.g. [18,19,10]. Here we assume that χ is a function of ω_L, which for notational convenience will be written as

χ(t, x) = f′(ω_L(t, x))

for a suitable function f. As said above, the heuristic idea is that turbulence is more developed in regions of high large-scale vorticity, hence the small-scale noise should be modulated by an increasing function f′. This is the motivation for the stochastic model (3) presented in the Introduction. Our main purpose is showing that it leads to the Smagorinsky type deterministic equation (1) in a suitable scaling limit of the noise.

Functional Setting and Main Results

Let us set some notation before stating the main contributions of this work. Let T² = R²/Z² be the two dimensional torus and Z²₀ = Z² \ {0} the nonzero lattice points. Let (H^{s,p}(T²), ‖·‖_{H^{s,p}}), s ∈ R, p ∈ (1, +∞), be the Bessel spaces of zero mean periodic functions.
In the case p = 2, we simply write H^s(T²) in place of H^{s,2}(T²), and we denote by ⟨·,·⟩_{H^s} the corresponding scalar products. In the case s > 0, we denote by ⟨·,·⟩_{H^{−s},H^s} the dual pairing between H^s and H^{−s}. Lastly, we denote H^{s−}(T²) = ∩_{r<s} H^r(T²). In the case s = 0 we write L²(T²) instead of H^0(T²), and we neglect the subscript in the notation for the norm and the inner product. Similarly, we introduce the Bessel spaces of zero mean vector fields H^{s,p} = H^{s,p}(T²; R²). Again, in the case s = 0 we write L² instead of H^0 and we neglect the subscript in the notation for the norm and the scalar product. Let Z be a separable Hilbert space, with associated norm ‖·‖_Z. We denote by C_w^F([0,T]; Z) the space of weakly continuous adapted processes with values in Z. Following the ideas introduced in Section 2, we are interested in the following stochastic model with a more precise noise (cf. [22,13]):

dω + u · ∇ω dt = ν Δω dt + Σ_{k∈Z²₀} θ_k (σ_k · ∇f(ω)) ∘ dW^k   (4)

Here {σ_k}_{k∈Z²₀} is the standard orthonormal basis of divergence free vector fields in L², made by the eigenfunctions of the Stokes operator, i.e. σ_k = (k^⊥/|k|) e_k, where {e_k}_{k∈Z²₀} is the real Fourier basis of zero mean functions, and {W^k}_{k∈Z²₀} is a family of real independent Brownian motions. Moreover, we assume that θ ∈ ℓ²(Z²₀) with

‖θ‖_{ℓ²} = 1 and Σ_{k∈Z²₀} θ_k² |k|^{2α} < +∞   (5)

for some α ∈ [0, 1]. This implies, in particular, the summability estimates (6) and (7) used repeatedly below. In the sequel, we shall omit the subscript L to save notation. System (4) can be formulated easily in Itô form: the Stratonovich integrals are rewritten as Itô integrals plus a corrector, where the last step is due to the fact (cf. [15, Lemma 2.6] for a proof) that

Σ_{k∈Z²₀} θ_k² σ_k(x) ⊗ σ_k(x) = ½ ‖θ‖²_{ℓ²} I₂,   (8)

the latter being the 2×2 unit matrix. Thanks to the computations on the Itô-Stratonovich corrector above, equation (4) can be rewritten as

dω + u · ∇ω dt = ν Δω dt + ¼ div(f′(ω)² ∇ω) dt + Σ_{k∈Z²₀} θ_k (σ_k · ∇f(ω)) dW^k.   (9)

We introduce the real function g : R → R defined as

g(r) = ¼ ∫₀^r f′(s)² ds,

which satisfies g(0) = 0 and

g′(r) = ¼ f′(r)² ≥ 0.   (10)

From the definition of g it follows that system (9) can be rewritten as

dω + u · ∇ω dt = ν Δω dt + Δg(ω) dt + Σ_{k∈Z²₀} θ_k (σ_k · ∇f(ω)) dW^k.   (11)

The relation between u and ω can be described in terms of the so-called Biot-Savart operator K, namely u = K[ω] := ∇^⊥(−Δ)^{−1} ω. We are now ready to define our notion of solution for system (11).

Definition 1. We say that system (11) has a weak solution if there exists a filtered probability space (Ω, F, (F_t)_{t∈[0,T]}, P), a family {W^k}_{k∈Z²₀} of independent (F_t)-Brownian motions and a process ω ∈ C_w^F([0,T]; L²(T²)) ∩ L²(Ω; L²(0,T; H¹)) such that the weak formulation of (11) holds P-a.s. for every φ ∈ C^∞(T²) and every t ∈ [0,T].

Due to the nonlinearities appearing in equation (11), the existence of weak solutions is a nontrivial fact, which will be proved in Section 4. Indeed, we will prove the following result.

Theorem 2. For each ω_0 ∈ L²(T²) there exists at least one weak solution of system (11) in the sense of Definition 1. Moreover, ω ∈ L^p(Ω; L^p(0,T; L²)) ∩ L²(Ω; L²(0,T; H¹)).

Next, following the idea introduced for the first time in [22], we consider a family {θ^N}_{N∈N} ⊂ ℓ²(Z²₀), each satisfying (5) and such that

lim_{N→+∞} ‖θ^N‖_{ℓ∞} = 0,   (12)

and we call ω^N the corresponding weak solution of equation (11) with {θ^N_k}_k in place of {θ_k}_k. In order to complete our plan, we want to show that the law of ω^N converges weakly to a measure supported on the unique weak solution of the Navier-Stokes equation in vorticity form with Smagorinsky correction, namely

∂_t ω + u · ∇ω = ν Δω + Δg(ω),   ω = ∇^⊥ · u,   div u = 0,   ω|_{t=0} = ω_0.   (13)

Remark 3. Taking f(r) = (4/3) C_s Δ |r|^{1/2} r, with C_s and Δ the same as in (2), we have g(r) = ½ (C_s Δ)² r² sign(r), and thus Δg(ω) = (C_s Δ)² div(|ω| ∇ω). In this way, we recover the Smagorinsky model of [9].

By a weak solution of (13) we mean the following:

Definition 4. We say that ω is a weak solution of equation (13) if ω ∈ C_w([0,T]; L²(T²)) ∩ L²(0,T; H¹) and, for every φ ∈ C^∞(T²) and every t ∈ [0,T],

⟨ω_t, φ⟩ − ⟨ω_0, φ⟩ = ν ∫₀^t ⟨ω_s, Δφ⟩ ds + ∫₀^t ⟨g(ω_s), Δφ⟩ ds + ∫₀^t ⟨ω_s u_s, ∇φ⟩ ds.

In Section 5 we will first show the uniqueness of the weak solutions of (13); then we will show our main result, which reads as follows.

Theorem 5. Let {θ^N}_N satisfy (5) and (12). Let ω^N be a weak solution of (11) corresponding to θ^N, and Q^N its law on C([0,T]; H^−) ∩ L²(0,T; H^{1−}). Then the family {Q^N}_N is tight and it converges weakly to the Dirac measure δ_ω, where ω is the unique weak solution of equation (13).

Preparatory results

Before starting, we need to recall some results that we will use in Sections 4 and 5 in order to prove Theorems 2 and 5; see [27,2] for more details on these results.
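Before the preparatory results, it may help to record a condensed, formal version of the Itô-Stratonovich computation used above to pass from (4) to (9)-(11). This is only a sketch (pointwise chain-rule manipulations, no care about regularity; the first-order term cancels by the k ↔ −k symmetry of the basis); the rigorous ingredient is the identity (8) quoted from [15, Lemma 2.6]:

```latex
% Write G_k(\omega) = \theta_k\,\sigma_k\cdot\nabla f(\omega)
%                   = \theta_k\, f'(\omega)\,\sigma_k\cdot\nabla\omega.
% The formal Stratonovich-to-Ito corrector \tfrac12\sum_k DG_k(\omega)[G_k(\omega)] is
\tfrac12\sum_k \theta_k^2\Big( f''(\omega) f'(\omega)\,(\sigma_k\cdot\nabla\omega)^2
      + f'(\omega)\,\sigma_k\cdot\nabla\big(\sigma_k\cdot\nabla f(\omega)\big)\Big).
% With \|\theta\|_{\ell^2}=1, (8) gives
% \sum_k \theta_k^2 (\sigma_k\cdot\nabla\omega)^2 = \tfrac12 |\nabla\omega|^2 and
% (formally) \sum_k \theta_k^2\,\sigma_k\cdot\nabla(\sigma_k\cdot\nabla h) = \tfrac12\Delta h,
% so the corrector equals
\tfrac14\Big( f'(\omega)^2\,\Delta\omega + 2 f''(\omega) f'(\omega)\,|\nabla\omega|^2 \Big)
  = \operatorname{div}\Big(\tfrac14 f'(\omega)^2\,\nabla\omega\Big)
  = \Delta g(\omega), \qquad g'(r) = \tfrac14 f'(r)^2 .
```

One can check on Remark 3 that this normalization is consistent: with f(r) = (4/3)C_sΔ|r|^{1/2}r one has f′(r) = 2C_sΔ|r|^{1/2}, hence g′(r) = ¼f′(r)² = (C_sΔ)²|r|, which is exactly (2).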
In the following, X, B, Y are separable Banach spaces such that X ↪_c B ↪ Y, where ↪_c means compact embedding.

Lemma 8. Let (Ω, A, P) be a probability space, U and H separable Hilbert spaces. Let G be an (F_t)_{t∈[0,T]} progressively measurable process which belongs to L²([0,T], L²(U,H)) P-a.s., while the G_n are (F^n_t)_{t∈[0,T]} progressively measurable processes which belong to L²([0,T], L²(U,H)) P-a.s. If …

In order to identify our limits we will use the following lemma on interpolation spaces (Lemma 9).

Galerkin Approximation

We introduce a sequence of Galerkin approximations ω^n. Given the orthogonal projector Π_n : L²(T²) → span{e_l, |l| ≤ n}, we look for a process ω^n, with values in span{e_l, |l| ≤ n}, solving the projection of system (11) onto this finite-dimensional space, where ω^n_0 = Π_n ω_0. Local existence of the solution ω^n is a classical fact due to the regularity of the coefficients appearing in the equation; see for example [24,28]. Global existence follows from the following a priori estimates.

Lemma 10. P-a.s., ω^n satisfies …

Proof. By Itô's formula and recalling the definition of g we have … The first and the third terms are identically equal to 0, due to the classical properties of the trilinear form of the Navier-Stokes equations and the following relation:

⟨σ_k · ∇ω^n, f(ω^n)⟩ = ∫_{T²} σ_k · ∇F(ω^n) dx = 0,

where the function F above is a primitive of f. Therefore we are left to show that … The last inequality is due to …, where in the third step we have used (8).

Lemma 10 shows in particular that {ω^n}_{n≥1} is bounded in L^p(Ω; L^p(0,T; L²)) ∩ L²(Ω; L²(0,T; H¹)). In order to apply Theorem 6 and Theorem 7, we need some energy estimates in W^{s,r}(0,T; H^{−β}), with s ≥ 0, r ≥ 2, β > 0 satisfying suitable conditions. To this end we first prove the following lemma (Lemma 11).

Proof. It is enough to consider |l| ≤ n. From the weak formulation satisfied by ω^n it follows that

⟨ω^n_t − ω^n_s, e_l⟩ = ν ∫_s^t ⟨ω^n_r, Δe_l⟩ dr + ∫_s^t ⟨g(ω^n_r), Δe_l⟩ dr + … =: I¹_{s,t} + I²_{s,t} + I³_{s,t} + I⁴_{s,t}.

The analysis of I¹_{s,t} and I³_{s,t} follows arguing exactly as in [13, Lemma 3.4] and leads us to the corresponding bounds. For what concerns I²_{s,t}, with α ∈ [1/2, 1] (the case α ∈ [0, 1/2] being easier), we have by Hölder's inequality and relation (10) that … Next, by the Sobolev embedding theorem and interpolation inequalities, …, which, combined with the estimates in Lemma 10, yields … Lastly, we need to deal with I⁴_{s,t}. Recall that θ ∈ ℓ²(Z²₀) fulfills ‖θ‖_{ℓ²} = 1, and ‖σ_k‖_{L∞} = √2; by the Burkholder-Davis-Gundy inequality and estimate (7), … Then, similarly to the treatment of I²_{s,t}, by the Sobolev embedding theorem, interpolation inequalities and Lemma 10 we have … Combining the estimates, the thesis follows.

By Theorem 6, a set bounded in L²(0,T; H¹) ∩ W^{s,r}(0,T; H^{−γ}) is relatively compact in L²(0,T; H^{1−δ}) for each δ > 0 if s > 0, γ > 0, r ≥ 2. On the other side, given δ > 0, if p > r₁/(δ(s₁r₁ − 1)(β − δ)), a set bounded in L^p(0,T; L²) ∩ W^{s₁,r₁}(0,T; H^{−β}) with s₁r₁ > 1 is relatively compact in C(0,T; H^{−δ}). Since by Lemma 10 we can take p arbitrarily large, it is enough to show the boundedness of {ω^n}_n in W^{s₁,r₁}(0,T; H^{−β}) for some β. This is guaranteed by the lemma below.

Lemma 12. If β > 3 + 2/r₁, s₁ < 1/2, s₁r₁ > 1, there exists a constant C independent of n such that …

Proof. Thanks to Lemma 10 we need just to consider the W^{s₁,r₁}-seminorm. By the Fubini theorem it follows that … Let us understand better E‖ω^n_t − ω^n_s‖^{r₁}_{H^{−β}}: by the definition of the Sobolev norms and Hölder's inequality, … Thanks to Lemma 11, we have … The proof is complete.
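For orientation, the cancellations and the dissipation mechanism behind the Lemma 10 estimate can be condensed as follows (a formal sketch; periodicity and div σ_k = 0 are used throughout):

```latex
% Ito formula for \tfrac12\|\omega^n_t\|_{L^2}^2 along (11), formally:
\langle u^n\cdot\nabla\omega^n,\ \omega^n\rangle = 0
  \quad\text{(incompressibility)},
\qquad
\langle \sigma_k\cdot\nabla f(\omega^n),\ \omega^n\rangle
  = -\int_{\mathbb{T}^2}\sigma_k\cdot\nabla F(\omega^n)\,dx = 0
  \quad (F' = f),
```
```latex
\nu\,\langle \Delta\omega^n,\ \omega^n\rangle = -\nu\,\|\nabla\omega^n\|_{L^2}^2,
\qquad
\langle \Delta g(\omega^n),\ \omega^n\rangle
  = -\int_{\mathbb{T}^2} g'(\omega^n)\,|\nabla\omega^n|^2\,dx \;\le\; 0,
```

so the transport and martingale terms drop out, both the viscosity and the g-term are dissipative (g′ ≥ 0), and only the Itô corrector of the noise remains to be estimated, which is where (8) enters.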
Combining Lemma 12 with Theorems 6 and 7, we have the following tightness result by Markov's inequality (Corollary 13).

Passage to the limit

Arguing as in [13], by Skorohod's representation theorem, we can find, up to passing to subsequences, an auxiliary probability space, that for simplicity we continue to call (Ω, F, P), and processes (ω^n, W^n), (ω, W) on it, with the same laws as the original ones, such that ω^n → ω P-a.s. in L²(0,T; H^{1−}) ∩ C([0,T]; H^−) and W^n → W P-a.s. Of course, the convergence above between W^n and W can be seen as the uniform convergence of the cylindrical Wiener processes W^n = Σ_{k∈Z²₀} e_k W^{n,k}, W = Σ_{k∈Z²₀} e_k W^k on a suitable Hilbert space U₀. Before going on, in order to identify ω as a weak solution of equation (11), we need further integrability properties of ω. The proof of the proposition below is analogous to Lemma 3.5 in [13]; therefore we will omit the details in these notes.

Proposition 14. The process ω has weakly continuous trajectories on L²(T²) and satisfies the same bounds as in Lemma 10.

Now we are ready to prove Theorem 2.

Proof of Theorem 2. Let φ ∈ Π_M(L²(T²)); by classical arguments, for each n ≥ M, ω^n satisfies the following weak formulation P-a.s. for all t ∈ [0,T]: … Therefore we will show, up to passing to a subsequence, the P-a.s. convergence of all the terms appearing above, uniformly in time. Indeed, ⟨ω^n_t, φ⟩ → ⟨ω_t, φ⟩ uniformly in time, and similarly for the initial conditions. Next,

∫₀^T ‖ω_s − ω^n_s‖ ds → 0  P-a.s.  (20)

due to the almost sure convergence in L²(0,T; H^{1−}). Moreover, the nonlinear terms converge, again due to the almost sure convergence in L²(0,T; H^{1−}). Thanks to relation (10) it follows that the g-terms can be split as I₁ + I₂, where … Let us show that, P-a.s., both I₁ and I₂ tend to 0. We can control I₁ thanks to the Hölder inequality, the Sobolev embedding theorem and interpolation inequalities: … By Lemma 9, we have for α ∈ (1/2, 1] (the other case being easier) that … For what concerns I₂, similar arguments and the Hölder continuity of r ↦ r^α lead to … By interpolation and Hölder's inequality, … In order to deal with the stochastic integral, we apply Lemma 8. Since we have the convergence of the Wiener processes, it is enough to show that, P-a.s. and therefore in probability, … The relation above is true: indeed, recalling that ‖σ_k‖_{L∞} = √2 (for all k ∈ Z²₀), that Σ_{k∈Z²₀} θ_k² = 1, and relation (6), we have …, where ‖·‖ is the norm in L²(0,T; L²(T²)). By Cauchy's inequality, … Therefore, by Lemma 8, up to passing to a subsequence, uniformly in time, … Combining relations (19), (20), (21), (22), (23), (24), (26) we have, P-a.s. for all t ∈ [0,T], the weak formulation (27). By a standard density argument we can find a zero measure set N such that on its complement relation (27) holds for each φ ∈ C^∞(T²).

Scaling limit

Let now {θ^N}_N be a sequence in ℓ²(Z²₀), each satisfying the conditions (5) and moreover

lim_{N→+∞} ‖θ^N‖_{ℓ∞} = 0;   (28)

let ω^N be an analytically weak martingale solution, in the sense of Definition 1, of system (11) with θ^N in place of θ. The existence of such a solution for each N ∈ N is guaranteed by Theorem 2 above. Of course the probability space and the Brownian motions depend on N; however, with some abuse of notation, we do not stress this dependence. Arguing as in Section 4, we will show the tightness of the law of ω^N in C([0,T]; H^−) ∩ L²(0,T; H^{1−}). This will allow us to prove Theorem 5 following the same ideas of Section 4.

Tightness

The way of showing the tightness is completely analogous to Section 4, thanks to Proposition 14. Therefore we just sketch the argument. We start with the lemma below.

Lemma 15. For each M ∈ N, there exists a constant C independent of N such that for any s, t with 0 ≤ s ≤ t ≤ T, it holds …

Proof.
From the weak formulation satisfied by ω N it follows that All the terms above can be treated analogously to Lemma 11, leading us to the following estimates: Combining them the thesis follows immediately. Thanks to the discussion before Lemma 12 in order to obtain the required tightness in L 2 ([0, T ]; H 1− ) ∩ C([0, T ]; H − ) we need the following result. Lemma 16 If β > 3 + 2 r1 , s 1 < 1 2 , s 1 r 1 > 1 and p > 1, there exists a constant C independent of N such that We omit its proof since it is just a computation based on the definition of the Sobolev norms and the estimate guaranteed by Lemma 15. Combining the lemma above with Theorems 6, 7 we have the following tightness result. Passage to the limit The preliminary part in order to showing the convergence is analogous to Subsection 4.2. Arguing as in [13], by Skorohod's representation theorem, we can find, up to passing to subsequences, an auxiliary probability space, that for simplicity we continue to call (Ω, F , P), and processes (ω N , The convergence above fromW N toW can be seen as the uniform convergence of cylindrical Wiener processesW N = k∈Z 2 0 e kW N,k ,W = k∈Z 2 0 e kW k on a suitable Hilbert space U 0 . Before going on, in order to identify ω as a random variable supported on the weak solutions of equation (11) we need further integrability properties of ω. The proof of the proposition below is analogous to Proposition 14, therefore we will omit the details. Proposition 18 The process ω has weakly continuous trajectories on L 2 (T 2 ) and satisfies Before exploiting the convergence properties of ω N , we are interested in showing the uniqueness of weak solutions of (13). The approach we follow is the so called H −1 -method for active scalars, see for example Theorem 2 and Theorem 5 in [1] for other applications of this method. Lemma 19 There exists at most one solution of (13) in the sense of definition 4. We remark that ∇K : L 2 → L 2 and K div : L 4 → L 4 are bounded operators, hence Since both ω andω belong to C w (0, T ; L 2 (T 2 )), by Grönwall's inequality the thesis follows. Now we are ready to provide the proof of our main theorem. Proof of Theorem 5. Let φ ∈ C ∞ (T 2 ), by classical arguments for each N ∈ N,ω N satisfies the following weak formulation: P-a.s. for all t ∈ [0, T ], Up to passing to a further subsequence, we will show the P-a.s. convergence, uniformly in time, of all the terms appearing above, except the martingale part; this is the only term that will present some differences with respect to the proof of Theorem 2. Therefore, we omit the treatments of the other terms which are similar to the proof of Theorem 2, and concentrate on the martingale part which will be shown to vanish in the limit, uniformly in time. In order to deal with the stochastic integral, applying Burkholder-Davis-Gundy inequality and using the fact that {σ k } k is an orthonormal family of vector fields, we obtain Then by relation (7) and Sobolev embedding theorem, and, using interpolation inequalities and (28) yields Summarizing the above arguments we arrive at ω t , φ − ω 0 , φ = ν t 0 ω s , ∆φ ds + t 0 g(ω s ), ∆φ ds By standard density argument we can find a zero measure set N such that on its complementary relation (30) holds for each φ ∈ C ∞ . By Corollary 17 and Lemma 19, every subsequence L(ω N k ) admits a sub-subsequence which converges to the unique limit point δ ω , where ω is the unique deterministic solution of (13). Then, for example by [4, Theorem 2.6], the whole sequence L(ω N ) converges weakly to δ ω . 
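Before stating the corollary, it may help to record schematically why the martingale part vanishes in the scaling limit, since this is the step that distinguishes the proof from that of Theorem 2. The display is our sketch, not a formula from the source: M^N_t(phi) denotes the stochastic integral tested against phi, and gamma > 0 stands for whatever positive power the interpolation argument above produces.

```latex
% Schematic only (our notation, assuming amsmath/amssymb): Burkholder-Davis-Gundy
% and the uniform energy bounds leave a factor controlled by the sup-norm of
% \theta^N, which vanishes as N \to \infty by assumption (28).
\[
  \mathbb{E}\Bigl[\, \sup_{t \in [0,T]} \bigl| M^{N}_{t}(\phi) \bigr| \Bigr]
  \;\le\; C(\phi, T)\, \|\theta^{N}\|_{\ell^\infty}^{\gamma}
  \;\longrightarrow\; 0 \quad (N \to \infty).
\]
```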
As a corollary of Lemma 19 and Theorem 5 we have the following result. Corollary 20 There exists a unique solution of (13) in the sense of Definition 4.
2023-02-28T06:42:16.822Z
2023-02-27T00:00:00.000
{ "year": 2023, "sha1": "3cba2b4372b7a4e1e72852a3e3d02b47290fc9b0", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "3cba2b4372b7a4e1e72852a3e3d02b47290fc9b0", "s2fieldsofstudy": [ "Physics", "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
524834
pes2o/s2orc
v3-fos-license
Crowdsourcing Inference-Rule Evaluation The importance of inference rules to semantic applications has long been recognized and extensive work has been carried out to automatically acquire inference-rule resources. However, evaluating such resources has turned out to be a non-trivial task, slowing progress in the field. In this paper, we suggest a framework for evaluating inference-rule resources. Our framework simplifies a previously proposed “instance-based evaluation” method that involved substantial annotator training, making it suitable for crowdsourcing. We show that our method produces a large amount of annotations with high inter-annotator agreement for a low cost at a short period of time, without requiring training expert annotators. Introduction Inference rules are an important component in semantic applications, such as Question Answering (QA) (Ravichandran and Hovy, 2002) and Information Extraction (IE) (Shinyama and Sekine, 2006), describing a directional inference relation between two text patterns with variables. For example, to answer the question 'Where was Reagan raised?' a QA system can use the rule 'X brought up in Y→X raised in Y' to extract the answer from 'Reagan was brought up in Dixon'. Similarly, an IE system can use the rule 'X work as Y→X hired as Y' to extract the PERSON and ROLE entities in the "hiring" event from 'Bob worked as an analyst for Dell'. The significance of inference rules has led to substantial effort into developing algorithms that automatically learn inference rules (Lin and Pantel, 2001;Sekine, 2005;Schoenmackers et al., 2010), and generate knowledge resources for inference systems. However, despite their potential, utilization of inference rule resources is currently somewhat limited. This is largely due to the fact that these algorithms often produce invalid rules. Thus, evaluation is necessary both for resource developers as well as for inference system developers who want to asses the quality of each resource. Unfortunately, as evaluating inference rules is hard and costly, there is no clear evaluation standard, and this has become a slowing factor for progress in the field. One option for evaluating inference rule resources is to measure their impact on an end task, as that is what ultimately interests an inference system developer. However, this is often problematic since inference systems have many components that address multiple phenomena, and thus it is hard to assess the effect of a single resource. An example is the Recognizing Textual Entailment (RTE) framework (Dagan et al., 2009), in which given a text T and a textual hypothesis H, a system determines whether H can be inferred from T. This type of evaluation was established in RTE challenges by ablation tests (see RTE ablation tests in ACLWiki) and showed that resources' impact can vary considerably from one system to another. These issues have also been noted by Sammons et al. (2010) and LoBue and Yates (2011). A complementary application-independent evaluation method is hence necessary. Some attempts were made to let annotators judge rule correctness directly, that is by asking them to judge the correctness of a given rule (Shinyama et al., 2002;Sekine, 2005). However, Szpektor et al. (2007) observed that directly judging rules out of context often results in low inter-annotator agreement. To remedy that, Szpektor et al. (2007) and Bhagat et al. 
(2007) proposed "instance-based evaluation", in which annotators are presented with an application of a rule in a particular context and need to judge whether it results in a valid inference. This simulates the utility of rules in an application and yields high inter-annotator agreement. Unfortunately, their method requires lengthy guidelines and substantial annotator training effort, which are time consuming and costly. Thus, a simple, robust and replicable evaluation method is needed. Recently, crowdsourcing services such as Amazon Mechanical Turk (AMT) and CrowdFlower (CF) 1 have been employed for semantic inference annotation (Snow et al., 2008;Wang and Callison-Burch, 2010;Mehdad et al., 2010;Negri et al., 2011). These works focused on generating and annotating RTE text-hypothesis pairs, but did not address annotation and evaluation of inference rules. In this paper, we propose a novel instance-based evaluation framework for inference rules that takes advantage of crowdsourcing. Our method substantially simplifies annotation of rule applications and avoids annotator training completely. The novelty in our framework is two-fold: (1) We simplify instance-based evaluation from a complex decision scenario to two independent binary decisions. (2) We apply methodological principles that efficiently communicate the definition of the "inference" relation to untrained crowdsourcing workers (Turkers). As a case study, we applied our method to evaluate algorithms for learning inference rules between predicates. We show that we can produce many annotations cheaply, quickly, at good quality, while achieving high inter-annotator agreement. Evaluating Rule Applications As mentioned, in instance-based evaluation individual rule applications are judged rather than rules in isolation, and the quality of a rule-resource is then evaluated by the validity of a sample of applications of its rules. Rule application is performed by finding an instantiation of the rule left-hand-side in a corpus (termed LHS extraction) and then applying the rule on the extraction to produce an instantiation of the rule right-hand-side (termed RHS instantiation). For example, the rule 'X observe Y→X celebrate Y' 1 https://www.mturk.com and http://crowdflower.com can be applied on the LHS extraction 'they observe holidays' to produce the RHS instantiation 'they celebrate holidays'. The target of evaluation is to judge whether each rule application is valid or not. Following the standard RTE task definition, a rule application is considered valid if a human reading the LHS extraction is highly likely to infer that the RHS instantiation is true (Dagan et al., 2009). In the aforementioned example, the annotator is expected to judge that 'they observe holidays' entails 'they celebrate holidays'. In addition to this straightforward case, two more subtle situations may arise. The first is that the LHS extraction is meaningless. We regard a proposition as meaningful if a human can easily understand its meaning (despite some simple grammatical errors). A meaningless LHS extraction usually occurs due to a faulty extraction process (e.g., Table 1, Example 2) and was relatively rare in our case study (4% of output, see Section 4). Such rule applications can either be extracted from the sample so that the rule-base is not penalized (since the problem is in the extraction procedure), or can be used as examples of non-entailment, if we are interested in overall performance. 
A second situation is a meaningless RHS instantiation, usually caused by rule application in a wrong context. This case is tagged as non-entailment (for example, applying the rule 'X observe Y→X celebrate Y' in the context of the extraction 'companies observe dress code'). Each rule application therefore requires an answer to the following three questions: 1) Is the LHS extraction meaningful? 2) Is the RHS instantiation meaningful? 3) If both are meaningful, does the LHS extraction entail the RHS instantiation? Crowdsourcing Previous works using crowdsourcing noted some principles to help get the most out of the service (Wang et al., 2012). In keeping with these findings we employ the following principles: (a) Simple tasks. The global task is split into simple sub-tasks, each dealing with a single aspect of the problem. (b) Do not assume linguistic knowledge by annotators. Task descriptions avoid linguistic terms such as "tense", which confuse workers. (c) Gold standard validation. Using CF's built-in methodology, We split the annotation process into two tasks, the first to judge phrase meaningfulness (Questions 1 and 2 above) and the second to judge entailment (Question 3 above). In Task 1, the LHS extractions and RHS instantiations of all rule applications are separated and presented to different Turkers independently of one another. This task is simple, quick and cheap and allows Turkers to focus on the single aspect of judging phrase meaningfulness. Rule applications for which both the LHS extraction and RHS instantiation are judged as meaningful are passed to Task 2, where Turkers need to decide whether a given rule application is valid. If not for Task 1, Turkers would need to distinguish in Task 2 between non-entailment due to (1) an incorrect rule (2) a meaningless RHS instantiation (3) a meaningless LHS extraction. Thanks to Task 1, Turkers are presented in Task 2 with two meaningful phrases and need to decide only whether one entails the other. To ensure high quality output, each example is evaluated by three Turkers. Similarly to Mehdad et al. (2010) we only use results for which the confidence value provided by CF is greater than 70%. We now describe the details of both tasks. Our simplification contrasts with Szpektor et al. (2007), whose judgments for each rule application are similar to ours, but had to be performed simultaneously by annotators, which required substantial training. Task 1: Is the phrase meaningful? In keeping with the second principle above, the task description is made up of a short verbal explanation followed by positive and negative examples. The definition of "meaningfulness" is conveyed via examples pointing to properties of the automatic phrase extraction process, as seen in Table 1. Task 2: Judge if one phrase is true given another. As mentioned, rule applications for which both sides were judged as meaningful are evaluated for entail-ment. The challenge is to communicate the definition of "entailment" to Turkers. To that end the task description begins with a short explanation followed by "easy" and "hard" examples with explanations, covering a variety of positive and negative entailment "types" (Table 2). Defining "entailment" is quite difficult when dealing with expert annotators and still more with nonexperts, as was noted by Negri et al. (2011). We therefore employ several additional mechanisms to get the definition of entailment across to Turkers and increase agreement with the GS. 
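To make the two-task pipeline concrete, the sketch below shows how Task 1 and Task 2 judgments could be combined into a final label, including the option of not penalizing the rule base for meaningless LHS extractions and the 70% confidence cut-off used throughout. All field names are our own illustrative choices and do not reflect CrowdFlower's actual output schema:

```python
# Hedged sketch of the two-task filtering pipeline; field names are hypothetical.
from dataclasses import dataclass
from typing import Optional

CONFIDENCE_THRESHOLD = 0.7  # only keep judgments CF reports with confidence >= 0.7

@dataclass
class RuleApplication:
    lhs_extraction: str               # e.g. "they observe holidays"
    rhs_instantiation: str            # e.g. "they celebrate holidays"
    meaningful_lhs: bool              # Task 1 judgment on the LHS extraction
    meaningful_rhs: bool              # Task 1 judgment on the RHS instantiation
    task1_confidence: float           # aggregated CF confidence for Task 1
    entailment: Optional[bool] = None # Task 2 judgment, if the pair reached Task 2
    task2_confidence: float = 0.0     # aggregated CF confidence for Task 2

def final_label(app: RuleApplication) -> Optional[str]:
    """Return 'entailment', 'non-entailment', 'discard-lhs', or None (low confidence)."""
    if app.task1_confidence < CONFIDENCE_THRESHOLD:
        return None                   # unreliable Task 1 judgment
    if not app.meaningful_lhs:
        # faulty extraction: do not penalize the rule
        # (alternatively counted as non-entailment for overall performance)
        return "discard-lhs"
    if not app.meaningful_rhs:
        return "non-entailment"       # rule applied in a wrong context
    if app.entailment is None or app.task2_confidence < CONFIDENCE_THRESHOLD:
        return None                   # no reliable entailment judgment
    return "entailment" if app.entailment else "non-entailment"
```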
We run an initial small test run and use its output to improve annotation in two ways: First, we take examples that were "confusing" for Turkers and add them to the GS with explanatory feedback presented when a Turker answers incorrectly. (E.g., the pair ('The owner be happy to help drivers', 'The owner assist drivers') was judged as entailing in the test run but only achieved a confidence value of 0.53). Second, we add examples that were annotated unanimously by Turkers to the GS to increase its size, allowing CF to better estimate Turker's reliability (following CF recommendations, we aim to have around 10% GS examples in every run). In Section 4 we show that these mechanisms improved annotation quality. Case Study As a case study, we used our evaluation methodology to compare four methods for learning entailment rules between predicates: DIRT (Lin and Pantel, 2001), Cover (Weeds and Weir, 2003), BInc (Szpektor and Dagan, 2008) and Berant et al. (2010). To that end, we applied the methods on a set of one billion extractions (generously provided by Fader et al. (2011)) automatically extracted from the ClueWeb09 web crawl 2 , where each extraction comprises a predicate and two arguments. This resulted in four learned inference rule resources. We randomly sampled 5,000 extractions, and for each one sampled four rules whose LHS matches the extraction from the union of the learned resources. We then applied the rules, which resulted in 20,000 rule applications. We annotated rule applications using our methodology and evaluated each learning method by comparing the rules learned by each method with the annotation generated by CF. In Task 1, 281 rule applications were annotated as meaningless LHS extraction, and 1,012 were annotated as meaningful LHS extraction but meaningless RHS instantiation and so automatically annotated as non-entailment. 8,264 rule applications were passed on to Task 2, as both sides were judged meaningful (the remaining 10,443 discarded due to low CF confidence). In Task 2, 5,555 rule applications were judged with a high confidence and supplied as output, 2,447 of them as positive entailment and 3,108 as negative. Overall, 6,567 rule applications (dataset of this paper) were annotated for a total cost of $1000. The annotation process took about one week. In tests run during development we experimented with Task 2 wording and GS examples, seeking to make the definition of entailment as clear as possible. To do so we randomly sampled and manually annotated 200 rule applications (from the initial 20,000), and had Turkers judge them. In our initial test, Turkers tended to answer "yes" comparing to our own annotation, with 0.79 agreement between their annotation and ours, corresponding to a kappa score of 0.54. After applying the mechanisms described in Section 3, false-positive rate was reduced from 18% to 6% while false-negative rate only increased from 4% to 5%, corresponding to a high agreement of 0.9 and kappa of 0.79. In our test, 63% of the 200 rule applications were annotated unanimously by the Turkers. Importantly, all these examples were in perfect agreement with our own annotation, reflecting their high reliability. For the purpose of evaluating the resources learned by the algorithms we used annotations with CF confidence ≥ 0.7 for which kappa is 0.99. Lastly, we computed the area under the recallprecision curve (AUC) for DIRT, Cover, BInc and Berant et al.'s method, resulting in an AUC of 0.4, 0.43, 0.44, and 0.52 respectively. 
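These AUC figures can be reproduced from any list of recall-precision points by trapezoidal integration; a minimal sketch follows (the function and the toy numbers are ours, not the authors' code or data):

```python
# Hedged sketch: area under a recall-precision curve via trapezoidal integration.
# `points` are (recall, precision) pairs obtained by sweeping a score threshold.
def recall_precision_auc(points: list[tuple[float, float]]) -> float:
    pts = sorted(points)                    # sort by recall
    auc = 0.0
    for (r0, p0), (r1, p1) in zip(pts, pts[1:]):
        auc += (r1 - r0) * (p0 + p1) / 2.0  # trapezoid between consecutive points
    return auc

# Example with a toy curve (not the paper's actual data):
print(recall_precision_auc([(0.0, 1.0), (0.5, 0.6), (1.0, 0.4)]))  # 0.65
```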
We used the AUC, computed from recall-precision curves with points in the order of thousands, to avoid tuning a threshold parameter. Overall, we demonstrated that our evaluation framework allowed us to compare four different learning methods at low cost and within one week. Discussion In this paper we have suggested a crowdsourcing framework for evaluating inference rules. We have shown that by simplifying the previously proposed instance-based evaluation framework we are able to take advantage of crowdsourcing services to replace trained expert annotators, resulting in good-quality, large-scale annotations at reasonable time and cost. We have presented the methodological principles we developed to get the entailment decision across to Turkers, achieving very high agreement both with our annotations and between the annotators themselves. Using the CrowdFlower forms we provide with this paper, the proposed methodology can be beneficial both for resource developers evaluating their output and for inference-system developers wanting to assess the quality of existing resources.
2014-07-01T00:00:00.000Z
2012-07-08T00:00:00.000
{ "year": 2012, "sha1": "3f6f3f3549a023d328512ce94a51cfa1f13d57f7", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ACL", "pdf_hash": "3eeeec97887c94c163d19564ddbe49287e80eb8f", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
268287689
pes2o/s2orc
v3-fos-license
Assessment of pathogenicity and functional characterization of APPL1 gene mutations in diabetic patients BACKGROUND Adaptor protein, phosphotyrosine interacting with PH domain and leucine zipper 1 (APPL1) plays a crucial role in regulating insulin signaling and glucose metabolism. Mutations in the APPL1 gene have been associated with the development of maturity-onset diabetes of the young type 14 (MODY14). Currently, only two mutations [c.1655T>A (p.Leu552*) and c.281G>A p.(Asp94Asn)] have been identified in association with this disease. Given the limited understanding of MODY14, it is imperative to identify additional cases and carry out comprehensive research on MODY14 and APPL1 mutations. AIM To assess the pathogenicity of APPL1 gene mutations in diabetic patients and to characterize the functional role of the APPL1 domain. METHODS Patients exhibiting clinical signs and a medical history suggestive of MODY were screened for the study. Whole exome sequencing was performed on the patients as well as their family members. The pathogenicity of the identified APPL1 variants was predicted on the basis of bioinformatics analysis. In addition, the pathogenicity of the novel APPL1 variant was preliminarily evaluated through in vitro functional experiments. Finally, the impact of these variants on APPL1 protein expression and the insulin pathway were assessed, and the potential mechanism underlying the interaction between the APPL1 protein and the insulin receptor was further explored. RESULTS A total of five novel mutations were identified, including four missense mutations (Asp632Tyr, Arg633His, Arg532Gln, and Ile642Met) and one intronic mutation (1153-16A>T). Pathogenicity prediction analysis revealed that the Arg532Gln was pathogenic across all predictions. The Asp632Tyr and Arg633His variants also had pathogenicity based on MutationTaster. In addition, multiple alignment of amino acid sequences showed that the Arg532Gln, Asp632Tyr, and Arg633His variants were conserved across different species. Moreover, in in vitro functional experiments, both the c.1894G>T (at Asp632Tyr) and c.1595G>A (at Arg532Gln) mutations were found to downregulate the expression of APPL1 on both protein and mRNA levels, indicating their pathogenic nature. Therefore, based on the patient’s clinical and family history, combined with the results from bioinformatics analysis and functional experiment, the c.1894G>T (at Asp632Tyr) and c.1595G>A (at Arg532Gln) mutations were classified as pathogenic mutations. Importantly, all these mutations were located within the phosphotyrosine-binding domain of APPL1, which plays a critical role in the insulin sensitization effect. CONCLUSION This study provided new insights into the pathogenicity of APPL1 gene mutations in diabetes and revealed a potential target for the diagnosis and treatment of the disease. BACKGROUND Adaptor protein, phosphotyrosine interacting with PH domain and leucine zipper 1 (APPL1) plays a crucial role in regulating insulin signaling and glucose metabolism.Mutations in the APPL1 gene have been associated with the development of maturity-onset diabetes of the young type 14 (MODY14).Currently, only two mutations [c.1655T>A (p.Leu552*) and c.281G>A p.(Asp94Asn)] have been identified in association with this disease.Given the limited understanding of MODY14, it is imperative to identify additional cases and carry out comprehensive research on MODY14 and APPL1 mutations. 
AIM To assess the pathogenicity of APPL1 gene mutations in diabetic patients and to characterize the functional role of the APPL1 domain. METHODS Patients exhibiting clinical signs and a medical history suggestive of MODY were screened for the study.Whole exome sequencing was performed on the patients as well as their family members.The pathogenicity of the identified APPL1 variants was predicted on the basis of bioinformatics analysis.In addition, the pathogenicity of the novel APPL1 variant was preliminarily evaluated through in vitro functional experiments.Finally, the impact of these variants on APPL1 protein expression and the insulin pathway were assessed, and the potential mechanism underlying the interaction between the APPL1 protein and the insulin receptor was further explored. INTRODUCTION Maturity-onset diabetes of the young (MODY) is a rare form of hereditary monogenic diabetes caused by single gene mutations, constituting approximately 1%-2% of all diabetes cases [1,2].A total of 14 MODY phenotypes have been identified, exhibiting significant heterogeneity in their clinical presentations.Notably, approximately 80% of MODY cases are initially misdiagnosed as either type 1 diabetes mellitus (T1DM) or T2DM [3,4]. MODY14, characterized by mutations in the adaptor protein, phosphotyrosine interacting with PH domain and leucine zipper 1 (APPL1) gene, represents one of the least known MODY subtypes.Currently, only two related mutations have been reported, namely [c.1655T>A (p.Leu552*) and c.281G>A p.(Asp94Asn)] [5][6][7].To date, our understanding of MODY14 remains limited.To enhance our comprehension of MODY14 and APPL1 mutations, it is crucial to identify additional cases, conduct comprehensive research, and consolidate knowledge in this field.By doing so, we can gain a deeper understanding of the underlying mechanisms and clinical implications of MODY14, ultimately paving the way for improved diagnostic and therapeutic strategies. The APPL1 gene, situated on chromosome 3p14.3,has 23 exons [8].It exhibits widespread expression in numerous human tissues, such as pancreas, liver, adipose tissue, brain, and muscle [9,10].APPL1 serves as a multifunctional adaptor protein, playing an important role in distinct signal transduction and membrane trafficking pathways.Structurally, it contains three primary domains: A Bin-Amphiphysin-Rvs (BAR) domain; a pleckstrin homology (PH) domain; and a phosphotyrosine-binding (PTB) domain [11].These domains facilitate interactions with various signaling molecules and receptors, thereby regulating intracellular signaling pathways.The BAR domain can recognize and deform membranes with curvature and regulate intracellular trafficking and vesicle formation [12].The PH domain can bind to phosphoinositides, such as phosphatidylinositol-3,4,5-trisphosphate, and target APPL1 to the plasma membrane, where it participates in various signaling pathways [13].Meanwhile, the PTB domain can interact with adiponectin receptors 1/2, tropomyosin receptor kinase A, and other molecules, mediating intracellular signal transduction [14][15][16]. 
Current studies indicate that APPL1 is an important mediator of insulin sensitization.APPL1 can facilitate the binding of insulin receptor (IR) substrates (IRS) to IR, thereby activating PI3K/Akt signaling pathway and augmenting insulin sensitivity [17].Notably, in this process, the PTB domain can interact with IR and promote insulin signal transduction.In addition, APPL1 participates in adiponectin signaling by binding to adiponectin receptors, thereby enhancing lipid oxidation and glucose uptake [18,19].In summary, further exploration of the interaction and regulatory network of APPL1 with other signaling molecules is warranted.Further, more clinical evidence is required to elucidate the precise role and underlying of APPL1 in diabetes and other metabolic diseases. In this study, we identified five novel APPL1 mutations, including four missense mutations and one intron mutation.To enhance our understanding of MODY14, we performed bioinformatics analysis and in vitro experiments to characterize the functional impact of these mutations.Based on the experimental results and literature review, we discussed their implications for diagnosis, treatment, and molecular pathogenesis.Notably, this article was the first to report cases of MODY14 in Asia on an international scale.Moreover, our study has identified the largest number of APPL1 mutations, providing important data for APPL1 mutation research.By enriching the gene database of MODY14, our discoveries provide new insights into the molecular mechanism and clinical management of this rare diabetes, ultimately guiding optimal treatment strategies, prognosis predictions, and genetic counseling for affected families. Study design The purpose of this study was to determine the pathogenic status of suspected MODY diabetes in patients and evaluate the effects of novel APPL1 mutations on disease development.We performed whole-exome sequencing (WES) to identify patients carrying APPL1 gene mutations and conducted bioinformatics analysis of these mutations.Then, we conducted in vitro experiments to verify the pathogenicity of these mutations.Finally, the molecular mechanisms and signaling pathways involved in MODY pathogenesis were elucidated in this study. Ethical considerations This study adhered to the ethical principles outlined in the Declaration of Helsinki of 1964, along with its subsequent revisions and equivalent ethical standards.Prior to participation, informed consent was obtained from each participating patient or their legal guardian.The Ethics Committee of Shandong Provincial Hospital approved this study. Patients The study cohort consisted of 5 patients from five pedigrees.Patients meeting any of the following criteria were enrolled for WES: Younger than 30-years-old; had a family history of diabetes; or had negative insulin antibodies.Then, the clinical history and blood samples of patients were collected for further pathogenicity analysis. Mutation analysis Genomic DNA was extracted from blood leucocytes from all study participants using Tiangen Biotech DNA kit.We performed WES on blood DNA and applied the SeqCap EZ MedExome Target Enrichment Kit (Roche NimbleGen) to capture human exons and adjacent introns after fragmenting, ligating, amplifying, and purifying genomic DNA.DNA sequencing was carried out using Illumina HiSeq platform, and the resulting data were aligned to the Hg19 reference genome.Mutation calls were made using NextGENe.The identified mutations were further verified by Sanger sequencing. 
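To make the variant-screening step concrete, a minimal post-processing sketch is shown below. It only illustrates the kind of filtering that typically follows alignment and mutation calling; the record layout is invented for illustration and is not the NextGENe output format:

```python
# Hedged sketch of post-calling variant filtering; the dict layout is hypothetical.
def candidate_appl1_variants(variants: list[dict]) -> list[dict]:
    """Keep APPL1 variants that change the protein or sit near a splice site."""
    kept = []
    for v in variants:
        if v["gene"] != "APPL1":
            continue
        if v["consequence"] in {"missense", "stop_gained", "splice_region", "intronic"}:
            kept.append(v)
    return kept

# Example record for the c.1595G>A (p.Arg532Gln) mutation described above:
example = {"gene": "APPL1", "hgvs_c": "c.1595G>A", "hgvs_p": "p.Arg532Gln",
           "consequence": "missense", "sanger_confirmed": True}
print(candidate_appl1_variants([example]))
```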
Plasmid construction and transfection WT and mutant human APPL1 plasmids (transcript ID: NM_012096.3)were generated by the transient overexpression vector GV141 (GeneChem, China) [20].HEK293 cells were transfected with the plasmids and cultured in complete medium supplemented with 10% fetal bovine serum (Excell, FSD500, South America), penicillin, and streptomycin.Cells were seeded in six-well plates once they reached 80%-90% confluence.Transfection was performed when the degree of cell fusion reached 70%-90%.We added 2 μg of corresponding plasmids to each well of the six-well plate and transfected them into HEK293 cells.The transfection operation was performed followed the instructions of the Lipofectamine 3000 (Invitrogen, American) transfection kit.To ensure optimal transfection efficiency, the process was carried out on a sterile bench (SW-CJ-IC dual person purification workbench). Real-time PCR After transfection, cells were collected after 24 h and lysed with Trizol (TaKaRa, Japan).Chloroform was added to separate RNA from DNA and proteins.The RNA was precipitated with isopropanol and washed several times with 75% ethanol.The RNA concentration was measured by nanodrop software after extraction.To convert mRNA into complementary DNA, reverse transcription was performed following the instructions of the reverse transcription kit manual (TaKaRa, Japan) using Mastercycler5333 PCR instrument (Eppendorf, Germany).Next, Bestar SybrGreen qPCR mastermix, PCR Forward Primer, PCR Reverse Primer, DNA template, and ddH 2 O were mixed well in a 96-well plate.Finally, qPCR was performed on a real-time fluorescence quantitative PCR instrument (Roche, United States). Immunoblot analysis RIPA and PMSF (Shanghai Shenneng Gaming Company, China) were mixed at a ratio of 99:1 in the six-well plate after 48 h of transfection.Lysis buffer was added to each well, and the cells were scraped with a cell scraper.The lysate was transferred to EP tubes and incubated on ice for 20 min.The lysate was centrifuged for 15 min to extract protein.The protein concentration was determined using an enzyme-linked immunosorbent assay.Then, loading buffer was added to the protein samples and boiled for 10 min.Proteins were separated with different molecular weights by electrophoresis on a 10% SDS-polyacrylamide gel.The membrane was transferred and blocked in 5% milk (skimmed milk powder purchased from Yili Group, China) for 1 h.Next, primary antibodies (Flag mouse anti 1:1000, β-actin mouse anti 1:7500) were added overnight at 4 °C.After recovering the primary antibodies, the membrane was washed with TBST for 10 min × 3 times.The secondary antibodies (mouse anti 1:5000) were added and incubated for 1 h.After washing the film, it was developed under the Alpha Fluorochem Q imaging analysis system (United States). Statistical analysis The experimental data was analyzed using SPSS software (Version 25.0).Measurement data were presented as mean ± SD and analyzed using an independent samples t-test.Statistical significance was defined as a P value < 0.05. 
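As an illustration of how the qPCR readout and the stated statistics could be combined, a minimal sketch follows. The 2^-ΔΔCt normalization against β-actin is our assumption (the source does not state which quantification formula was used); the independent-samples t-test and the P < 0.05 threshold are those named above, and the Ct values are invented:

```python
# Hedged sketch: relative expression via 2^-(ΔΔCt) (our assumption) plus the
# independent-samples t-test stated in the Methods; Ct values are toy numbers.
import numpy as np
from scipy import stats

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """2^-(ΔΔCt): ΔCt = Ct(target) - Ct(reference), normalized to the control group."""
    d_ct = np.asarray(ct_target) - np.asarray(ct_ref)
    d_ct_ctrl = np.mean(np.asarray(ct_target_ctrl) - np.asarray(ct_ref_ctrl))
    return 2.0 ** -(d_ct - d_ct_ctrl)

wt = relative_expression([20.1, 20.3, 20.0], [15.0, 15.1, 14.9],
                         [20.1, 20.3, 20.0], [15.0, 15.1, 14.9])
mut = relative_expression([20.6, 20.5, 20.4], [15.0, 15.1, 14.9],
                          [20.1, 20.3, 20.0], [15.0, 15.1, 14.9])
t, p = stats.ttest_ind(wt, mut)   # significance threshold P < 0.05, as stated above
print(f"t = {t:.2f}, P = {p:.3f}")
```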
Clinical characteristics We studied 5 patients with APPL1 variants, four of whom had experienced elevated fasting blood glucose before the age of 25, while the fifth patient developed diabetes at the age of 39 (Table 1).Patient 1 was diagnosed the diabetes at the age of 13, with a family history of diabetes in both his grandmother and father (Figure 1).His grandmother continued taking oral hypoglycemic drugs, while his father managed diabetes through diet control, resulting in normalized blood glucose level.Patient 1 had obvious polyuria, polydipsia, polyphagia, and diabetic ketoacidosis at the onset of the disease.Insulin antibody testing yielded negative results.The patient used insulin therapy after diagnosis, and then his insulin autoantibodies and islet cell antibodies turned positive after 8 years of insulin therapy.Patient 2 developed diabetes at the age of 21.WES of his family revealed that his father had no diabetes but also carried the same mutation.Only his uncle had diabetes in his family.Additionally, patient 2 presented with obesity and ketoacidosis at the onset of the disease.After the ketoacidosis subsided, the patient shifted to diet control. Patient 3 developed diabetes at the age of 13 and had a diabetic grandfather.Although the patient's father carried the same mutation, he remained unaffected by the disease.The patient also had ketoacidosis when he developed diabetes.He stopped taking medication after 2 mo of treatment with insulin combined with oral drugs and now only controls glucose by diet. Patient 4 developed diabetes at the age of 12, and only his father had diabetes in his family.It is worth noting that the patient's blood glucose reached 19.81 mmol/L 2 h after a meal, accompanied by hyperinsulinemia (insulin > 300.00 mU/ L 2 h after a meal).The patient relied on oral medication at the onset of the disease and transitioned to glucose control through diet and increased exercise. Patient 5 developed diabetes at the age of 39.Both his mother and grandfather had diabetes.Patient 5 had a son and a daughter.His daughter carried the variant but as of the writing of this article had not shown any symptoms of diabetes.Patient 5 had been taking oral hypoglycemic drugs since he was diagnosed with diabetes. Identification of novel variants in the APPL1 gene We identified five variants, of which Asp632Tyr, Arg633His, Arg532Gln and Ile642Met mutations are missense mutations, and 1153-16A>T is an intron mutation (Figure 2A).The Asp632Tyr, Arg633His, and Ile642Met variants are located in exon 21, while the Arg532Gln variant is located in exon 17.The 1153-16A>T intronic mutation is upstream of exon 14.These four missense mutations are located in the PTB domain of APPL1, which can bind to the IR and regulate the insulin signaling pathway (Figure 2B).The Arg532Gln, Asp632Tyr, and Arg633His variants all caused changes in the surface potential of APPL protein, while the Ile642Met variant had no obvious abnormality.Among them, the Arg532Gln and Arg633His variants resulted in a decrease in positive surface potential, while the Asp632Tyr variant led to the elimination of negative surface potential (Figure 2C).These changes in potential indicate that the mutation may disrupt the interaction of APPL1 with other macromolecules.Moreover, amino acid mutations can influence protein function and folding by altering hydrophilicity (Supplementary Figure 1). 
Bioinformatic analysis To assess the pathogenicity of the four missense mutations, we employed MutationTaster, PolyPhen-2, and Revel for prediction analysis. Remarkably, all three software tools consistently predicted the Arg532Gln variant as pathogenic, with MutationTaster indicating a high likelihood of pathogenicity (Table 2). The three prediction outcomes for the Ile642Met variant were all benign. Moreover, the Asp632Tyr and Arg633His variants were both predicted pathogenic by MutationTaster but benign by PolyPhen-2 and Revel. Multiple alignments of amino acid sequences demonstrated that the residues at the Asp632Tyr, Arg633His, and Arg532Gln positions are conserved across various species. This implies that mutations there may exert a detrimental impact on the structure and function of the protein, reinforcing their potential pathogenicity. However, we noticed that in multiple species the amino acid at position 642 of APPL1 is not isoleucine but methionine, as in our patients' mutation, indicating that this site may not have a significant influence on the function or structure of the protein (Figure 3). Additionally, the pathogenicity analysis of the intronic mutation showed that the prediction results of MutationTaster and IntSplice were not pathogenic. Functional study of WT and mutant APPL1 in vitro This experiment was used to confirm the pathogenicity of the four missense mutations. As shown in Figure 4A, the mRNA expression of the Asp632Tyr variant was reduced and that of the Arg532Gln variant decreased by 14% (P = 0.035), indicating that mutations at both of these sites resulted in reduced expression of APPL1 mRNA. The Arg633His and Ile642Met variants did not cause significant changes in APPL1 mRNA expression. In the experiment, we also observed that the expression of the mutant proteins was consistent with the mRNA expression. Compared with WT APPL1, the protein band of the Asp632Tyr-APPL1 variant disappeared, indicating that this variant prevents the expression of APPL1 (Figure 4B). In addition, the protein expression of the Arg532Gln variant was significantly reduced compared with WT APPL1, indicating that this mutation also inhibits the expression of APPL1 protein. There was no significant change in APPL1 protein expression with the Arg633His or Ile642Met mutations, suggesting that these two mutations may not affect the expression of APPL1 protein. APPL1 pathway analysis and protein docking prediction To further elucidate the role of APPL1, we searched for APPL1-related protein pathways in the STRING database. Our analysis revealed that the insulin-related pathway protein AKT had the highest binding affinity with APPL1 (Figure 5A). In the AKT pathway, APPL1 can also bind to the IR, which plays a role in insulin sensitization by interacting with the PTB domain, where our four missense mutations are located. As shown in Figure 5B, the NPEY motif of the IR (with TYR-999 as the phosphorylation site) might interact with the amino acids between strand β5 and the C-terminal helix of the PTB domain. Notably, the Asp632Tyr mutation is in the closest proximity to this binding site. Based on this observation, we speculate that this site might be associated with the interaction between APPL1 and the IR.
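The way the three predictors were combined can be illustrated with a small consensus helper. The qualitative outcomes hard-coded below are our transcription of the results just described (Table 2), and the decision rule, flagging a variant only when all tools agree, is our reading of the text rather than a published algorithm:

```python
# Hedged sketch of aggregating pathogenicity predictions across tools.
PREDICTIONS = {
    # our transcription of the qualitative outcomes described above (Table 2)
    "Arg532Gln": {"MutationTaster": "pathogenic", "PolyPhen-2": "pathogenic", "Revel": "pathogenic"},
    "Asp632Tyr": {"MutationTaster": "pathogenic", "PolyPhen-2": "benign", "Revel": "benign"},
    "Arg633His": {"MutationTaster": "pathogenic", "PolyPhen-2": "benign", "Revel": "benign"},
    "Ile642Met": {"MutationTaster": "benign", "PolyPhen-2": "benign", "Revel": "benign"},
}

def consensus(variant: str) -> str:
    calls = PREDICTIONS[variant].values()
    if all(c == "pathogenic" for c in calls):
        return "concordant pathogenic"       # e.g. Arg532Gln
    if all(c == "benign" for c in calls):
        return "concordant benign"           # e.g. Ile642Met
    return "discordant - needs functional follow-up"

for v in PREDICTIONS:
    print(v, "->", consensus(v))
```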
DISCUSSION MODY14 is an extremely rare form of inherited diabetes caused by mutations in the APPL1 gene.So far, only two variants [c.1655T>A (p.Leu552*) and c.281G>A p.(Asp94Asn)] of APPL1 were found to be associated with MODY14.In this study, we identified four novel missense mutations and one intronic mutation in APPL1, representing the largest number of de novo APPL1 mutations reported so far.For the first time, we demonstrated that two missense mutations [c.1894G>T (p.Asp632Tyr) and c.1595G>A (p.Arg532Gln)] in APPL1 are pathogenic.Bioinformatics analysis provided compelling evidence for their deleterious effects.In addition, we further investigated the role of APPL1 in insulin signaling and elucidated its potential molecular mechanisms. To determine whether the APPL1 variants of the patients were related to their diabetes symptoms, we performed a comprehensive analysis combining their clinical manifestations and in vitro functional experiments.Preliminary experimental validation showed that the mutations carried by patients 1 (carrying mutation Asp632Tyr) and 3 (carrying mutation Asp632Tyr) were pathogenic.Considering their age of onset, family history, and the results of bioinformatics analysis, these 2 patients were diagnosed with MODY14.Interestingly, patient 1 had some insulin antibodies turned positive after many years of insulin therapy, suggesting a subsequent development of T1DM.On the other hand, the father of patient 3 carried the mutation but did not develop the disease, indicating potential incomplete penetrance of the mutation.Patients 2 and 4 were young at the onset of the disease and only needed medication or diet control.However, their family history did not align well with the autosomal dominant inheritance pattern.Pathogenicity prediction and functional testing both indicated that their APPL1 variants were non-pathogenic.Therefore, based on the comprehensive analysis, we speculated that patients 2 and 4 were more supportive of the diagnosis of T2DM, especially patient 4, who was overweight at the onset of the disease.We hypothesized that overeating and obesity may contribute to an earlier onset of T2DM in this patient.In addition, patient 5 had a more obvious family history of diabetes but an older age of onset.Taking the pathogenicity analysis into consideration, we propose that patient 5 aligns more closely with the diagnosis of T2DM.Therefore, among the five mutations, c.1894G>T (p.Asp632Tyr) and c.1595G>A (p.Arg532Gln) were pathogenic mutations, and patients carrying these mutations had MODY14. 
Among the three major domains of the APPL1 protein, we found that the four missense mutations were all located in the PTB domain, which can bind to both AKT and IR (mainly).We considered that the pathogenicity of the mutation sites was related to the reduced sensitizing effect of APPL1 in the insulin pathway.After insulin binds to its receptor, APPL1 carries IRS1 and IRS2 to the IR and promotes the binding of the IR and IRS by directly interacting with the IR through its PTB domain [17,21] (Figure 6).The peptide binding site in most PTB domains is located between strand β5 of the central β sandwich and the C-terminal helix [22,23].When we docked APPL1 and the IR, we found that the β subunit of the IR also contained an NPXY motif (NPEY), then the PTB domain docked with it to facilitate the subsequent transmission of insulin signals.Therefore, the mutation of pathogenic sites in the PTB domain of APPL1 may affect the binding of the IR and IRS, leading to an impaired insulin signaling pathway as well as increased blood glucose and insulin resistance.Furthermore, despite the adjacent location of the Asp632Tyr and Arg633His variants, their pathogenicity differs.This observation suggests that the Asp632 site may play a crucial role in binding to proteins associated with the insulin pathway. In addition, the BAR domain of APPL1 can also enhance insulin-stimulated AKT phosphorylation by directly binding to AKT and competitively inhibiting Tribbles homolog 3 (mainly), thereby achieving the effects of lowering blood glucose (activating AKT to inhibit glucagon-induced hepatic glucose production, promoting glucose transporter type 4 translocation and cellular glucose uptake) and insulin resistance [24][25][26][27].It is noteworthy that all the MODY14 patients we identified had mutations located in the PTB domain.Among the previously reported MODY14 patients, the c.1655T>A (p.L552*) mutation was also located in the PTB domain, indicating a high aggregation of mutations in the PTB domain [7].This suggests that compared to the BAR domain, the PTB domain may play a more significant role in insulin pathway signal transduction. Although mutations in APPL1 are relatively rare, recent advancements in exploring its molecular mechanisms and physiological functions have highlighted its key role in regulating glucose metabolism.Through its PTB domain, APPL1 interacts with AdipoR1 and AdipoR2, facilitating the transmission of adiponectin-stimulated signals to downstream targets [28].In addition, APPL1 may provide a way of communication between the adiponectin and insulin signaling pathways, mediating the sensitization effect of insulin on muscle glucose disposal [18,19].A study showed that APPL1 can counteract the high-fat diet-induced insulin resistance and hepatic glucose metabolism disorder, and improve blood glucose levels and insulin sensitivity in mice.Therefore, APPL1 may serve as a potential target for treating diabetes [11].However, it is worth noting that a study reported that the expression of APPL1 in the muscle of T2DM rats was reduced, leading to weakened insulin-induced AKT signal activation [29].To some extent, this consolidates the key role of APPL1 in regulating muscle insulin signaling and metabolism, but in this study, patients 2 and 4 who likely had T2DM did not show a reduction in APPL1 expression in the in vitro functional experiments. 
This study also had some limitations.First, the functional experiments did not fully replicate the real physiological environment and conditions.Hence, we cannot completely rule out the possibility that these mutations might impact the interaction of APPL1 with other proteins or small molecules.Additionally, we did not verify the pathogenicity of the intronic mutation through functional experiments.We also failed to obtain blood samples from some family members for genetic testing.This may result in imprecise estimates of the mode of inheritance and penetrance of the mutation, and the existence of potential epistatic or modifier factors cannot be definitively determined.In the future, a broader range of cell lines or animal models are needed for in vitro and in vivo experiments to further investigate the impact of APPL1 gene mutations on the insulin signaling pathway and other metabolic pathways.Our study only serves as an initial investigation of the pathogenic mechanism of MODY14.At the protein level, aberrantly folded proteins can be degraded by the Figure 1 Figure 1 Pedigree of 5 diabetes patients with novel adaptor protein, phosphotyrosine interacting with PH domain and leucine zipper 1 variants.Males and females are represented by squares and circles, respectively.The black padding suggests that the patient has diabetes, the arrow represents the progenitor, and the horizontal line indicates a patient who has undergone full exon sequencing."a" indicates that the patient has the same mutation as the proband. Figure 2 Figure 2 Distribution of mutation sites in adaptor protein, phosphotyrosine interacting with PH domain and leucine zipper 1 and adaptor protein, phosphotyrosine interacting with PH domain and leucine zipper 1 protein and potential changes in mutation sites.A: Exon and mutation site distribution of adaptor protein, phosphotyrosine interacting with PH domain and leucine zipper 1 (APPL1) gene; B: Domain and mutation site distribution of APPL1 protein; C: Potential change of mutated APPL1 protein.BAR: Bin-Amphiphysin-Rvs; PH: Pleckstrin homology; PTB: Phosphotyrosine-binding; UTR: Untranslated region. 2 Figure 3 Figure 3 Conservation of mutation sites in multiple species. Figure 4 Figure 4 mRNA and protein expression at mutation sites.A: mRNA expression at the mutation site; B: Protein expression at the mutation site.a P < 0.05; b P < 0.001.APPL1: Adaptor protein, phosphotyrosine interacting with PH domain and leucine zipper 1; EV: Empty vector; WT: Wild-type. Figure 5 Figure 5 Adaptor protein, phosphotyrosine interacting with PH domain and leucine zipper 1-related protein networks and the docking between adaptor protein, phosphotyrosine interacting with PH domain and leucine zipper 1 and insulin receptor.AThe adaptor protein, phosphotyrosine interacting with PH domain and leucine zipper 1 (APPL1)-related protein network.The degree of binding between proteins is indicated by the colors from yellow to orange on the nodes.The larger the node, the darker the color, and the higher the degree of binding to the APPL1 protein.The edge shows the association of protein-protein; B: The binding of the phosphotyrosine binding domain of APPL1 to insulin receptor proteins.Red is the phosphotyrosine binding domain, and blue is the insulin receptor protein. Table 2 Pathogenicity analysis of adaptor protein, phosphotyrosine interacting with PH domain and leucine zipper 1 gene mutations APPL1: Adaptor protein, phosphotyrosine interacting with PH domain and leucine zipper 1.
2024-02-06T16:09:46.797Z
2024-02-15T00:00:00.000
{ "year": 2024, "sha1": "094e3734dd6e4dfbcab5aede3185763e7f80f791", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ScienceParsePlus", "pdf_hash": "094e3734dd6e4dfbcab5aede3185763e7f80f791", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
9762949
pes2o/s2orc
v3-fos-license
A comparative analysis of management and prognosis in stage I and II fallopian tube carcinoma and epithelial ovarian cancer. Staging and surgical as well as post-operative treatment of primary Fallopian tube carcinoma (FTC) followed the lines established for primary ovarian cancer (OC). In a nationwide retrospective analysis we were able to find a distinct difference between these two tumours. A total of 262 patients, 68 with FTC and 194 with OC, in stage I and II were included into this study. A univariate as well as a multivariate analysis for survival was performed, including factors such as age, histological type, grading and surgical and adjuvant treatment. A significantly poorer outcome (P = 0.0002) for FTC patients with a 5-year survival of 50.8% compared with 77.5% for OC patients was observed. This finding was persistent and independent of any investigated factor, in univariate as well as multivariate analyses. Therefore, we feel that a more aggressive therapeutic approach to the treatment of FTC even in early stages can be recommended. On the other hand, the retrospective character of our study has to be taken into account. Primary carcinoma of the Fallopian tube (FTC) ranks among the rarest of gynaecological malignancies, with a prevalence reported to be 0.15-1.8% compared with 9.4-15.8% for epithelial ovarian cancer (OC) (Hanton et al., 1966;Dodson et al., 1970;Engeler et al., 1981;Bohme et al., 1992). The average annual incidence of FTC is reported to be 2.9 per million women per year (Pfeiffer, 1989). Since both tumours have their origin in the Mullerian duct, OC and FTC are considered to be closely related (Frick, 1978). Thus, FIGO staging (until September 1991), surgical treatment and post-operative adjuvant therapy of FTC followed the lines established for OC (Hu et al., 1950;Behr et al., 1990;Morris et al., 1990;Pakisch et al., 1990). In most cases 'primary carcinoma of the Fallopian tube' is diagnosed intraoperatively or even as late as in the pathologist's post-operative histological examination; preoperatively, the tumour is mostly diagnosed as 'ovarian carcinoma' or 'malignant process in the adnexa' (Jones, 1965). The present retrospective study analyses data over a 10-year period (First and Second Multicenter Studies on Ovarian Carcinoma in Austria and First Multicenter Study on Carcinoma of the Fallopian Tube in Austria), and aims at evaluating the prognostic characteristics of the two diseases. Patients During the period 1980 to 1990, patients operated on for epithelial ovarian carcinoma or primary carcinoma of the Fallopian tube in stage I and II were entered into this retrospective study. Data on patients with Fallopian tube carcinoma were taken from a retrospective, multicentre analysis, including 23 gynaecological departments, and have been recently reported. (Rosen et al., 1993). Data for ovarian carcinoma were received from the University of Vienna (1st and 2nd Departments of Obstetrics and Gynecology) and were collected and analysed by the second author (P.S.) at the University of Vienna, Austria. They involved patients with OC who had been entered into two multicentre studies, from all over Austria. FTC as well as OC patients were followed until the control date, October 1992. Patients with metastatic tumours, with a history of other malignancies and with borderline tumours were excluded from this study. 
For the staging of Fallopian tube carcinoma the new FIGO classification, adopted in Singapore in 1991, was used, whereas for ovarian carcinoma the established FIGO classification was applied. A total of 68 patients with primary cancer of the Fallopian tube (FTC) in FIGO stage I and II were included in this study and were compared with 194 patients with ovarian carcinoma (OC) in the same stages (Table I). The mean age of the Fallopian tube patients was 60.4 years; the mean age of the ovarian carcinoma patients was 56.1 years. Histological evaluation and grading for FTC followed the criteria of Hu et al. (1950). The histological evaluation of the epithelial ovarian cancers followed WHO criteria (Serov et al., 1973). Histological grading ranged from G1 for well-differentiated to G3 for undifferentiated ovarian carcinomas and followed the criteria of Day et al. (1975). Borderline tumours (G0) were excluded from this study. The participating departments provided the study centre with histological specimens, which were evaluated by an independent pathologist (A.R.) for grading and histological type. Total abdominal hysterectomy (TAH) with bilateral salpingo-oophorectomy (BSO) and additional infracolic resection of the omentum, with or without lymphadenectomy, was achieved in 22 (32.4%) patients in the FTC group and 96 (49.5%) patients in the OC group (P = NS). Post-operative radiotherapy was performed in 31 (45.6%) women with FTC and 65 (33.5%) women with OC, using whole-abdominal irradiation with open-field techniques and a total dosage of 45-55 Gy. The source of radiation was cobalt-60 in all patients and was applied within 6 weeks after surgery. Twenty-one women (30.9%) with FTC underwent chemotherapy compared with 59 (30.4%) with OC. The post-operative chemotherapy regimen varied from department to department and changed between the early 1980s and 1990, but in most of the reported cases a cisplatin-containing polychemotherapy regimen was administered, with a cisplatin dosage of 50 mg m-2 until 1984, increasing to up to 100 mg m-2 thereafter. Sixteen (23.5%) patients with FTC and 70 (36.1%) patients with OC did not receive any adjuvant therapy because their tumours were in stage IA and of histological grade G1 (P = NS). Statistical methods Results expressed as percentages were subjected to a chi-square test. Survival curves were obtained by the Kaplan-Meier method, and median survival was compared by the Mantel-Cox log-rank test (Kaplan & Meier, 1958; Cox, 1972; Mantel, 1986). Patients who died from any cause other than the primary disease were censored. Five patients with FTC died for reasons other than the primary disease, compared with 12 patients with OC. Survival was regarded as the period from first treatment for OC or FTC until the time of death due to this disease or until the control date. Values of P < 0.05 were considered to be statistically significant. Cox proportional hazards regression (Cox, 1972), as implemented by the program BMDP 2L (Dixon et al., 1990), was used to analyse the role of prognostic factors in survival, both in a marginal, unadjusted sense and in a partial, adjusted sense. In this analysis the prognostic strength of a factor is described by estimates of the relative risk and by the corresponding 95% confidence interval for the relative risk. Two-sided P-values permit a judgement as to whether the relative risk differs significantly from 1. A schematic re-implementation of these analyses with current tooling is sketched below.
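In the sketch, the lifelines package is our substitution for BMDP 2L, and the data-frame contents are invented toy values, not study data:

```python
# Hedged sketch of the survival analysis described above, using the `lifelines`
# package in place of BMDP 2L; all values below are toy data, not the study's.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "months": [12, 40, 60, 75, 90, 110],   # survival from first treatment
    "died": [1, 1, 0, 1, 0, 0],            # 1 = death from the primary disease
    "ftc": [1, 1, 1, 0, 0, 0],             # 1 = FTC, 0 = ovarian carcinoma
    "high_grade": [1, 0, 1, 1, 0, 0],      # G2 + G3 vs G1
})

ftc, oc = df[df.ftc == 1], df[df.ftc == 0]

# Kaplan-Meier estimate per tumour type and a log-rank comparison
km_ftc = KaplanMeierFitter().fit(ftc.months, event_observed=ftc.died, label="FTC")
print("FTC median survival:", km_ftc.median_survival_time_)
result = logrank_test(ftc.months, oc.months,
                      event_observed_A=ftc.died, event_observed_B=oc.died)
print("log-rank P =", result.p_value)

# Cox proportional hazards regression over the prognostic factors
cph = CoxPHFitter().fit(df, duration_col="months", event_col="died")
print(cph.summary[["exp(coef)", "p"]])     # relative risks and two-sided P-values
```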
Wherever feasible, hazard plots were performed to assess the appropriateness of the proportional hazards assumption that underlies the Cox regression model. Log-likelihood ratio tests were used to determine the significance of factor combinations. Survival Survival data, describing the impact of various prognostic factors, are given in Table II. The results of the Cox analysis are shown in Table III. The fit of the model was checked by considering interaction and polynominal terms in a stepwise modelling process. Based on these analyses it can be concluded that a main-effects model suitably summarises the survival experiences of the patients. The results show that the presence of FTC was the most important adverse prognostic factor, the next being a higher degree of dedifferentiation (G2 + G3). Furthemore, age had a significant influence on survival (Table III). Discussion Carcinoma of the Fallopian tube and of the ovary share similar histological features and arise from continuous structures. Because of this and the limited experience with this disease, FTC is often managed along similar lines to OC (Gurney et al., 1990;Morris et al., 1990). FTC spreads within the abdominal cavity in a manner similar to OC, first contiguously by invasion of adjacent organs (Erez et al., 1967;Benedet et al., 1977;Henderson et al., 1977), second by lymphatic pathways, and third by haematogenous spread (Engstrom, 1957;Benedet et al., 1977;Yoonessi, 1979). Symptoms of FTC are predominantly non-specific (uterine bleeding, pelvic and/or abdominal pain, abnormal vaginal discharge, abdominal distension and ascites with or without intestinal symptoms, and pelvic mass). This might explain the low rate (2%) of preoperative diagnosis (Jones, 1965;Yoonessi, 1979). FTC closely resembles OC with one striking difference, i.e. that in FTC abdominal pain is a frequent and early complaint (Roberts & Lifshitz, 1982). It seems that patients are able to seek medical attention earlier because FTC tends to present at an earlier stage than OC (Rosen et al., 1993). Gurney et al. (1990) emphasises the same biological response of FTC and OC to therapy. We cannot share this view because of the evidently worse prognosis for FTC in stage I and II despite the same treatment and the earlier diagnosis of FTC (Gurney et al., 1990;Rosen et al., 1993). (Figure 1). On the whole, FTC patients have a significantly worse outcome irrespective of their histological type or grading, though within the FTC groups GI tumours proved to have a better prognosis than G2 and G3 tumours. The difference in survival caused by the presence of FTC is persistent in univariate as well as in multivariate analysis and has an influence independent of any applied treatment modality (Tables II, III and Figure 1). Unlike OC, there are no specific therapeutic guidelines available for FTC. The literature offers only retrospective studies and reports on series too small to allow definitive conclusions (Phelps & Chapman, 1974;Morris et al., 1990;Pakisch et al., 1990;Barakat et al., 1991). Our study too, though based on a homogeneous patient series of 68 FTC cases (Rosen et al., 1993), is retrospective and cannot provide conclusive guidelines for therapy. Yet, we feel that some recommendations can be given. Postoperative treatment of FTC, either chemoor radiotherapy, which hitherto followed the example of OC, should be actively pursued, and we think that the decision to apply adjuvant treatment in FTC patients should be made even in earlier stages. 
Patients with FIGO stage IA, in particular, should receive adjuvant treatment as well, irrespective of their histological grading and in contrast to OC, so that a benefit from early diagnosis might be achieved. However, to determine definitive guidelines for the treatment of FTC, multicentric, prospective (probably international) trials will be mandatory.
Sustainable Distribution of Responsibility for Climate Change Adaptation

To gain legitimacy for climate change adaptation decisions, the distribution of responsibility for these decisions and their implementation needs to be grounded in theories of just distribution and in what those affected by decisions see as just. The purpose of this project is to contribute to sustainable spatial planning and the ability of local and regional public authorities to make well-informed and sustainable adaptation decisions, based on knowledge about both climate change impacts and the perceptions of residents and civil servants on what constitutes a sustainable distribution of responsibility. Our aims are: (1) a better understanding of the practical implications of theories about just distribution of responsibility for the choice of local and regional climate adaptation measures; (2) knowledge about what residents and civil servants consider a sustainable distribution of responsibility for climate adaptation measures; and (3) a better understanding of conflicts concerning the distribution of responsibilities and systematic knowledge about the possibilities to manage them. In this interdisciplinary project, we study six municipalities and their residents, and two county administrative boards, all in Sweden, using mixed methods: value theory, document studies, interviews, focus groups, and surveys.

Introduction
The distribution of responsibility for climate change adaptation is an increasingly important issue in the context of spatial planning. Over recent years, in Sweden and elsewhere, major climate-related events, such as flooding, have resulted in conflicts over what public authorities should do to protect citizens and property against the negative impacts of climate change, and what the private sector, including citizens, should do. With global warming increasing, negative effects on society will grow, and so will conflicts over who should assume responsibility to protect society against those impacts. Climate adaptation is a matter of political and ethical concern. It is therefore not sufficient to see it only as a planning issue that needs efficient solutions, as is the case in much of existing research and in a Swedish public investigation on responsibility for climate adaptation from 2017 [1]. Such an approach will lead to higher levels of societal conflict over climate adaptation and lower levels of legitimacy for climate adaptation measures. Issues regarding who decides what is worth protecting, who implements adaptation measures and who pays for them, as well as who decides what is not to be protected, need to be discussed transparently. Otherwise measures might not be seen as just by those affected and consequently not become sustainable. Adaptation decisions, therefore, need input not only from those affected by negative impacts induced by climate change but also from those affected by adaptation measures. For the same reason, adaptation decisions also need input from research on just distribution. Our aims are:
1. A better understanding of the practical implications of theories about just distribution of responsibility for the choice of local and regional climate adaptation measures.
2. Knowledge about what residents and civil servants consider a sustainable distribution of responsibility for climate adaptation measures.
3. A better understanding of conflicts concerning the distribution of responsibilities and systematic knowledge about the possibilities to manage them.
In order to realize these aims, the project will study: the theoretical foundations for sustainable distributions of responsibility applied to climate adaptation (module 1); what is seen as a sustainable distribution of responsibility by Swedish local and regional stakeholders (module 2); and, on the basis of the theoretical and empirical studies, conflicting perceptions and the possibilities to manage them in order to enable a just distribution of responsibility for climate adaptation (module 3). The research project will contribute to the fulfilment of three of the UN sustainable development goals: sustainable cities and communities; climate action; and reduced inequality. It will also contribute to the fulfilment of two of Sweden's environmental objectives: reduced climate impact; and a good built environment. The project will do this by contributing knowledge about how cities and communities can work with climate adaptation in a sustainable and inclusive way. The project will further enhance awareness about these issues among citizens and policymakers. The project is interdisciplinary and studies ethical, political, and organizational aspects of the distribution of responsibilities regarding local and regional climate change adaptation. This integration of perspectives is crucial to fulfilling the project's purpose and aims. The interdisciplinary character of the project is novel in this type of research and will enable the generation of more comprehensive knowledge of responsibility distributions for climate adaptation and support for policy-making that can be applied across spatial planning issues and geographical contexts.

Research Review
There is extensive research on climate adaptation (e.g., [2,3]). However, it is only over the past 15 years that research within the social sciences and humanities has gained ground, and it is now growing rapidly. A growing field is research on climate adaptation policies in different countries [4], many focusing on a European or industrialized-country context [5,6]. These studies take stock of policies and explain differences between countries, to some extent including responsibility issues, and study the efficiency of these policies [7]. However, they pay limited attention to normative aspects [8]. Research on normative aspects of local adaptation is still rare [3,9]. Another growing field is research on local climate adaptation, predominantly in cities [10,11], with some studies focusing on normative issues, although limited to different groups in the city [12]. Normative aspects of the distribution of responsibility concerning climate change have mostly been studied in the context of mitigation and as an international issue concerning the distribution among countries or generations [21][22][23][24][25], often with the literature on environmental justice as a starting point. This literature deals with a desirable distribution and a just distributive process [26], yet, with some exceptions [21,27,28], gives less attention to local and regional decision-making. Some have discussed differences between mitigation and adaptation, and what this implies in terms of just distribution of responsibility [29,30]. Some researchers study how responsibility for climate adaptation is distributed today, with a focus on who has responsibility, and to some extent on what grounds [8]. This research is prominent in the Netherlands [31], with fewer case studies from other countries [32], or comparisons between countries [33,34].
These studies focus on the distribution of responsibilities between public and private actors, and between different political levels, with the conclusion that public-private alternatives are necessary for effective adaptation. They also study what happens with the existing distribution of responsibility when the climate is changing, in terms of both conflicts and principles for decision-making. Principles for the distribution of responsibility are found in international agreements, national (predominantly Dutch) law, and environmental research [8,35]. A recent study investigates the normative perspectives in UK citizens' perceptions of climate adaptation, but focuses only partly on responsibility distribution [9]. We will draw on this literature, but provide a more thorough understanding of normative principles for just distribution and how they can be applied to local and regional climate adaptation, coupled with knowledge of the perceptions of Swedish stakeholders, which so far is lacking.

Project Design
The project has three modules (see Table 1 for an overview of tasks within each module).

Module 1
Module 1 is focused on fulfilling aim 1. Responsibility can be understood in many different ways. The literature on responsibility abounds with different taxonomies (e.g., [36][37][38]). Many of these focus foremost on what is often called retrospective responsibility (e.g., [38]). This type of responsibility is connected to accountability, answerability, and liability. Responsibility here means to answer for something after it has happened, either on legal or on moral grounds. In this project we are not interested in retrospective responsibility. Instead, we focus on what Cane [37] calls prospective responsibility: a forward-looking responsibility of, for example, ensuring that decisions are made and action taken. There is substantial overlap between the two types: if you have a prospective responsibility, you could be held accountable if you do not succeed. However, a prospective responsibility does not have to imply this. For example, scientists can be said to have a prospective responsibility for communicating knowledge, but are usually not seen as accountable or liable for their advice. The question of just distribution is discussed in the literature in relation to several issues, including the distribution of rights, wealth, and responsibility. The question of what constitutes a just distribution is basically the same in the different cases, though the way it is discussed differs slightly depending on whether the distribution is assumed to have a positive value or a negative value. Responsibility in the context of climate adaptation is an example of the latter. The standard division of theories of just distribution is based on five main categories: equality, guilt/merit, ability, need, and efficiency [39,40]. This division mirrors different normative principles, where the first and fifth aim at realizing certain values (equality and maximizing good, respectively), while the other three are centered on purportedly relevant features/roles in acting individuals (or legal persons) [22,40,41]. These five basic distributions can be further divided into sub-categories and combined in different ways. We assume that justification of decisions regarding the distribution of responsibilities for climate adaptation will have to consider all of the above principles to some degree. We will apply these principles to actual adaptation measures.
The aim is to provide a better understanding of the practical implications of normative principles for the choice of local and regional climate adaptation measures. We will first identify a range of climate adaptation measures in the academic and grey literature [42] applicable to the Swedish context. For these adaptation measures, we will systematically analyze what the application of each of the normative principles implies for the distribution of responsibility over four categories: initiative/decision, implementation, payment, and residual risk [43]. See Figures 1 and 2 for illustrations of this analysis; a sketch of the resulting analysis grid follows below. The in-depth normative understanding of the distribution of responsibility can be used across spatial planning issues and geographical contexts. The four categories of responsibility capture main aspects of a prospective responsibility. The four are wide categories, incorporating several aspects. For example, the implementing category covers implementation of decisions, monitoring, and evaluation [31]. If necessary, the categories will be refined for the empirical analysis.
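As a purely illustrative sketch, the analysis can be organized as a grid crossing the five principles with the four responsibility categories for each measure. The principle and category names follow the text; the data structure and the example entry are hypothetical, not the project's actual tooling.

```python
# Illustrative only: a template for the module 1 analysis, crossing the five
# distribution principles with the four responsibility categories for a given
# adaptation measure. Names follow the text; everything else is hypothetical.
from itertools import product

PRINCIPLES = ("equality", "guilt/merit", "ability", "need", "efficiency")
CATEGORIES = ("initiative/decision", "implementation", "payment", "residual risk")

def analysis_grid(measure: str) -> dict:
    """Empty principle x category grid to be filled in for one measure."""
    cells = {(p, c): None for p, c in product(PRINCIPLES, CATEGORIES)}
    return {"measure": measure, "cells": cells}

grid = analysis_grid("flood wall")
# Example entry, paraphrasing the recovered Figure 1/2 content below: under the
# ability principle, implementation responsibility usually falls on municipalities.
grid["cells"][("ability", "implementation")] = "usually municipalities"
print(grid["measure"], len(grid["cells"]))  # flood wall 20
```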
Module 2
Module 2 is focused on fulfilling aim 2. In order to study empirically what residents and civil servants perceive as a just distribution of responsibilities for climate adaptation, we will focus on municipalities and county administrative boards (CABs). According to Swedish legislation [44], authorities are responsible for providing reliable information about local climate change effects, developing plans for threatened areas, and spatial planning. While residents are responsible for protecting their property, authorities have a role in guiding them. There are, however, extensive opportunities for municipalities to interpret how the distribution of responsibility should be structured and which normative principles to follow.
[Recovered content of Figures 1 and 2: the five principles applied to the four responsibility categories for the example measure of building a flood wall.
Initiative/decision. Guilt: those who buy property in risk-prone areas have responsibility to protect their property (both private and public property owners). Ability: municipalities will usually have a large ability based on their knowledge, although other actors are also conceivable (e.g., actors with knowledge of adaptation or wall construction). Need: the more you have to lose, the larger your responsibility (property owners and residents). Efficiency: the distribution that is most efficient in terms of, e.g., fast construction, expense, and/or reliability (usually municipalities).
Implementation. Equality: everyone takes part in the implementation to the same extent (not conceivable). Guilt: those who buy property in risk-prone areas have responsibility to build the wall (both private and public property owners). Ability: the more you know about how to build walls or contract wall builders, the larger your responsibility (usually municipalities). Need: the more you have to lose, the larger your responsibility (property owners and residents). Efficiency: the distribution that is most efficient in terms of, e.g., fast construction, expense, and/or reliability (usually municipalities).
Payment. Equality: everyone pays the same amount (fee or tax, not depending on income). Guilt: those who buy property in risk-prone areas pay more. Ability: the more money you have, the more you pay (fee or tax). Need: those who are at risk pay for the wall (property owners and residents). Efficiency: the distribution that is most efficient in terms of, e.g., fast construction, expense, and/or reliability.
Residual risk. Equality: everyone pays the same amount if something happens (through some form of public insurance, not depending on income). Guilt: those who built the wall pay for damages that the wall could not stop from happening (e.g., municipalities or private property owners). Ability: the more money you have, the more you pay (tax). Need: those at risk take out an insurance policy. Efficiency: the distribution that is most efficient in terms of, e.g., managing the residual risk in relation to the cost of building the wall.]
The perception among authorities of what constitutes a just distribution will likely be affected by adaptation decisions already made. Therefore, we will study these decisions in six municipalities and two CABs (see Section 3.4. Case Selection) through document studies. Although not explicitly dealing with normative considerations, these decisions still provide information about normative positions. To study perceptions of a just distribution in the municipalities and CABs, we will interview civil servants from different parts of the administrations working with climate adaptation and spatial planning. The interviews, which will be recorded and transcribed, will be exploratory [45] and semi-structured [46]. The analysis will seek to detect which normative principles the civil servants emphasize in relation to the different responsibility categories. The residents' perceptions of what constitutes a just distribution will be studied using a mixed-methods approach [47] combining a quantitative survey study and a qualitative focus group study. The research data will be collected sequentially, with the results of the survey study forming the basis for the focus group study. Focus groups will be utilized to achieve a more nuanced and in-depth understanding of the results from the survey about why individuals prefer different combinations of distribution principles and about conflicts between different priorities. The data will be analyzed to establish which normative principles residents see as important for a sustainable distribution of responsibility. The survey allows us to study attitudes towards a just distribution of responsibility among the wider population. This enables us to identify differences between young and old, low- and high-income, more and less educated, and more and less vulnerable residents. There could also be differences between those already affected by climate-related events and those who are not [48]. The survey will reach a random sample of all adults registered as residents in the municipalities, in total 6000 residents. In addition to background questions about age, gender, income, education, etc., and a few questions about their acceptance of climate science and experiences of climate change, we will also ask about their attitudes towards different ways of distributing responsibility. The latter is primarily done through a series of statements that the respondents are asked to grade according to the extent to which they agree, from 1 (do not agree) to 7 (strongly agree). The answers will be on nominal and ordinal scales, and non-parametric statistical methods will be used for the analysis (a sketch follows below). The purpose of these questions is not to test the respondents' knowledge about legal responsibility or to find out what measures they have actually taken or believe have been taken by different actors, but to understand what they perceive as a just distribution. We are aware that questionnaires are never completely free from bias and that respondents may have different reasons for answering the way they do. We believe, however, that this is the best way of getting as close as we can to understanding the distribution of different perceptions of what a just distribution of responsibility means in relation to climate adaptation in the population of the studied areas.
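A minimal sketch of the kind of non-parametric analysis this implies is given below. The column names and grouping variables are hypothetical illustrations, not the project's actual analysis plan.

```python
# Illustrative sketch of non-parametric tests on ordinal survey answers
# (1-7 agreement scores). File and column names are hypothetical.
import pandas as pd
from scipy.stats import mannwhitneyu, kruskal

survey = pd.read_csv("survey.csv")  # hypothetical export of responses

# Compare agreement with an ability-based distribution between residents
# who have and have not experienced a climate-related event.
affected = survey.loc[survey["experienced_flood"] == 1, "ability_item"]
unaffected = survey.loc[survey["experienced_flood"] == 0, "ability_item"]
u, p = mannwhitneyu(affected, unaffected, alternative="two-sided")
print(f"Mann-Whitney U = {u:.0f}, p = {p:.4f}")

# Compare the same item across more than two groups (e.g., income bands).
groups = [g["ability_item"].values for _, g in survey.groupby("income_band")]
h, p = kruskal(*groups)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")
```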
Focus groups allow for a broad set of opinions to be voiced and discussed [49]. In our case, this implies that participants need to qualify and justify their normative positions regarding responsibility to the other participants. We will conduct twelve focus group sessions, two in each municipality, with 8-10 participants in each group. The reason for holding two focus groups in each municipality is to reduce the risks connected with focus groups, including that the participants are biased in some direction, and that the discussion is dominated by one person. The focus groups will consist of individuals who live or are active (through, for example, owning a company) in the municipality. Invitations to the focus groups will be advertised in local Facebook groups and sent out through local branches of civil society organizations. The selection of participants will be made to cover as wide a group as possible, yet the participants are not seen as representing a specific group. Instead, the purpose of the selection is to increase the possibility of many different normative principles being voiced. We are interested in the justifications that participants give for their positions. This will provide an in-depth understanding of how they think about a just distribution of responsibility, and of where there is consensus and conflict. Participants will discuss how responsibility can be distributed based on a scenario developed from regional climate change scenarios [50], a general flooding scenario and risk assessments [51] in their municipality. The scenario will be presented using maps, photos and other graphics to illustrate the flood and its effects. We will also provide a description of the course and the consequences of the scenario. The conversations will be recorded and transcribed. Observations made during the discussions will be written down. The combination of these methods will give us a good understanding of what different groups of residents and civil servants perceive as a just distribution, and of areas of consensus and conflict.

Module 3
Module 3 is focused on fulfilling aim 3. Any distribution of responsibility will benefit some and disfavor others. For a distribution to be truly sustainable, it has to be perceived as morally acceptable by both those disfavored and those benefitted, as well as by public authorities. Based on the results of modules 1 and 2, module 3 is focused on creating a better understanding of conflicts among the perceptions held by different groups of residents and civil servants. We will systematically investigate possibilities for municipalities and CABs to manage these conflicts. In most cases, a mix of normative principles, both for the different categories of responsibility and for different adaptation measures, is needed for finding a sustainable distribution. The task in this module is to find distributions of responsibility that can generate public decisions perceived as just. The analysis will take account of how responsibility distributions affect different groups, as well as their potential for effective climate adaptation. The methodology developed in module 3 will be broadly applicable.

Case Selection
The studied municipalities are selected to cover different locations and sizes: three from the County of Skåne (Malmö, Ängelholm, and Vellinge) and three from the County of Västra Götaland (Göteborg, Uddevalla, and Skövde), see Figure 3.
The municipalities are all at risk for future negative climate change impacts. Two municipalities in each region have experienced climate change-related events with a major disruptive effect, whereas one municipality per region has not, see Table 2. The selection of municipalities will ensure variation in location, size, and previous experience of impacts. The CABs of Skåne and Västra Götaland will also be studied. All eight authorities have agreed to participate in the study. We have chosen to focus on flooding (with different causes, such as sea level rise and cloudbursts), as it has been highly associated with climate change in the public debate. It is therefore likely that residents to some degree have considered flooding as a future climate risk and have possibly also considered the normative aspects of flooding.
As flooding has similar characteristics to other climate change impacts, such as heat waves, in terms of the relation between public and private stakeholders, the normative perceptions regarding responsibility distribution will probably be similar.

Plan of Implementation
The first year, 2019, will be devoted to modules 1 and 2. The work in module 1, focused on aim 1, will lay the groundwork for module 2, in which we will conduct document studies at regional and local levels of existing adaptation decisions during the first and second year of the project. Interviews and focus groups, including the development of local scenarios, will be prepared, and work with the survey will be initiated. During 2020-2021, we will collect data for module 2, working towards aim 2. The survey, the scenarios for the focus groups and the interview guide will be completed, and the interviews and survey conducted. Thereafter, the focus group studies will be conducted. The data will be analyzed. During 2021 and the first part of 2022 we will work on module 3 and publish and disseminate the results. Work towards aim 3 will be based on results from modules 1 and 2. We will publish a policy report and conduct stakeholder workshops (see Section 6. Stakeholder Communication).

Plan for Scientific Publication
The results will be published with open access in four articles in high-impact peer-reviewed journals (see Table 3). We will present the results at academic conferences focusing on climate change, climate adaptation, risk, and decision-making. During the last year, we will organize a workshop on just distribution of responsibilities for climate adaptation. The workshop papers will be published as a special issue.

Ethical Considerations
The project will deal with sensitive personal data. Thus, in accordance with Swedish law, the project needs ethical approval. Further, the project will only use information collected from individuals who give explicit consent to participate in the study and to the scientific use of anonymized information (interview and focus group studies) and information in aggregated form (survey study), respectively, after having been informed about the purpose of the study.

Gender and Social Aspects
Gender and other social aspects are central to the project. Key among these are income and education, as they determine one's ability to make adaptation decisions, although gender can also be important. In the literature on climate adaptation and risk, "vulnerable" groups are often discussed, including the elderly and low-income groups. We will explore these issues through the principle of ability-based distribution (one of the five principles), which implies that those with higher capacity in terms of, for example, knowledge and financial resources, should take on a larger responsibility for adaptation.

Societal Relevance
The project is concerned with the need to adapt to climate change and the questions of how this should be done and who should do it. There are currently conflicts concerning how, for example, the effects of extreme weather should be handled [9]. For example, residents feel let down by their municipalities when they fail to prevent repeated flooding. Even in the day-to-day work in municipalities, the issue of who should be responsible for climate adaptation is disputed. One example of this is when CABs reject municipal local plans (detaljplaner) based on a lack of climate adaptation measures. When the effects of climate change worsen, so will the conflicts.
Increased pressure at the local level will raise demands from residents for more public support and action. At the same time, increased costs due to climate change impacts will reduce the support capacity. Further, within 50-100 years it is quite possible that municipalities will have to abandon parts of their cities due to raised sea levels (foremost in the region of Skåne) or repeated flooding. These changes will lead to increased tension and to conflicts between different citizen groups, between citizens and public authorities, and within municipal organisations. All adaptation measures will have both benefits and disadvantages. Some parts of the population will suffer under the disadvantages, whereas others will mostly reap the benefits. This inequality concerns climate change impacts, as well as the responsibility for adaptation. For climate adaptation to be sustainable, it is crucial that it is not only seen as a scientifically-based spatial planning issue, but also as a normative one. If municipalities do not take the distributive effects of climate adaptation seriously, the risk of conflicts could increase unnecessarily. If public authorities, in their spatial planning, are aware of the potential conflicts relating to the distribution of responsibilities for climate adaptation, these conflicts can be managed more efficiently. By being transparent and systematic in the treatment of these issues, municipalities can make better-informed decisions while increasing the legitimacy and sustainability of these decisions [9]. The project will provide a knowledge base regarding the application of principles for a just distribution of climate adaptation responsibility, as well as regarding how civil servants and residents understand that responsibility. Further, the project will contribute a systematic investigation of the possibilities to manage conflicts over these issues. The combined knowledge can guide policy-makers and spatial planners to make sustainable decisions based on knowledge of both climate change impacts and what residents and civil servants see as just adaptation actions. An important point of departure for this project is that even if public authorities do not make decisions to deliberately distribute responsibility for climate adaptation, all decisions implicitly distribute this responsibility. This means that even seemingly non-normative decisions have normative implications [9]. By making this explicit, public authorities can manage conflicts so that residents perceive decisions to be just. A public investigation from 2017 on climate adaptation [1] is focused on the distribution of legal responsibility between different actors in Sweden. Its conclusion is that the current distribution is unreasonable, as private actors will in the future bear disproportionately high risks. However, the investigation does not consider the normative aspects of the distribution any further and thus cannot suggest strategies for how conflicts can be managed; we argue that this is necessary. The investigation only considers legal responsibility, whereas we consider a wider responsibility.

Stakeholder Communication
The plan for stakeholder communication is based on insights from research on knowledge utilization, foremost the importance of including stakeholders throughout the research process and the need for meetings and oral communication [52,53]. In our own experience, these aspects are crucial for improving the usability and use of scientific knowledge.
The most central stakeholders for the project are Swedish municipalities, which through spatial planning will have a major influence on the practical distribution of responsibility for climate adaptation. The CABs are stakeholders responsible for regional strategies and can reject local municipal plans according to national legislation. A third group of stakeholders is national public authorities, including agencies such as the Swedish Civil Contingencies Agency and the National Board of Housing, Building and Planning, in their capacity to decide on national strategies and advise municipalities. A fourth stakeholder group is the public, as they are affected by the decisions on climate adaptation. Our results can influence how public authorities at all levels work with climate adaptation in a more sustainable direction. The project can also make municipal residents more aware of spatial planning issues and climate adaptation and help them feel more included in planning processes. The project will enable interaction with stakeholders throughout the project duration with the aim of (1) allowing input from stakeholders, (2) enabling communication between different stakeholder groups, (3) communicating the results of the project, and (4) increasing awareness about responsibilities for climate risks in society. The first aim will be reached through the collection of data and through a stakeholder workshop. The interviews with civil servants will, in addition to enabling data collection, also let civil servants share ideas with us, which will be important input during the project. The stakeholder workshop will be held during year three to discuss what the studied municipalities and CABs, as well as national agencies, see as important aspects of our research, and ideas about how they could use the results, which will be utilized for the final deliverables. For our second aim, the focus groups will be crucial. The focus groups will include residents with different perspectives on climate adaptation and on the distribution of responsibility. The stakeholder workshop will also be important in this regard, as it allows actors from different political levels to meet. We will reach the third aim through communication of our results towards the end of the project. This communication will be directed at a wider group of municipalities, CABs, and national agencies. An important communication channel is the Swedish Association of Local Authorities and Regions (SKR), which has agreed to spread our results through its newsletter and networks. We will also present our results in workshops, co-organized with CABs, to reach a large number of municipalities. As part of the communication, we will also write a policy report directed at civil servants and politicians at all levels. The policy report will be printed in a report series and will be available online. Further, the main results will be presented at a conference attended by policymakers, organizations, and academics. The aim of the communication during year three is also to increase awareness of the importance of responsibility distribution for climate adaptation decisions, thereby also fulfilling the fourth aim. Throughout the project, we will write debate articles in newspapers and engage in public lectures. The aim is to increase the general awareness of responsibility for climate adaptation.
The Regulation of Matrix Metalloproteinase Expression and the Role of Discoidin Domain Receptor 1/2 Signalling in Zoledronate-treated PC3 Cells

Discoidin Domain Receptors (DDR1/DDR2) are tyrosine kinase receptors which are activated by collagen. DDR signalling regulates cell migration, proliferation, apoptosis and matrix metalloproteinase (MMP) production. MMPs degrade extracellular matrix (ECM) and play an essential role in tumor growth, invasion and metastasis. Nitrogen-containing bisphosphonates (N-BPs), which strongly inhibit osteoclastic activity, are commonly used for osteoporosis treatment. They also have an MMP inhibitory effect. In this study, we aimed to investigate the effects of zoledronate in PC3 cells and the possible role of DDR signalling and downstream pathways in these inhibitory effects. We studied messenger RNA (mRNA) and protein expressions of MMP-2, -9, -8, DDR1/DDR2 and type I procollagen (TIP), and mRNA levels of PCA-1, MMP-13 and DDR-initiated signalling pathway players including the K-Ras oncogene, ERK1, JNK1, p38, AKT-1 and BCLX in PC3 cells in the presence or absence of zoledronate (10-100 μM) for 2-3 days. Zoledronate (100 μM) down-regulated DDR1/DDR2 and TIP mRNAs but did not change MMP-13 (collagenase-3) mRNA. However, zoledronate up-regulated MMP-8 (collagenase-2) mRNA. Zoledronate also inhibited mRNA expressions of K-Ras, ERK1, AKT-1, BCLX and PCA-1, but did not change JNK1 or p38 mRNA levels. Zoledronate (100 μM) suppressed DDR1/DDR2 and TIP expressions, and gelatinase (MMP-2/MMP-9) expressions/activities. Conversely, zoledronate up-regulated MMP-8 expression in PC3 cells. Zoledronate down-regulates MMP-2/-9 expressions in PC3 prostate cancer cells. DDR1/DDR2 signalling and the DDR-initiated downstream Ras/Raf/ERK and PI3K/AKT pathways may be at least partially responsible for the MMP inhibitory effect of zoledronate.

Introduction
Prostate cancer is one of the leading causes of cancer-related death in men in the world. Despite the pharmacological or surgical therapeutic strategies which reduce testosterone levels, the cancer frequently progresses androgen-independently to a metastatic phenotype [1]. Therefore, it is essential to establish new potential targets for therapy. Collagen is a major constituent of the extracellular matrix (ECM) and also a signalling molecule [2]. The recently described cell surface receptors for collagen are the Discoidin Domain Receptors (DDR1/DDR2). DDRs represent a family of tyrosine kinase receptors which are activated by collagen [3,4]. Activation of DDR1/DDR2 triggers downstream signalling pathways and plays an essential role in cell differentiation, proliferation and migration, and contributes to carcinogenesis [5,6]. Abnormal DDR function was recently shown in various human cancers [7,8]. Accordingly, DDR signalling has been suggested to be a key potential target in cancer therapy. Ras mutations cause activation of downstream effector pathways which are well characterized as the Ras/Raf/MEK/ERK and PI3K/AKT cascades [9]. These pathways regulate gene expression programs that promote cell growth, proliferation and survival. DDR signalling was reported to activate the Ras/ERK MAPK and PI3K/AKT cascades in human cancers [8,9].
Matrix metalloproteinases (MMPs) are a family of zinc-dependent proteolytic enzymes which degrade ECM components, including collagen [10]. MMP up-regulation and excess matrix degradation can lead to inflammation, uncontrolled cell proliferation, angiogenesis, invasion and metastasis [11,12]. High MMP activity has been noted in various cancers [13,14]. Similarly, MMP inhibitors were shown to suppress tumor incidence, tumor growth and metastasis in prostate cancer [15,16]. DDRs enhance tumor cell adhesion, tumor growth and invasiveness, and shorten patient survival by stimulating MMP activity and MMP-mediated cell proliferation and migration [17,18]. However, the role of DDRs in the regulation of MMP secretion and activity in prostate cancer remained to be elucidated. The inhibitory effects of N-BPs on the Ras/ERK and PI3K/AKT signalling pathways were demonstrated in human cancer cells and endothelial cells [22][23][24]. However, the effects of N-BPs on DDR signalling and downstream signalling pathways in the regulation of MMPs in prostate cancer cells are not fully understood yet. In the present study, we aimed to investigate the effects of zoledronate, the most potent N-BP, in PC3 androgen-resistant prostate cancer cells, and to clarify the possible role of DDR signalling and DDR-associated downstream molecular pathways in these effects.

Cell culture
The PC3 cell line was a kind gift from Dr. K.S. Korkmaz, Ege University, Izmir, Turkey (the cell line was originally obtained from the American Type Culture Collection). Cells were routinely cultured in DMEM Ham's F12 medium supplemented with 5% FBS, 1% penicillin/streptomycin (5 mg/ml) and 1% L-glutamine (200 mM) in a humidified atmosphere containing 5% CO2 at 37 °C. Cells were treated in the presence of 10 μM or 100 μM [23,25] zoledronate for 2 or 3 days. Untreated cells were kept as controls in complete medium for the same period of time. At the end of the treatment period, both control and zoledronate-treated cells were collected and stored at -80 °C for further investigations.

Cell lysis and protein extraction
For protein extraction, cells were resuspended in 250 μl of lysis buffer (20 mM HEPES, pH 7.4, 0.1% Triton X-100, 0.2 mM EDTA, 300 mM NaCl). Cells were then collected from culture plates and transferred to Eppendorf tubes, incubated on ice and centrifuged at 13,000 rpm for 30 minutes, and cleared supernatants were collected. Protein concentrations in lysates were measured using the Quant-IT protein assay kit (Invitrogen, USA) according to the kit manual.

Gelatin zymography
Zymography samples were normalized to the protein concentration of each sample, loaded into 7.5% polyacrylamide gels containing 2 mg/ml gelatin, and subjected to electrophoresis. Following electrophoresis, SDS was removed from the gels by washing in 2.5% Triton X-100. Gels were then incubated at 37 °C for 48 h in incubation buffer (50 mM Tris-HCl, pH 8.0, 50 mM NaCl, 10 mM CaCl2, and 0.05% Triton X-100). After the incubation period, gels were stained with 0.2% Coomassie Brilliant Blue. Images of the gels were photographed using a Fusion FX7 imaging system. Gelatinase activity was detected as clear bands on dark backgrounds. Densitometric analysis of bands was performed using ImageJ software. Gelatin substrate digestion levels were quantified as relative proteinase activity (area × optical density/mg protein), as sketched below.
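The relative-activity arithmetic can be illustrated with a minimal sketch. The band measurements and protein amounts below are made-up examples standing in for values exported from ImageJ; this is not the study's analysis code.

```python
# Illustrative sketch of the densitometric quantification described above:
# relative proteinase activity = band area x optical density / mg protein.
# All numeric values are hypothetical examples.

def relative_activity(area_px: float, optical_density: float,
                      protein_mg: float) -> float:
    """Band area times mean optical density, normalized to loaded protein."""
    return area_px * optical_density / protein_mg

bands = {
    "control_proMMP9":     (1520.0, 0.84, 0.025),
    "zoledronate_proMMP9": (640.0, 0.52, 0.025),
}
for name, (area, od, mg) in bands.items():
    print(f"{name}: {relative_activity(area, od, mg):,.0f} a.u./mg")
```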
Real time quantitative PCR
PC3 cells were collected and total RNA was extracted from control and zoledronate-treated (100 μM, 3 days) cells according to the total RNA extraction kit (Qiagen RNeasy Kit, USA) protocol. Total RNA concentrations were determined using a Nanovette Beckman Coulter DU 730 Spectrophotometer (Beckman Coulter, USA). cDNA was synthesized by random priming using a Roche cDNA synthesis kit (Transcriptor High Fidelity cDNA Synthesis Kit, Roche, USA), and reactions were performed using a QuantiTect Reverse Transcription Kit (Qiagen, USA, 100 ng RNA per reaction) and a Roche LC 480 Real-Time PCR system (Roche, Germany) with primers for PCA1, MMP-2, MMP-8, MMP-9, TIP, Akt-1, BCLX, ERK1, JNK1, p38, K-Ras 4A, K-Ras 4B and the housekeeping gene 36B4. Primers are detailed in Table S2.

Immunocytochemistry
PC3 cells were routinely cultured in DMEM Ham's F12 medium as mentioned above. Cover slips were placed into 6-well plates, 1 ml of medium and 1 ml of cell suspension were added onto each cover slip in the plates, and the plates were shaken gently. PC3 cells were then allowed to adhere to the coverslips for 24 hours in a humidified atmosphere containing 5% CO2 at 37 °C. After the incubation period, 1 ml of medium was added onto the adhered cells. Adhered cells from the zoledronate group were treated with 100 μM zoledronate. Some of the coverslips were kept untreated to obtain control cells. Control and zoledronate-treated cells were incubated for 72 hours (3 days) in a humidified atmosphere containing 5% CO2 at 37 °C. After 3 days, the media were removed and cells were fixed using 96% ethanol for 15 min. After removing the ethanol, cells on the cover slips were treated with 10% formalin, then rehydrated through an alcohol series and washed with distilled water. They were then treated with trypsin solution (00-3008, Digest All 2A, Zymed, San Francisco, California, CA) for 5 min at 37 °C. Cells were incubated in a solution of 3% H2O2 for 5 min to inhibit endogenous peroxidase activity, followed by normal serum blocking solution. Cover slips were then incubated in a humidified chamber for 18 h at 4 °C with primary antibodies for TIP, DDR1, DDR2, MMP-2, MMP-8 and MMP-9, thereafter with biotinylated IgG, and then with streptavidin conjugated to horseradish peroxidase, for 15 min each, prepared according to the kit instructions (85-9043, Invitrogen, USA). Cover slips were finally stained with DAB (diaminobenzidine, 1718096, Roche, Mannheim, Germany) and counter-stained with Mayer's hematoxylin. Images of the cells were then obtained using a light microscope (Olympus BX-51, Tokyo, Japan) equipped with a high-resolution video camera (Olympus DP-71, Tokyo, Japan). Immunopositivity of each protein was evaluated by immunoscoring by two histologists, who were blinded to the treatment of the samples to prevent ascertainment bias. The immunoscoring procedures were performed semi-quantitatively, by considering the degree and number of positively stained cell cytoplasms and by scoring on the following scale: negative (0), weak (1), moderate (2) and strong (3). A mean score was calculated for each sample. Mean scores were then used to categorize immunopositivity as weak (<1.5) or strong (>1.5).

Statistical analysis
All data are expressed as mean ± S.E.M. Statistical analyses of the data were performed using SPSS software (IBM SPSS PASW Statistics 19 Fix Pack 1 Amos 19, Chicago, IL) for Microsoft Windows. Fold changes in mRNA levels were calculated using the Delta-Delta Ct method with 36B4 as an internal control (see the sketch below). The statistical significance of differences between groups was assessed using the paired Student's t-test. The chi-square test was used to evaluate the statistical difference in immunostaining between the groups. Differences were considered significant at p ≤ 0.05.
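As a minimal sketch of the Delta-Delta Ct fold-change calculation named above (36B4 is the reference gene stated in the text; the Ct values are made-up examples, not the study's data):

```python
# Illustrative Delta-Delta Ct calculation: fold change = 2^(-DDCt),
# with 36B4 as the internal reference. All Ct values are hypothetical.

def fold_change(ct_target_treated: float, ct_ref_treated: float,
                ct_target_control: float, ct_ref_control: float) -> float:
    dct_treated = ct_target_treated - ct_ref_treated   # normalize to 36B4
    dct_control = ct_target_control - ct_ref_control
    ddct = dct_treated - dct_control                   # treated vs. control
    return 2 ** (-ddct)

# Example: MMP-9 in zoledronate-treated vs. control PC3 cells
fc = fold_change(ct_target_treated=27.1, ct_ref_treated=18.0,
                 ct_target_control=24.9, ct_ref_control=18.1)
print(f"MMP-9 fold change vs. control: {fc:.2f}")  # < 1 means down-regulation
```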
Effects of zoledronate on DDR1 and DDR2 expressions
We investigated the effects of zoledronate on DDR1 and DDR2 expressions by western blotting. Zoledronate (100 μM) significantly decreased DDR1 expression compared to control cells at day 2 and day 3 (Fig. 1A), whereas zoledronate (10 μM) did not affect DDR1 expression at either time point. Zoledronate (100 μM) significantly reduced DDR2 expression at day 3 but not day 2 (Fig. 1A). However, DDR2 expression was not affected by zoledronate (10 μM) at either day. We also evaluated expression levels of DDR1 and DDR2 by immunocytochemical staining and by scoring immunopositivities semiquantitatively. Consistent with the western blotting findings, the immunocytochemical results showed that expressions of DDR1 and DDR2 were significantly decreased in 100 μM zoledronate-treated cells compared to control cells at day 3 (Figs. 1B and 1C). These results demonstrated that zoledronate inhibited both DDR1 and DDR2 signalling in PC3 cells.

Effects of zoledronate on expressions and activities of gelatinases
We examined the expression of both gelatinases (MMP-2, gelatinase A, and MMP-9, gelatinase B) by performing gelatin zymography. Pro and active levels of MMP-9 and active levels of MMP-2 significantly declined in 100 μM zoledronate-treated cells, but not in 10 μM zoledronate-treated cells, at day 3 (Fig. 2A). However, pro and active levels of both gelatinase enzymes were not affected by 10 μM or 100 μM zoledronate at day 2 (Fig. 2A). We also observed that MMP-2 production was lower than MMP-9 production and that the pro level of MMP-2 was very low in PC3 cells. Consistent with the zymographic analyses, western blotting analyses revealed the inhibitory effect of 100 μM zoledronate treatment on MMP-9 expression at day 3. The western blotting data showed that MMP-9 expression was significantly reduced in 100 μM zoledronate-treated cells, but did not change in 10 μM zoledronate-treated cells, at day 3 (Fig. 2B). In line with previous studies in different cancer cell lines, these results implied that zoledronate has inhibitory effects on gelatinase expressions in PC3 cells.

Effects of zoledronate on mRNA levels and expressions of collagenases (MMP-8 and MMP-13)
To test whether zoledronate also inhibits collagenases, we investigated the alterations of MMP-8 expression in zoledronate-treated PC3 cells at day 2 or day 3 by western blotting. While exposure to 10 μM zoledronate had no effect on MMP-8 expression at days 2 and 3, 100 μM zoledronate caused significant MMP-8 up-regulation at day 3 in PC3 cells (Fig. 3A). Despite a tendency towards increased MMP-8 expression in 100 μM zoledronate-treated cells at day 2, this increment did not attain a significant level (Fig. 3A). The western blotting results were confirmed by the immunoscoring data for MMP-8 at day 3, which showed that 100 μM zoledronate significantly increased MMP-8 expression at day 3 in PC3 cells (Fig. 3B). 100 μM zoledronate also caused up-regulation of MMP-8 mRNA levels at day 3 in these cells (Fig. 3C).
To test whether zoledronate may differentially modulate different collagenase enzymes, we also studied mRNA levels of MMP-13 by performing RT-PCR analyses. In contrast to the effects of zoledronate on MMP-8, MMP-13 mRNA levels were not affected by 100 μM zoledronate treatment at day 3 (Fig. 3D).

Effects of zoledronate on type I procollagen mRNA levels and expressions
In order to analyse the effects of zoledronate on TIP, the precursor of type I collagen, we designed a series of experiments and studied the effects of zoledronate on protein and mRNA levels of TIP. 100 μM zoledronate caused down-regulation of protein and mRNA expressions of TIP, but 10 μM zoledronate did not change these levels at day 3 (Fig. 4A). Immunocytochemical analyses fully confirmed these findings in 100 μM zoledronate-treated cells at day 3 (Fig. 4B). 100 μM zoledronate also suppressed mRNA levels of TIP in PC3 cells at day 3 (Fig. 4C).

Effects of zoledronate on Ras/ERK1 and PI3K/AKT signalling pathways and PCA-1 in PC3 cells
Collagen binding to DDRs triggers several downstream signalling pathways that can regulate the expression and proteolytic activity of MMPs in cancer cells. In this study, we addressed the question of whether down-regulation of MMP expression by zoledronate correlates with inhibition of DDR-initiated signalling pathways, including the prosurvival Ras/Raf/ERK MAP kinase and PI3K/AKT cascades, in PC3 cells. We therefore investigated the effects of 100 μM zoledronate on mRNA expressions of the proto-oncogenes K-Ras 4A and K-Ras 4B, the prosurvival genes ERK1 and AKT-1, and antiapoptotic BCLX at day 3. The results showed that 100 μM zoledronate down-regulated mRNA levels of both isoforms of K-Ras: K-Ras 4A and K-Ras 4B (Fig. 5). Zoledronate also caused significant inhibition of the mRNA levels of ERK1, AKT-1 and BCLX (Fig. 5). We also examined the effects of zoledronate on PCA-1 mRNA levels, as PCA-1 is a potential marker gene for prostate cancer. Our results demonstrated that zoledronate suppressed PCA-1 mRNA levels (Fig. 5). This evidence is consistent with the decreased mRNA levels of antiapoptotic BCLX and indicates that zoledronate alleviated the aggressiveness of prostate cancer cells in association with inhibition of PCA-1 production.

Effects of zoledronate on JNK1 and p38 signalling pathways in PC3 cells
We considered the potential regulatory roles of other MAP kinase family members, JNK and p38, in DDR-induced MMP up-regulation in PC3 cells. To determine whether zoledronate inhibits JNK and p38 expressions, we assessed mRNA levels of these two MAP kinase enzymes in 100 μM zoledronate-treated cells and control cells at day 3. 100 μM zoledronate changed neither JNK nor p38 mRNA expression in PC3 cells at day 3 (Fig. 5).

Discussion
In the present study, we studied the possible effects of zoledronate in the PC3 prostate cancer cell line. In the light of the findings from previous studies on N-BPs, we hypothesized that zoledronate can down-regulate MMPs in PC3 cells and that DDR1/DDR2 signalling and downstream pathways may have a role in the MMP inhibitory effect of zoledronate. We therefore assessed the expression of DDRs, the gelatinases (MMP-2 and MMP-9), and the major collagenases MMP-8 (collagenase-2) and MMP-13 (collagenase-3) in the presence or absence of zoledronate in PC3 cells for 2-3 days. We demonstrated that zoledronate inhibits DDR1/DDR2 signalling pathways and down-regulates MMP-2 and -9 expression and activities in PC3 cells.
Previous studies have similarly shown the inhibitory effects of N-BPs on the regulation of MMP synthesis and activity in various types of cancers. In an early study in the bone metastatic prostate cancer cell line subclone PC3 ML, alendronate was shown to markedly reduce MMP-2 and -9 secretion [26]. A subsequent study demonstrated that alendronate reduced mRNA and cellular levels of MMP-2 in osteosarcoma cell lines in a dose-dependent manner [27]. In another study in the osteosarcoma cell lines SaOS-2 and U2OS, risedronate, which is an N-BP, was reported to inhibit expression and activity of MMP-2 and -9 and tumor cell invasion [28]. Accordingly, zoledronate was shown to induce down-regulation of MMP-2 and -9 activities and to inhibit cell invasion and lung metastasis in a Ewing's sarcoma cell line [29]. In addition, another study in two breast cancer cell lines (MDA-MB-231 and MCF-7) with different metastatic potentials showed that zoledronate suppressed the expression of MMP-2, -9 and the membrane-type MT1- and MT2-MMP, and prevented migration and invasion of cancer cells [20]. Surprisingly, we found that zoledronate markedly induced up-regulation of MMP-8 expression but did not change MMP-13 expression in PC3 cells. Interestingly, MMP-8 was demonstrated to have a protective role in cancer through its ability to reduce the metastatic potential of malignant cells in mice and humans [30]. Similar beneficial effects of MMP-8 in cancer and metastasis were observed in breast cancer, tongue cancer and lymph node metastasis [31][32][33]. A recent study indicated that HGF (hepatocyte growth factor) variants inhibit proliferation, migration and invasion by inducing MMP-8 up-regulation and MMP-9 down-regulation in A549 human lung cancer cells [34]. Conversely, MMP-13 overexpression was demonstrated to induce tumor growth, invasion, and metastasis in numerous studies [35]. Despite their divergent effect profiles, the expression of both collagenases was shown to be regulated by DDR receptors in several studies [36,37]. In this context, we revealed for the first time that zoledronate stimulates up-regulation of MMP-8, which has a protective role against cancer, in PC3 human prostate cancer cells. This effect may also contribute to the useful effects of zoledronate in cancer therapy. In contrast to our data regarding MMP-13, a limited number of studies reported that N-BPs induce down-regulation of MMP-13 expression in cancer [38]. The differing effects of zoledronate on MMP-8 and MMP-13 suggest that each collagenase has a different expression pattern and substrate specificity, and that zoledronate affects each collagenase divergently. However, further investigations of MMP-8 will broaden our knowledge of the expression pattern and the mechanisms underlying the anticancer role of this collagenase. The regulatory role of DDR signalling pathways in cancer progression was demonstrated in numerous cancer types [7,8]. In this study, we showed for the first time that zoledronate, the most potent bisphosphonate, inhibits both DDR1 and DDR2 signalling in PC3 cells. This result points to the essential role of DDRs as novel therapeutic targets in the treatment of prostate cancer and also indicates that zoledronate down-regulates MMP expression at least partially by inhibiting DDR1 and DDR2 signalling and downstream pathways in PC3 cells.
The Ras family consists of four distinct Ras proteins (H-Ras, N-Ras and the K-Ras splice variants K-Ras4A and K-Ras4B). Ras protein mutations are associated with the activation of several effector pathways that mediate cell proliferation and malignancy [9,39]. DDR signalling has been reported to initiate several downstream regulatory pathways, including the Ras/ERK MAPK and PI3K/AKT cascades, in the cancer process [3,8]. Furthermore, N-BPs have been demonstrated to inhibit the Ras/ERK MAPK and PI3K/AKT signalling pathways in human cancer cells [24,40]. To explore whether zoledronate inhibits DDR-initiated downstream pathways that stimulate MMP expression under our experimental conditions, we examined the effects of zoledronate on the Ras/ERK MAP kinase and PI3K/AKT pathways and on antiapoptotic BCLX expression. Zoledronate down-regulated the mRNA levels of the K-Ras isoforms, ERK1, AKT-1 and BCLX. These findings suggest that inhibition of DDR1/DDR2 signalling and of the DDR-initiated downstream prosurvival and PI3K/AKT signalling pathways may be associated with the MMP-2 and MMP-9 down-regulation caused by zoledronate.

We also assessed the effects of zoledronate on the mRNA levels of the JNK and p38 genes of the MAPK pathway in PC3 cells. We found that zoledronate did not affect JNK or p38 mRNA levels. Similarly, a study on the human gastric cancer cell line SGC7901 reported that a new bisphosphonate derivative, CP, induces gastric cancer apoptosis via activation of ERK1/2 signalling without affecting the JNK and p38 signalling pathways [41]. However, a few studies have demonstrated that N-BPs induce p38 signalling in cancer cells [42,43]. According to our findings, zoledronate seems to inhibit DDR activation without affecting the JNK and p38 signalling pathways in PC3 cells. However, this finding needs further investigation, since we did not examine the protein expression of JNK and p38.

Furthermore, we examined the effects of zoledronate on PCA-1 expression in the present study. PCA-1 was recently described as a potential marker gene for prostate cancer. High PCA-1 expression has been reported to correlate positively with the invasiveness and severity of cancer [44]. In a study in DU145 prostate cancer cells, it was reported that PCA-1 transfection increased the levels of both antiapoptotic BCLX and DDR1, making the cells more invasive through MMP-9 up-regulation [45]. Consistently, in this study we demonstrated that zoledronate down-regulated PCA-1 mRNA expression together with the mRNA expression of BCLX and MMP-9 in PC3 cells. Similarly, Shimada et al. revealed that knockdown of the PCA-1 gene induced apoptosis by reducing BCLX expression in PC3 cells [45]. Furthermore, it has been reported that PCA-1 regulates the activity of the DDR1 downstream pathway and that the PCA-1/DDR1 axis is closely involved in the malignant potential of androgen-independent prostate cancer cells [45]. In line with this evidence, our results indicate that zoledronate may induce apoptosis of cancer cells and suppress invasion by inhibiting PCA-1 expression in PC3 cells.
Type I collagen is the most abundant fibrillar collagen and the main component of bone matrix in mammals [35]. In the present study, we therefore evaluated the effects of zoledronate on type I procollagen in PC3 cells, since collagen is the unique ligand of DDRs. Our results showed that zoledronate inhibited type I procollagen expression in parallel with DDR expression. Similarly, alendronate was reported to decrease plasma collagen levels in mice injected with the bone-metastatic prostate cancer cell line PC3 ML [26]. Our results suggest that, while zoledronate inhibits the activation of DDRs by collagen, it may also prevent bone resorption and metastasis.

Although in vitro studies suggest that zoledronate may be effective in prostate cancer, the evidence from clinical studies is controversial. A clinical study reported that long-term zoledronate treatment reduces skeletal complications and attenuates bone pain in patients with bone metastases secondary to hormone-refractory prostate cancer. In the same study, the researchers also stated that zoledronate is the only bisphosphonate to show a significant reduction in skeletal complications [46]. Yuen et al. reviewed clinical trials in prostate cancer patients with bone metastasis and reported no difference between the treatment and control groups in prostate cancer death, disease progression, or radiological and PSA response [47]. Similarly, a study in patients with castration-sensitive prostate cancer and bone metastases reported that zoledronate did not affect the risk of skeletal-related events [48]. On the other hand, another recent study (ZEUS: Zometa European Study) demonstrated that zoledronate is ineffective for the prevention of bone metastases in high-risk prostate cancer patients at 4 years [49]. However, further clinical and in vitro studies that elucidate the role of DDRs and MMPs at different stages of the disease may increase the therapeutic value of bisphosphonates in the treatment of prostate cancer. While clinical trials of specific MMP inhibitors were unsuccessful in various cancers, knowledge of MMP functions in cancer has gradually increased in the last decade. Recent studies focusing on novel regulatory roles of MMPs in cancer progression may lead to more effective therapeutic use of MMP inhibitors in cancer therapy in the future [50].

Taken together, the results of this study may provide new insights into the functions of DDR signalling and its downstream pathways as novel therapeutic targets in the regulation of MMP expression during cancer cell invasion and metastasis. Our data may also add to the understanding of the mechanisms underlying the anticancer effects of zoledronate in prostate cancer progression.
De-metaphorizing and becoming animal: when the animal looks back. A reading

The primacy acquired by nature in our current culture has given way to several issues not strictly connected with an immediate and 'purely' ecological interest: there is rather the need to question how we conceive the animal, with a focus on the possibility of transcending Western cultural heritage. When trying to give a literary representation of the animal, it is particularly important to adopt some measures which, following the trajectory of a genuine, positive 'becoming-animal', will safeguard its independence and avoid reducing it to metaphorically anthropomorphic representations. This essay intends to underline how, from this viewpoint, a few novels coming from the post-colonial area, where animal tales often show how interwoven humans and animals are and how they are constructed in relation to each other, supply interesting case studies. My interest focuses particularly on Canadian Marian Engel's Bear, where the writer tries to deal with unspeakable subjects between a woman and a bear: through an act of radical approach to its physical reality, the former comes first to recognize the latter, and then to accept its Otherness. On its side, the bear seemingly 'shows' a post-colonial attitude, subversively resisting a typically Western anthropomorphic allegorization. Holding fast to itself and its animality, choosing in a way to 'stay mute', the bear keeps the role of a 'perceptive catalyst', 'thought-producing' and thus 'world-changing', according to an aesthetics of perception suggesting that the animal gaze might be the best perspective from which to observe not only our world, but especially our own selves.

Introduction

The primacy acquired by nature in the culture of our times has brought to the fore several issues which are not strictly connected with an immediate and 'purely' ecological interest. It is not only a question of invalidating the Cartesian dualism, which has deeply influenced the modern perception of the world, with a view to fostering, broadly speaking, a new 'ecological sensibility'. There is also, indeed, a wholly post-modern theoretical emphasis on how "relations to the non-human world are always historically mediated" (SOPER, 1995, p. 4) and thus on how our culture has developed its tools to build and classify the animal world, 'semanticizing' it in its relationship with the human one. Beyond the ecological interest, there is the need to question how we conceive the animal, with a focus on the possibility of transcending Western cultural heritage, underlining and demystifying the "human blinkeredness rather than human fascination with the non-human world" (BAKER, 2000, p. 16). When trying to give a literary representation of the animal, it is particularly important to adopt some measures which, following the trajectory of a genuine and positive 'becoming-animal' (as Deleuze and Guattari contend in Mille plateaux), will safeguard the independence of the animal itself and avoid reducing it to metaphorically anthropomorphic representations.
From this viewpoint, some novels coming from the post-colonial area, where animal tales often show how interwoven humans and animals are and how they are constructed in relation to each other, supply interesting case studies. The attention here is particularly focused on Canadian Marian Engel's Bear, where the writer tries to deal with unspeakable subjects between 'man' (here a female character, Lou) and 'animal' (namely, a bear): through an act of radical approach to its physical reality, the former comes first to recognize the latter, being then able to accept its Otherness even on the linguistic level. On its side, the bear seemingly 'shows' a post-colonial attitude, subversively resisting a typically Western anthropomorphic allegorization and avoiding any easy reduction to a passive victim.

Considered one of the most articulate feminist fiction-writers of contemporary Canada, Engel was insightfully skeptical about the existence of a monolithic Truth, usually inferred from a reality based on a strictly dichotomic structure which inescapably divides it into "black-white, pro, contra [so that] all the time the shades are neglected" (VERDUYN, 1999, p. 62). Engel was rather concerned with what lies 'between the lines', those too often 'neglected shades' of a multiple reality, in the analysis of which one can find the only way toward a deeper understanding both of others and of ourselves.

Through Bear, she offers a peculiar interpretation of the relationship between a human being and an animal, and this interpretation tends to go against a traditional metaphorical exegesis of the latter. Such a relationship takes place in a space which is real and metaphorical at the same time: a sort of 'free zone', a border area whose margin starts undergoing a redefinition due to the upsetting which comes from the close intimacy between the woman and the bear. The situation, interestingly, does not provide any specific answer, but, philosophically, poses questions: that is, through a perplexing and disowning process, it makes us humans think in a different way, introducing new options and alternatives never thought of before, till we are able (in Cixous' words) to think the unthinkable 1.

To briefly recall the novel's plot: Lou, a Toronto archivist, is sent to northern Ontario to catalogue Colonel Cary's vast and valuable library, which, along with his estate, Pennarth, and the island itself, has been left to the Historical Institute Lou works for. Once there, she is caught by surprise by the unexpected presence of a male bear, chained up in a shed behind the house.

Bear: reading between the lines to go beyond the line

From a certain point of view, this novel seems to proceed counterpointing every metaphorical implication with its literal meaning. At the beginning, Lou feels herself metaphorically aged (her work, we can read, "had aged her disproportionately, [...] she was as old as the yellowed papers she spent her days unfolding"), but near the end we find her literally young again, rejuvenated: naked, in front of her mirror she can see her body is that "of a much younger woman" (ENGEL, 1987, p. 19 and 134). And there is more. In the incipit Lou is "like a mole, buried deep in her office", plunged in a thick metaphorical night: sunrays only seldom enter the room, heavy with dust, and the only odour she perceives is "[the] stink of a winter of nicotine and contemplation" (ENGEL, 1987, p. 11).
In the end we find Lou in a totally reversed attitude: she is driving (that is, she is not static, but dynamic), surrounded by a "brilliant night, all star-shine" (ENGEL, 1987, p. 141), smelling pure natural scents around her. And last, but not least, she realizes it is time to shake herself out of the metaphorical winter her life is wrapped into, and leaves for a new task at the beginning of summertime.

Lou is a lonely and dissatisfied woman, who has realized that "the image of the Good life long ago stamped on her soul was quite different from this

1 See Cixous (1993). In this respect Derrida's point of view is also interesting. To him, it is the very animal which, more than anything else, encourages the rethinking of the human 'subject', who is confronted not only with a paradoxically homogeneous otherness, but with the real autre de soi (humanistic philosophy, in fact, underlining the responsibilities of an individual toward the Other, regards as a thinkable 'alien subject' only a human one: the Other is, inevitably, an other human being). See Derrida (1989, 2006).

A gloomy shadow seems to be shrouding her past and hanging sulkily on her future, darkly loomed by the name of "Lou's predecessor" (ENGEL, 1987, p. 13), Miss Bliss, whose life has been anything but blissful, since she has long ago taken to drink. To avoid all the "vulgarities of the world" (ENGEL, 1987, p. 19), Lou has chosen to spend her life working hard, thus secluding herself from any contact with other human beings: "Oh, she was lonely, inconsolably lonely; it was years since she had human contact. She had always been bad at finding it"; "in a fit of lonely desperation, she had picked up a man in the street"; "the Director fucked her weekly on her desk [...]. She had allowed the procedure to continue because it was her only human contact" (ENGEL, 1987, p. 2).

As Adriana Cavarero insightfully observes, following Hannah Arendt, an individual is not given without the Other, since "the relational essence of identity always assumes [...] the 'other' as necessary" (CAVARERO, 2005, p. 38, my translation). The highest price Lou had to pay in leading such an isolated life has been to lose contact with herself, with her interior life ("she was still not satisfied that this was how the only life she had been offered should be lived"), and with a present which seems to fade "from her view", becoming "as ungraspable as a mirage" (ENGEL, 1987, p. 20).

Her journey to the island can thus be seen as a moving from the artificial, sterile monotony of her 'winter' life into a period of 'summer anarchy', an epiphanic and eschatological journey toward, in Margaret Laurence's words, inner freedom and strength, and ultimately toward a sense of communion with all living creatures. Some critics interpreted this novel as a blurring of the differences between the primitive and the civilized, the animal and the human soul, and as a border-crossing. Indeed, Lou crosses a border, but this is not, say, a definitive action. To be clearer, let me introduce the scenario of today's customs: once one has crossed the first border, one enters the 'free area' and has to go over a second border to move into, say, another country. As for Lou, she crosses the first border and enters the 'free area', but she does not go on into the Other's realm. Rather, she meets such an Other (in this case, the bear) inside a sort of 'grey zone' where it is possible, and desirable, to uncover the whole process of natural revelation.
The border, in this novel, is not just a metaphor, since we find it literally at the very beginning of the narrative, where Lou is described as she crosses a river: "The road went north. She followed it. There was a Rubicon near the height of land. When she crossed it, she began to feel free" (ENGEL, 1987, p. 18). Clearly, the use of antonomasia 2 here highlights the importance of this moment, which is also marked by several words semantically linked to some key concepts of this novel: first, there is an upward movement (the Rubicon is said to be "near the 'height' of land" and, once having crossed it, Lou "sped 'north' to the highlands" (ENGEL, 1987, p. 17-18)); second, the connotation 'brightness', hinted at through the word 'lightheaded', which proleptically links (as we shall see) the going-beyond-the-line action to the house Lou will stay in. Moreover, this last word testifies to the 'healthy madness' 3 the heroine is going to experience during her stay on Cary Island. At first, be it said, she feels uneasy about the dizziness she is starting to go through, and tries to tame it, holding fast to her sense of order ("She always attempted to be orderly, to catalogue her thoughts and feelings, so that when the awful, anarchic inner voice caught her out, her mind was stocked with efficacious replies" (ENGEL, 1987, p. 83)), or concentrating on the practical task of cataloguing Colonel Cary's reputedly vast and valuable library ("Book, book. Always when these things happen, pick up a book" (ENGEL, 1987, p. 64)). But in the end she yields to a sense of madness and anarchy till she finds herself led to a higher sanity, and with "an odd sense of being reborn" (ENGEL, 1987, p. 19).

2 Lou is said to go over "a Rubicon", which metaphorically means "a limit that when passed or exceeded permits of no return and typically results in irrevocable commitment". Available from: <http://www.thefreedictionary.com/rubicon> and <http://idioms.thefreedictionary.com/Rubicon>. Accessed on: 11 Jan. 2007.

A third key concept, coming out of the Rubicon-crossing episode and closely linked to the animal world, is that of smell. The word 'smell' follows Lou along her journey: when on the ferryboat to the island, she will remember a man telling her how "it was now impossible to find a woman who smelled of her own self" (ENGEL, 1987, p. 19). Once in the house, Lou perceives "smell of stove oil. Smell of mice. Smell of dust" and then "another smell, musky, 'unidentifiable' but good" (ENGEL, 1987, p. 24). Moreover, strongly opposing the "stink of nicotine" enveloping her in the incipit, at the end of summer she feels as if "her flesh, her hair, her teeth and her fingernails smelled of bear, and this smell was very sweet to her" (ENGEL, 1987, p. 119-120). What we have here is a real smell, not a metaphorical one, and, interestingly, it is this very physicality which opens Lou to the Unknown, fostering the first encounters between the two protagonists: as the native Lucy Leroy says, "Bear lives by smell. He like you" (ENGEL, 1987, p. 49); besides, upstairs in Lou's bedroom, the bear "sat for a long time staring at her, smelling at her" (ENGEL, 1987, p. 73)
and, in a second moment, sniffing on her a man's smell (after a sexual intercourse), the animal will not enter the house (ENGEL, 1987). We are not, however, allowed to read this behaviour in a romantic way: even if the idea of the bear who chooses not to stay with his beloved, smelling on her the signs of her infidelity, is a fascinating one, we must not forget that Marian Engel did not want to give anything but a physical, real portrait of the animal.

When Lou reaches her destination, she immediately realizes how peculiar the house she is going to live in is: an intriguing place where opposite forces will, at the same time, collide and collude, making dichotomies merge, or dissolve. The building ("a classic Fowler's octagon", that is, built according to phrenological dictates (ENGEL, 1987, p. 22)), on one side, reveals itself to be the product of what Lou calls "colonial pretentiousness" (ENGEL, 1987, p. 36), whose artificiality is totally out of context in the 'monstrous' 4 Canadian landscape. But, on the other side, it shows the stamp of its last owner, Colonel Cary's niece, a sagacious and resourceful woman who, giving away much of the family finery, asserted her anti-Victorian/anti-colonial attitude in favour of a deeper contact with the island and its native inhabitants, Lucy Leroy and John King.

The octagonal mansion can be considered as the reification of a metaphorical border area. In it the two main characters, though coming from different worlds, will be able to meet and come into contact. Even the big central stair is both metaphorically and literally fundamental, so this playing with the figurative and the literal sense somehow goes on, since it leads to the 'head' of the house, whose name, not accidentally, is Pennarth, an ancient Welsh word meaning "bear's head" (ENGEL, 1987, p. 64); thus, the phrenologic structure and the name of the house are closely linked, and they both call forth the presence of the bear. Moreover, the stair sums up two movements, directionally opposite but conceptually similar: an upward one (the bear going upstairs to Lou's bedroom) and a downward one (the sunlight flooding down from the lantern above), so that we are interestingly and meaningfully linked back to the epigraph: 'Facts become art through love which unifies them and lifts them to a higher plane of reality; and in landscape, this all-embracing love is expressed by light'. Here, all the key concepts of the novel are proleptically implied: the 'all-embracing love', the raising, both metaphorical and literal, toward a higher level, the landscape, "her [Lou's] kingdom" (ENGEL, 1987, p. 29), and the light.

This building is therefore a place where opposite forces can live together, an 'in-between' space where a more intense communion with nature, and specifically with one of her delegates, can take place. The bulky bear seems somehow to belie what Margaret Atwood said in Survival, that "animals in literature are always symbols" (ATWOOD, 2004, p. 90). Engel's bear is not, or at least not in any simple way. It is an animal with matted fur and rotting teeth, and no vocabulary beyond grunts and whimpers. In the beginning, Lou tries many times to reduce it to a metaphor, which is an anthropocentric attitude indeed: "[it was] not a creature of the wild, but a middle aged woman", "[a] near-sighted baby", "compar[able] to the man", "a strange, fat, mesomorphic mannikin", "solid as a sofa, domestic", "lover, God or friend" (ENGEL, 1987, p. 36).
This is surely typical of the human being, who usually tends to interpret the Unknown according to the rational categories of the Known, which are thus powerfully confirmed: the fundamental and severe ideological criticism, the only one able to subvert and revise the traditional cultural taxonomies, is thus baffled.

It is worth observing that Lou, significantly, does not give the animal a specific gender nor a name. In fact, though when referring to the animal she uses the male personal pronoun 'he', we cannot ignore all those passages in the novel where, patently disregarding gender-based grammar rules, the bear is variously described as "indubitably male", "a large-hipped woman" or even "a [...] baby", till in the end Lou herself admits that "she could paint any face on him that she wanted" (ENGEL, 1987, p. 72). Also, it seems to me that in refusing to give the animal a name, and making her heroine call him, commonly, Bear, Engel declines to ratify the God-given, wholly human duty which accords Man the right to subdue and have dominion over everything on Earth. Engel was in fact concerned about the somehow pretentious implications inherent in the act of naming, as we can read in one of her cahiers: "I came here to [...] what, unwind. Stop naming things", and further, about writers' pretensions to be something else, she writes "[a snipe] is not trying to be a writer [...] He does not suffer from a lust to name things" (VERDUYN, 1999, p. 428-431, my italics).

Such an attitude is, in my opinion, related to the refusal to condemn the animal to a condition of muteness 5 (ENGEL, 1987, p. 40), and it is also far from lending itself to anthropocentric interpretations. We are in front of a beast that is neither submissive nor subordinate at all. Holding fast to itself and its animality, refusing any easy allegorization, our bear manages to reveal its own different truth to us. But how?

The animal gaze

We know it: animals cannot speak human languages, and they cannot write either. But they force Man to recognize an 'other' place, an 'other' dimension, which is still very, very near, adjoining. To say it in Deleuzian words, this dimension is a sort of 'contiguity' studded with stops, each representing a possible line of flight. Such a 'contiguity', situated between two sets, belongs to none of them though it involves them both, and it is thus the only way to subvert dualisms from the inside. 6 From a pragmatic point of view, the becoming-animal in literature would be a 'false' escape way, because unfeasible: it is not possible, in fact, to reproduce an authentic animal voice in a text, as Engel herself writes in one of her notebooks: "I'd rather see authors doing their own voices, not pretending to be fishermen and farmers unless they bloody well know what fishermen and farmers think. (Not pretending to be bears, either)" (VERDUYN, 1999, p. 438).

The challenge is, therefore, when trying to give a literary representation of the animal, to avoid reducing it to anthropomorphic or metaphorical implications, an action which tends, as we have seen, to restore the Unknown to the Known, rendering it familiar, flattening all the differences. We have to respect the animal's independence, thus following the trajectory of a genuine and positive 'becoming-animal'.
Lou realizes quite soon that it is impossible to have any linguistic contact with the bear. At first, she tries to approach him in a typically human way, asking herself "What do you say to a bear?" and then saying: "Hello" (ENGEL, 1987, p. 33). But she comes to the predictable conclusion that "I am a woman [...]. That is a bear. Not a toy bear, not a Phoo bear, not an Airlines Koala bear. A real bear" (ENGEL, 1987, p. 34). Later on, the narrator tells us how Lou asks herself if the animal, "like herself, visualized transformations, waking every morning expecting to be a prince, disappointed still to be a bear", but concludes this reasoning with a resolute "she doubted that" (ENGEL, 1987, p. 89).

5 According to Benjamin, the act of naming causes 'nature's muteness', which the philosopher refers to when talking about the deep unhappiness ('Traurigkeit') of nature. Nature is sad (traurig) because it is subject to the Word which transfixes it, depriving it of its own gaze. For further reading see Benjamin (1979) and also Derrida (2006).

As a human being and as a woman, she will talk to the bear in her own language, but she will not try to force a similar communicative system on such a peculiar conversational partner: "What does he think? she wondered [...] No, back to the beginning: how and what does he think?" (ENGEL, 1987, p. 59-60), till she admits that "A bear is more an island than a man [...]. To a human" (ENGEL, 1987, p. 60, my italics).

Since it is not possible to have a linguistic interaction between the two characters, the visual encounter grows in importance. Through the gaze, in fact, Lou and Bear have their first, and extremely significant, contact. While being outside "to survey her kingdom", she tries unsuccessfully to deal with the animal, probably "still hibernating". So she sits down to have her breakfast, when all of a sudden "she realized the bear was standing in his doorway staring at her": Bear. There. Staring. She stared back (ENGEL, 1987, p. 34).

As we can clearly see, it is not just the woman who is able to 'look at'; the animal, too, is allowed to perform an action which is usually considered a human exclusive right. Here, human being and animal are both on the same level: they are both subjects and are able to become somehow aware of each other. Both on a philosophical and an anthropological level, the gazing theme (to look at, or be looked at), so important in this novel, is closely linked to that of knowledge. Engel was strongly against the presumption of those pretending to be 'always' able to analyze and understand 'everything'. 7 That is the reason why the uninquiring and unreadable gaze of the bear is so important to Lou. In his "weak eyes" (ENGEL, 1987, p. 69), which represent, as Derrida would say, "le point de vue de l'autre absolu" (DERRIDA, 2006, p. 28),
she cannot see any analyzing intent: the animal just lies beside her, "staring at her, smelling at her", with no questions, no claims, no impositions. The bear is not like Canadian society, which, as Engel tells us, demands that its women should fit a lessening and alienating model; it does not try to judge her, thus freeing her from any conditioning, inhibition, or limitation. The other's gaze is thus no more a means to express a severe criticism, but rather the chance to be narrated from another point of view, which gives the individual the possibility to gain a deeper self-knowledge. In her everyday living together with an Otherness, represented in the novel by the bear, Lou lets it 'read' her life and 'tell' it back to her, receiving in this way a tale which reveals to her "the finiteness in all its fragile uniqueness" (CAVARERO, 2005, p. 10, my translation). Lou begins therefore to notice details, differences, peculiarities, being wrapped in a temporal dimension which is no longer prone to the constant flow of life; a suspended time not made up of more or less frantic events, but of a calm and serene everyday life, which gives human beings the possibility to concentrate on themselves and to emerge as individuals. Or, to say it in Cavarero's words, instead of the fugitive and discontinuous time of actions, Lou's stay in a 'border area' (such as Pennarth can be considered) offers her the unchangeableness and the duration of narration (CAVARERO, 2005, p. 39, my translation).

Along with the process toward a more complete knowledge of her real identity, Lou understands that the solution to her existential issue lies in the proper interpretation of the pronoun 'others': not the other human beings, but an Otherness she is learning to deal and live with, completely respecting its diversity. She, therefore, will somehow be able to bridge the linguistic gap along the lines of the becoming-animal: she does not mimic, nor does she try to psychologically identify with, the bear. She will be very close to the animal inside that border area in which something can and must pass from one to the other. And such a 'something' cannot be explained, that is, reduced to any interpretable meaning, but only sensed: "What had passed to her from him she did not know" (ENGEL, 1987, p. 136). In this respect, I think it is interesting to underline how here, at this very moment when the novel seems to slightly open itself to possible metaphorical interpretations, Engel, ironically, suggests not to do so, hinting that "certainly it was not the seed of heroes, or magic, or any astounding virtue" (ENGEL, 1987, p. 136).

Perceiving an 'other' code, Lou is able to go beyond the limit, to go over the conventional physical and mental perspectives. She is able to draw her own line of flight, that is, to 'deterritorialize' herself, which does not mean to avoid one's responsibilities, but to perform an active detachment. Let it be said that, according to Deleuze and Guattari, the becoming-animal is not a question of metamorphoses, not simply a mere changing from one condition to another, or going from one starting point to a point of no return: there must be, in fact, a process of 'reterritorialization'. In this novel, Lou 'deterritorializes' herself during her love nights with the bear, but also finds her way back to a 'reterritorialization' ("she continued to be herself" (ENGEL, 1987, p. 136)),
without which the code-exchanging has no sense at all. Lou is wholly aware she can't stay in the border area forever; that is why she tells the bear "you have to go to your place and I to mine" (ENGEL, 1987, p. 131). And she also painfully realizes that trespassing the limit, thus 'invading' the dominion of the 'Other', is not allowed: when she tries to make the bear penetrate her, he will hurt her.

The importance of the bear (that is, of the Other) in this process of self-knowledge is undeniable. In the course of the narrative, it takes on more and more not-at-all-humanizing but positive values: if, in the beginning, Lou describes it as "passive", "stupid and defeated", "a middle aged woman defeated to the point of being daft" (ENGEL, 1987, p. 35), afterward it is "wise and accepting", as if, "like the books, [it] knew generations of secrets", "an enormous, living creature larger and older and wiser than time, a creature that was 'for the moment' her creature, but that another could return to his own world, his own wisdom" (ENGEL, 1987, p. 117). This extraordinary combination of different qualities makes our bear a particularly interesting living being, a peculiar example of a 'post-modern creature', which does not respect the boundary between man and animal. Neither the aesthetics of modernism, nor the philosophical values of humanism can in fact easily come to terms with those hybrid forms which subvert the boundary concept, particularly the one between humans and non-humans. And this is because in the axiological system of modernism and modernity, according to Steve Baker (2000, p. 99), "there was a widespread urge to homogenize and systematize, to render the world intelligible by eliminating or suppressing inconsistencies, impurities and dissimilarities".

Thanks to the bear, Lou experiences an opening in herself, and even if she does not exactly know what passed from it to her, "for one strange, sharp moment she could feel [...] she knew what the world was for. She felt not that she was at last human, but that she was at last clean. Clean and simple and proud" (ENGEL, 1987, p. 136-137).

Conclusion

In this sui generis appropriation of a role which is usually considered a human prerogative (particularly a male one), the bear is not calling forth a return to the origins, as if he could be read as the beastly or primitive man dispossessing the civilized one. With such an ambivalent declaration of similarity 'and' dissimilarity at the same time, it just points to a blurred form, to the existence of that border area where you cannot say what animal or human essence is. An area, according to Deleuze and Guattari, of indetermination, of indiscernibility, where something can pass from man to animal and vice versa, just because things, beasts, and persons have reached the very point that endlessly precedes any natural differentiation (DELEUZE; GUATTARI, 1991).
The attention I have drawn to the gaze theme in this novel, along with a Deleuzian reading of the relationship which takes place between the human and the non-human according to the becoming-animal concept, leads me to interpret the bear as the Derridean autre absolu, the real autre de soi, the one undertaking the philosophical task of posing existential questions, keeping the role of a 'perceptive catalyst', 'thought-producing' and thus 'world-changing'. All this chimes in with an aesthetics of perception suggesting that the animal gaze might be the best perspective from which to observe not only our world, but especially our own selves, since such a gaze, giving man back the limits of his perception and of his human essence, testifies to the lack of his language and, literally, to the senselessness of his superiority (PRETE, 1993, p. 170, my translation).
Experimental Study on the Effect of Using Smartphones on Pedestrian Flow in Straight Corridors

With the development of science and technology, smartphones are widely used in people's daily lives. An interesting phenomenon is that many pedestrians use smartphones while walking in public places, which not only causes injuries and, in some cases, deaths, but also affects pedestrian traffic safety. At present, most studies focus on pedestrians in the normal state, i.e., not using phones while walking. Little research has been done on pedestrian flow when phones are in use. Therefore, an experiment in which pedestrians used phones while walking in a straight corridor was conducted to study the movement characteristics and compare them with the normal condition. From the trajectories, lane formation can be found in all experiments, and the trajectories of phone users are more chaotic. When pedestrians distract themselves by using phones, they walk more slowly and the flow is lower, leading to a longer egress time through the corridor. The distance from the boundary is defined as the shortest distance between a pedestrian and the wall. When pedestrians use phones, they try to avoid collision with the wall and walk away from it, so this distance is larger than in the normal condition. The nearest pedestrian distance is defined as the nearest distance among all pedestrians. When pedestrians use phones, they distract themselves and do not have enough time to avoid collisions with others, so the nearest pedestrian distance is smaller than in the normal condition. Our findings may offer new insight into pedestrian flow when pedestrians distract themselves by using phones, talking with others or thinking deeply, which can enrich empirical data and contribute to simulation models.

Introduction

With the development of science and technology, smartphones have become more and more popular. According to a report released by the research firm Strategy Analytics (SA), as of June 2021 there were 4 billion people around the world who own smartphones [1], and smartphones have become a new addiction among adolescents [2]. Smartphones bring convenience to people's daily lives, but cause many traffic safety problems when pedestrians use them on the road. Cities like Washington D.C. and Chongqing have set up sidewalks reserved for smartphone users to prevent accidents [3]. Pedestrians using smartphones on the road have attracted wide attention and become a hot topic in traffic safety.

In pedestrian evacuation dynamics, much research focuses on the safety of pedestrian traffic by studying pedestrian movement characteristics, including the fundamental diagram [4][5][6][7][8], lane formation [9][10][11] and so on. In these studies, distracted behaviors [12] are ignored. Some studies have considered the effects of mobile phone use on pedestrian traffic safety. Nasar et al. [13] analyzed data from the US Consumer Product Safety Commission on injuries in hospital emergency rooms from 2004 through 2010. They found that mobile-phone-related injuries among pedestrians increased relative to total pedestrian injuries and that using a mobile phone while walking puts pedestrians at risk of accident, injury or death. Pesic et al. [14] conducted field observations to find out how the use of mobile phones (talking, texting and listening to music) affects the behavior of pedestrians while they are crossing the street.
The results showed that pedestrians who use mobile phones while crossing the street behave less safely than pedestrians who do not, and that their safety depends on the way they use the phone. Talking on a mobile phone has the greatest effect on the unsafe behavior of pedestrians; texting or viewing content also influences pedestrians' behavior, though less than talking, while listening to music has the smallest impact. Hatfield et al. [15] conducted an observational field survey to compare the safety of crossing behaviors for pedestrians using, versus not using, a mobile phone. The results show that talking on a mobile phone is associated with cognitive distraction that may undermine pedestrian safety. Melissa et al. [16] used multi-agent models that relate reported changes in the locomotion patterns and sensory abilities of distracted pedestrians to the corresponding parameters of a commonly used crowd simulation steering approach. They found that even a few of these behaviors significantly alter the flow patterns of the simulated agents. Besides, virtual reality is widely used to study the impact of smartphone distraction on pedestrian movement. Schwebel et al. [17] considered the impact of distraction while talking, text-messaging, or listening in an interactive, semi-immersive virtual pedestrian street. They found that pedestrians distracted by music or texting were more likely to be hit by a vehicle and that multimedia devices have a small but meaningful impact on college students' pedestrian safety. Sobhani et al. [18] considered three different conditions: 1) not distracted, 2) distracted with a smartphone, and 3) distracted with a smartphone with a virtually implemented safety measure on the road. They found that females show more dangerous crossing behavior, especially in the distracted condition, and that the smart LED light safety treatment indeed improves the safety of distracted pedestrians. Joan et al. [19] studied children crossing intersections in a VR program considering distractions (e.g., noise, pedestrians, a park) and discussed possibilities for future VR interventions for injury prevention.

At present, most controlled experiments focus on the movement of pedestrians in the normal walking condition; only a few consider the effect of distracted behaviors, and those mainly in field observations and VR environments. Empirical data applicable to simulation models are also lacking. Hence, it is necessary to investigate distracted behaviors under controlled experimental conditions, which can supply empirical data on pedestrian flow as well as support model development and validation. What's more, the straight corridor is a common geometric structure and is widely used for collecting pedestrian movement data [5,[20][21][22][23]. Based on these considerations, an experiment in which pedestrians used phones while walking in a straight corridor was conducted to study the movement characteristics and compare them with the normal condition. In Sec. 2, the experimental setup is briefly described. In Sec. 3, the movement characteristics are analyzed and the results are shown. Finally, Sec. 4 summarizes the paper and draws conclusions.

Set of the experiment

The controlled experiments were conducted in September 2019 at the University of Science and Technology of China in Hefei, China. A total of 94 adults (undergraduate and graduate students) were recruited to participate in the experiment. The male-to-female ratio was about 1:1. The illustration and a screenshot of the experiment are shown in Fig. 1.
The movement of the adults in a straight corridor under controlled conditions was studied. The length and width of the channel are 10 m and 1.8 m respectively; it was built using partitions with a height of 1.8 m and a width of 0.8 m. The coordinate system is established with the lower left corner of the channel as the origin. In order to investigate the effect of different measurement areas on the results, three regions in different positions were selected, as shown in Fig. 1(a). The entrance of the corridor (x∈[0, 2m]) is region a, the middle (x∈[4m, 6m]) is region b and the exit (x∈[8m, 10m]) is region c. During the experiment, the volunteers were asked to wear a red or orange hat so that their trajectories could be extracted precisely with the software PeTrack [16].

The experiments were mainly conducted in two types of scenes. One is the normal walking experiment. The volunteers were asked to stand in line in the waiting area before starting the experiment. Then the starter gave a command to start the experiment. During this experiment, the volunteers were asked to walk as usual through the straight corridor in the normal state; behaviors such as talking to others, using smartphones and pushing others were not allowed. The other is using smartphones while walking. The volunteers were asked to stand in line in the waiting area before starting the experiment. They were required to distract themselves by using smartphones. The specific ways of using the smartphones were not restricted and followed personal preferences, such as games, chatting and so on. When the starting command was given, they kept using the smartphones through the straight corridor. The whole process of the experiments was recorded by two digital cameras with a resolution of 1920×1080 and a frame rate of 25 fps.

Trajectories

In the experiments, the pedestrians were asked to walk through the corridor and their trajectories were obtained. Based on the trajectories, the individual instantaneous velocity v_i(t) is calculated with Eq. 1:

v_i(t) = |x_i(t+m) − x_i(t)| / m,   (1)

where x_i(t+m) is the position of pedestrian i at time t+m, m is a constant, and in this paper m = 0.2 s [17]. The pedestrians' trajectories obtained from the video recordings, colored by instantaneous speed, are shown in Fig. 2 for the different scenarios. Different colors represent the speed of the pedestrian at a certain position. The figure shows that three lanes are clearly visible when the pedestrians walk in the same direction, which means three pedestrians can walk side by side in the corridor. When the pedestrians walk normally, the lane formation is more obvious and the crossing of trajectories is less chaotic. From the color of the trajectories, the speed of normal walking is higher than that of using smartphones while walking.
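As an illustration of Eq. 1, the following minimal sketch computes the instantaneous speed from one head trajectory sampled at the 25 fps of the recordings. It is not code from the study: the function name, the array layout and the sample trajectory are hypothetical, chosen only to make the definition concrete.

```python
import numpy as np

FPS = 25             # camera frame rate used in the experiment
M = 0.2              # time offset m in seconds (Eq. 1)
STEP = int(M * FPS)  # 0.2 s corresponds to 5 frames at 25 fps

def instantaneous_speed(traj):
    """Speed from Eq. 1 for one pedestrian.

    traj: (T, 2) array of head positions (x, y) in metres,
          one row per video frame.
    Returns an array of length T - STEP with |x(t+m) - x(t)| / m.
    """
    disp = traj[STEP:] - traj[:-STEP]         # displacement over m seconds
    return np.linalg.norm(disp, axis=1) / M   # speed in m/s

# Hypothetical example: a pedestrian walking at 1.2 m/s along x
t = np.arange(0, 10, 1 / FPS)
traj = np.column_stack([1.2 * t, np.full_like(t, 0.9)])
print(instantaneous_speed(traj)[:3])  # ~[1.2, 1.2, 1.2]
```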
To study the lane formation in the corridor, the probability distributions of the pedestrians' y-positions are shown in Fig. 3. Three obvious peaks, corresponding to three lanes, can be seen when the pedestrians walk normally. The peaks on the two sides are higher than the one in the middle. The positions of the three lanes are 0.3 m, 0.9 m and 1.4 m, respectively. When using smartphones while walking through the corridor, three peaks are still obvious but lie relatively close together. The pedestrians on both sides of the corridor moved towards the middle, which increases the probability of pedestrians being close to the middle lane. The positions of the three lanes are 0.3 m, 0.8 m and 1.4 m, respectively. Therefore, lane formation appears in both scenarios and the lane positions are similar. When the pedestrians used smartphones while walking, they looked down at the phone and might have ignored the walking route. The pedestrians on both sides of the corridor might have relied on their own perception, making the actual trajectories deviate to a certain extent and resulting in intersections between the middle lane and the lanes on both sides. The distinction among lanes is more obvious in normal walking than when using phones.

To further study the influence of using smartphones on pedestrians' trajectories, the trajectory offset ∆y is defined as the absolute value of the difference of the y coordinate between the point where a pedestrian entered the corridor (x = 0) and the point where they exited it (x = 10). It can be calculated as ∆y = |y(x=0) − y(x=10)|. The trajectory offset probability distribution is shown in Fig. 4. More than half of the pedestrians kept walking in a straight line, with a trajectory offset within 0.1 m, when walking normally. For offsets above 0.1 m, the proportion of pedestrians using phones while walking is larger than in the normal condition, and this gap grows as the offset increases. The trajectory offset stays within 0.4 m in normal walking, while the trajectory offset of some phone-using pedestrians exceeds 0.4 m. Therefore, pedestrians using smartphones have a larger trajectory offset. If they do not pay attention to the surrounding environment while walking, they are likely to collide with pedestrians and objects around them.

Figure 4 The trajectory offset probability distribution.
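Both the lane-formation histogram of Fig. 3 and the trajectory offset ∆y reduce to a few array operations. The sketch below is again only illustrative; the helper names, the bin count, and the way the entry and exit points are approximated from the samples are our assumptions, not details taken from the study.

```python
import numpy as np

def trajectory_offset(traj):
    """Delta-y between entering (x = 0) and leaving (x = 10) the corridor.

    traj: (T, 2) array of (x, y) positions in metres for one pedestrian.
    Approximates the crossing points with the first and last samples
    recorded inside the corridor (an assumption, not the study's code).
    """
    inside = traj[(traj[:, 0] >= 0.0) & (traj[:, 0] <= 10.0)]
    return abs(inside[0, 1] - inside[-1, 1])

def lane_histogram(trajs, bins=36):
    """Probability density of y positions over all pedestrians (cf. Fig. 3)."""
    ys = np.concatenate([t[:, 1] for t in trajs])
    hist, edges = np.histogram(ys, bins=bins, range=(0.0, 1.8), density=True)
    return hist, edges
```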
Velocity-density diagram

In this study, three areas were selected as the measurement areas a, b and c. The length of each measurement area is 2 m and its width is 1.8 m. The velocity and density in the different measurement areas are calculated using the Voronoi method [4]. The velocity and density in the stable stage are shown in Fig. 5. In both the normal and the phone-using condition, the velocity in area a is lower than that in area b, and the velocity in area b is lower than that in area c, while the density shows the opposite ordering. In area a, the pedestrians started walking in the initial stage; when entering this measurement area they accelerate until reaching a steady speed. The pedestrians stayed in this area for a long time, resulting in a relatively high density. In area c, the pedestrians kept a steady walking speed from entering the measurement area to leaving it and could pass through it faster, resulting in a relatively low density. The result in area b lies between those of areas a and c. The velocity-density diagram under the different walking styles is shown in Fig. 5(c). The density varies from about 1.2 ped/m² to 2.8 ped/m². The speed decreases as the density increases, because the distance among pedestrians decreases as the density increases. When the pedestrians made their decisions, they were influenced by the distance to others; a larger distance is more conducive to walking as usual. When the density is relatively small, the speed in the normal state is clearly higher than the speed when using phones while walking. At large speeds, a small fraction of the normal walking speeds come close to the speeds of phone users. To compare the speed in each measurement area, boxplots of the speed are shown in Fig. 6. The speed appears to increase as the position x increases.

To investigate the numerical relation between the speed and the position x, linear regression is adopted. In the normal condition, the fitted relation can be expressed as v = 0.03x + 0.78. When using phones, the fitted relation can be expressed as v = 0.02x + 0.65. To study the difference in speed between the two conditions, the Mann-Whitney test is adopted. The results in areas a, b and c all give p < 0.05, which means that the speed when using phones while walking differs significantly from that in the normal state. In all areas, the speed in the normal state is larger than when using the phone while walking. On the whole, this is because the pedestrians' attention is distracted by the smartphones and influenced by the density, leading to a lower speed. So the pedestrians' speed decreases as the density increases, and using smartphones while walking significantly reduces the pedestrians' speed.

Figure 6 The boxplot of velocity.
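The two statistical tools used in this subsection, the Mann-Whitney U test and the linear regression of speed against position, are both available in scipy.stats. The sketch below only demonstrates the calls: the speed samples, group sizes and area-centre positions are invented for illustration and are not the experimental data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-pedestrian mean speeds in one measurement area (m/s)
v_normal = rng.normal(0.85, 0.08, size=40)
v_phone = rng.normal(0.72, 0.08, size=40)

# Two-sided Mann-Whitney U test comparing the two conditions
u, p = stats.mannwhitneyu(v_normal, v_phone, alternative="two-sided")
print(f"U = {u:.1f}, p = {p:.4f}")   # p < 0.05 -> significant difference

# Linear fit of speed against position x, cf. v = 0.03x + 0.78
x = np.array([1.0, 5.0, 9.0])        # assumed centres of areas a, b, c
v = np.array([0.81, 0.93, 1.05])     # illustrative mean speeds
fit = stats.linregress(x, v)
print(f"v = {fit.slope:.2f}x + {fit.intercept:.2f}")
```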
Flow

To compare the capacity in the two scenes, a reference line is selected at the position x = 5 m. The number of pedestrians passing the reference line over time is shown in Fig. 7. When the experiment started, the pedestrians walked from the starting position to the reference line, then passed through it and went out of the corridor. Once the pedestrians started to pass through the reference line, their number increased over time until all pedestrians had passed it. When the pedestrians walked normally, the whole process lasted from 6 s to 35.08 s. When the pedestrians used phones while walking, it lasted from 6.2 s to 41.92 s. The small difference in the time at which the first pedestrian passes the line is caused by the effect of phone use on pedestrians' speed. Calculating the average flow over the whole process gives 3.23 ped/s under the normal condition and 2.63 ped/s under the phone-using condition. Besides, the specific flow is used to evaluate the flow in an area rather than across a reference line. To study the flow rate throughout the corridor, the specific flow is calculated by the Voronoi method [26] over small regions (20 cm × 20 cm). The result is shown in Fig. 8. The heatmap shows that the color in some regions is darker in the normal state, meaning that the specific flow there is larger than in the phone-using state, because the speed in the normal state is higher. Therefore, the average flow under the normal condition is larger than when using phones while walking. Using smartphones reduces the average flow and influences the movement of the whole crowd.

Distance from the boundary

Generally, pedestrians keep a certain distance from the boundary. In the previous section, the densities were shown to be close in the three measurement areas. The distance from the boundary is defined as the minimum distance between the pedestrian's head and the boundaries on the two sides, in the different measurement areas. In the density-stable stage, the distance from the boundary is shown in Fig. 9. To study the difference in this distance between the two conditions, a statistical test is adopted. The results in areas a, b and c all give p < 0.05, which means that the distance when using phones while walking differs significantly from that in the normal state. In the different areas, the average distance from the boundary when using smartphones while walking is larger than in the normal condition. On the one hand, the trajectory is offset, and the offset is larger than in the normal condition; from the previous lane formation analysis, the pedestrians on both sides tended to move to the middle, resulting in an increase of the distance from the boundary. On the other hand, pedestrians who use smartphones while walking try to avoid collision with the boundary psychologically, and they might have become familiar with the scene to a certain extent in the starting stage, which made them move to the middle of the corridor. Pedestrians walking normally simply try to keep a straight line; as long as the distance from the boundary is within a certain range, they do not change their walking route deliberately to avoid conflict with the boundary. Therefore, pedestrians using smartphones while walking in a straight corridor are more likely to move to the middle to avoid colliding with the boundary.

Figure 9 The distance between the pedestrians and the boundary in different areas.

Nearest distance among pedestrians

Pedestrians prefer to keep a certain distance from surrounding objects to avoid collisions, which makes them feel comfortable. In the previous section, the densities were shown to be close in the three measurement areas. The nearest distance among pedestrians is defined as the minimum distance between a chosen pedestrian and the other pedestrians, calculated from the head trajectories. In the density-stable stage, the nearest distance among pedestrians is shown in Fig. 10. To study the difference in the nearest distance between the two conditions, a statistical test is adopted. The results in areas a, b and c all give p < 0.05, which means that the nearest distance when using phones while walking differs significantly from that in the normal state. In the different areas, the mean nearest distance among smartphone-using pedestrians is smaller than in the normal condition. Although pedestrians keep a certain distance from others, phone users are distracted and react slowly, to some extent, to the movement of the pedestrians around them. As a result, they keep walking and shorten the distance to others. Therefore, using smartphones while walking reduces the nearest distance among pedestrians, which might increase the probability of collisions and conflicts with others.
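Both indicators, the distance from the boundary and the nearest distance among pedestrians, can be computed per video frame from the head positions. The sketch below assumes the positions of all pedestrians at one instant are stored as an (N, 2) array; the function names, the wall placement at y = 0 and y = 1.8 m, and the example snapshot are our assumptions rather than the study's code.

```python
import numpy as np
from scipy.spatial import cKDTree

CORRIDOR_WIDTH = 1.8  # m; walls assumed at y = 0 and y = 1.8

def boundary_distance(positions):
    """Shortest head-to-wall distance for each pedestrian.

    positions: (N, 2) array of simultaneous (x, y) head positions.
    """
    y = positions[:, 1]
    return np.minimum(y, CORRIDOR_WIDTH - y)

def nearest_pedestrian_distance(positions):
    """Distance from each pedestrian to the closest other pedestrian."""
    tree = cKDTree(positions)
    # k=2: the nearest neighbour of each point is the point itself
    # (distance 0), so take the second column.
    dist, _ = tree.query(positions, k=2)
    return dist[:, 1]

# Hypothetical snapshot of five pedestrians inside area b
pos = np.array([[4.2, 0.3], [4.5, 0.9], [4.9, 1.4], [5.4, 0.4], [5.7, 1.0]])
print(boundary_distance(pos))
print(nearest_pedestrian_distance(pos))
```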
Summary

In this paper, a series of experiments was carried out to investigate the effect of using smartphones on uni-directional pedestrian flow in a straight corridor. The movement characteristics were analyzed and compared with the normal condition. Based on the trajectory data, lane formation can be found in all experiments, and the trajectories when pedestrians use phones are more chaotic. Pedestrians using mobile phones while walking have a larger trajectory offset. When pedestrians distract themselves by using phones, they walk more slowly and the flow is smaller, leading to a longer egress time to pass through the corridor. The boundary distance is defined as the shortest distance between the pedestrians and the wall. When they use phones, they try to move to the middle of the corridor to avoid collision with the wall, so the boundary distance is larger than in the normal condition. The nearest pedestrian distance is defined as the nearest distance among all pedestrians. When they use phones, they distract themselves and do not have enough time to avoid collision with others, so the nearest pedestrian distance is smaller than in the normal condition. Although they avoid collision with the boundary, they are more likely to collide with other pedestrians. In short, when pedestrians use smartphones while walking, they are more likely to collide with moving objects. Our study carried out a straight-corridor experiment to study the effect of using smartphones on the movement characteristics of pedestrian traffic. The findings may offer new insight into pedestrian flow when pedestrians distract themselves by using phones, talking with others or thinking deeply, which can enrich the empirical data and contribute to simulation models. In the future, more scenes, including single-file, bottleneck, uni- and bidirectional, T-junction and Y-junction setups, will be studied to obtain richer movement characteristics of pedestrian traffic, and more different ways of using phones will be considered, thus providing a more basic theoretical foundation for pedestrian traffic safety.
2022-01-22T16:06:42.572Z
2022-01-19T00:00:00.000
{ "year": 2022, "sha1": "0224cdade5182b39493af5bf31cfa08bf4101544", "oa_license": "CCBY", "oa_url": "https://collective-dynamics.eu/index.php/cod/article/download/A120/157", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "1f29982d4b277e01b24a7294189aceecbcf22ace", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [] }
219608168
pes2o/s2orc
v3-fos-license
Surgical implantation of electronic tags does not induce medium-term effect: insights from growth and stress physiological profile in two marine fish species

Telemetry applied to aquatic organisms has recently developed greatly. Physiological sensors have been increasingly used as tools for fish welfare monitoring. However, for the technology to be used as a reliable welfare indicator, it is important that the tagging procedure does not disrupt fish physiology, behaviour and performance. In this communication, we share our medium-term data on stress physiological profile and growth performance after surgical tag implantation in two important marine fish species for European aquaculture, the sea bream (Sparus aurata) and the European sea bass (Dicentrarchus labrax). Blood samples after surgical tag implantation (46 days for the sea bream and 95 days for the sea bass) revealed no differences between tagged and untagged fish in cortisol, glucose and lactate levels, suggesting that the tag implantation does not induce prolonged stress in these species. Moreover, the specific growth rates were similar in the tagged and untagged fish of both species. Surgical tag implantation does not have medium-term consequences for the stress physiology and growth performance of these two marine fish species in a controlled environment. These observations support the use of accelerometer tags as valuable tools for welfare monitoring in aquaculture conditions. This study also shows that tagged fish can be sampled during experiments and considered a representative portion of the population, as they display growth and physiological parameters comparable to those of untagged fish.

Background

Over the past decades, telemetry applied to aquatic organisms has greatly developed in terms of tag miniaturization, battery life, software and hardware [1]. These tags are precious tools for the characterization and monitoring of behaviour in a wide range of organisms, including fish [2]. Moreover, electronic tags can also be equipped with environmental sensors that can record diverse data, such as temperature, depth and salinity, while monitoring physiological parameters, such as heart and ventilation rates or muscle activity [3][4][5][6]. Although these physiological sensors have mainly been used in the wild in the context of conservation and ecology, they have progressively been employed in aquaculture, serving as welfare indicators of common stressors (e.g. slaughtering practices, water quality and stocking density) [4,[7][8][9].

Telemetry studies assume that tagged fish are physiologically representative of the entire population. Therefore, it is essential that the tag does not negatively affect growth performance, physiology and survival. The implantation method and site and the tag's size are important factors for preventing the disruption of the physiological state, normal movement, and growth performance of tagged fish [10][11][12][13] and avoiding bias in the collected data. The maximum tag weight generally considered acceptable is no more than 2% of the fish's body weight in air (the so-called "2% rule") [10,11]. However, in some cases, the "2% rule" is not enough to avoid negative effects on the fish's health and welfare, such as stress, inflammation or obstruction of internal organs, or on its buoyancy and swimming performance [10,14]. In particular, stress is considered as "a condition induced by a factor (a stressor) that evokes an endocrine response (e.g.
cortisol release) that could be beneficial as well as disadvantageous" [15]. Thus, due to the many factors listed above, surgical implantation of an electronic tag may induce stress in fishes. Most of our knowledge about the link between surgical implantation of electronic tags and stress is based mainly on salmonids [14,16,17]; therefore, more species-specific information is needed. In this study, we collected data from two different experiments, on the European sea bass (Dicentrarchus labrax) and the sea bream (Sparus aurata), two of the most important species for European aquaculture [18,19], aiming to evaluate the growth performance and the physiological stress profile of tagged fish at least 46 days after intraperitoneal surgical implantation. Their physiological stress profile was assessed by comparing the mean plasma stress indicator values (cortisol, glucose and lactate levels) with those of untagged fish, while growth was assessed by comparing the specific growth rates (SGR) between tagged and untagged fish.

Animals

Sea breams (mean weight ± SD: 314.6 ± 49.1 g) were obtained from the commercial hatchery Ittica Caldoli (Lesina, Italy). After 3 weeks of acclimation, ID100 radio frequency identification (RFID) tags (Trovan, Netherlands) were implanted in the fish, which were then separated into three fiberglass tanks of 1.2 m3 (n = 115 fish per tank; ~30 kg/m3), forming triplicates. The pit-tags were implanted under anaesthesia (hydroalcoholic clove oil solution; 30 mg/L), under the skin in the region near the first dorsal fin. The fish were reared in marine water at a constant temperature of 18 °C, a salinity of 35 PSU and a pH of 7.1. The water was completely replaced three times a day, and the oxygen levels were continuously monitored by an automatic system programmed to maintain the dissolved oxygen concentration above 5 ± 1 ppm.

European sea bass (mean weight ± SD: 335.5 ± 62.4 g) were obtained from the commercial hatchery Panittica Pugliese SpA (Torre Canne, Italy). After 3 weeks of acclimation, RFID tags (ID100) were implanted in the fish, which were then separated into three fiberglass tanks of 1.2 m3 (n = 35 fish per tank; ~10 kg/m3), forming triplicates. The pit-tags were implanted in the sea bass under conditions (anaesthesia and area of implantation) similar to those for the sea bream. The fish were left undisturbed for 2 months before the start of the experiment. The water parameters (temperature, salinity and oxygen) were constant and similar to those for the sea breams. Throughout the experimental period, all fish were exposed to a 12L:12D photoperiod and were fed 1% of their body mass using commercial feed (Skretting Marine 3P, Italy) dispensed by automatic feeders for 3 h every morning.

Experimental procedure

At the beginning of the experiment (t0; Fig. 1), the fish were gently removed from their rearing tanks and anaesthetized with a hydroalcoholic clove oil solution (30 mg/L) [16,17]. Morphometric parameters (body weight and total length) were recorded to calculate the SGR (see "Growth measurements and SGR calculations" section).

Tag implantation

At the beginning of the experiment (Day 0) for the sea bass and 18 days later (Day 18) for the sea breams (Fig.
1), V9AP acoustic accelerometer tags (Vemco Systems Inc., Nova Scotia, Canada) were implanted in nine randomly selected sea bass and five randomly selected sea breams (at least two fish from each tank, except for one fish from one tank in the sea bream experiment), as described in Carbonara et al. [7]. Briefly, the fish were subjected to fasting for 24 h before implantation and were anaesthetized using a hydroalcoholic clove oil solution at a dose of 30 mg/L [20,21]. The transmitter was inserted into the body cavity through a 1.5-cm incision. The incision was then carefully sutured, and the fish were injected with an antibiotic (sodium ampicillin-cloxacillin; 1 mg/kg per 24 h) [22] before being returned to their home tanks until the end of the experiment (t1; Fig. 1). The mean tag weight in air accounted for 1.63 ± 0.32% and 0.90 ± 0.21% of the sea bream and sea bass body mass, respectively. All tagged fish recovered within a few days, and no mortality linked to the surgical procedure was observed [7]. To evaluate possible tag effects, 12 untagged sea breams and 9 untagged sea bass were randomly selected as controls (at least three fish per tank; Table 1) and were monitored during the experimental period.

Growth measurements and SGR calculations

At t1 (Days 46 and 95 after tagging the sea breams and sea bass, respectively; Fig. 1), the tagged and untagged fish were once again gently removed from their rearing tanks and anaesthetized with clove oil solution as described above. Their body weight was measured (in grammes) to calculate the differences in SGR between t0 and t1. The SGR was calculated according to the following equation [23]:

SGR (%/day) = 100 × (ln W_t1 − ln W_t0) / T,

where W is the total weight at the end (t1) and the beginning (t0) of the experiment, and T is the number of feeding days between t0 and t1.

Blood sampling and stress indicator analysis

After the morphometric measurements (2-3 min after anaesthesia induction), blood samples of 0.5 mL were immediately taken from the first branchial arch of the tagged and untagged fish using a heparinized syringe. The samples were then centrifuged at 15,000g for 3 min, and plasma was collected and stored at −20 °C until further processing, described below. The plasma cortisol, glucose and lactate concentrations were measured as described in Carbonara et al. [7]. Briefly, the cortisol concentration was determined using solid-phase competitive chemiluminescent enzyme immunoassays with a cobas Cortisol II kit (Roche, Switzerland). The glucose and lactate concentrations were determined using kits 17630H and 17285 (Sentinel Diagnostics, Italy), respectively, based on the enzymatic colorimetric Trinder reaction (GOD/PAP for glucose and PAP for lactate).

Statistical analysis

Statistical analyses were performed using the R software version 3.6.2 [24] at a 95% level of significance. Normality of the data was tested a priori using the Shapiro-Wilk test. The appropriate statistical test (either the Wilcoxon test or the t test) was then performed to compare the SGRs and physiological stress indicators (cortisol, glucose and lactate) between the tagged and untagged fish of each species.
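The study performed these analyses in R; purely as an illustration, the same workflow (SGR calculation from the equation above, a normality pre-check, then a parametric or non-parametric comparison) can be sketched in Python. All weights and control values below are hypothetical.

    # Minimal sketch of the growth/stats workflow; hypothetical data only.
    import numpy as np
    from scipy import stats

    w_t0 = np.array([320.0, 298.5, 341.2, 310.7])   # body weight at t0 (g)
    w_t1 = np.array([402.1, 371.9, 430.6, 385.3])   # body weight at t1 (g)
    T = 64                                           # feeding days between t0 and t1

    # specific growth rate, % body weight per day
    sgr_tagged = 100 * (np.log(w_t1) - np.log(w_t0)) / T
    sgr_untagged = np.array([1.05, 0.98, 1.12, 1.01])  # hypothetical controls

    # choose the comparison test from a normality pre-check, as in the study
    if all(stats.shapiro(g)[1] > 0.05 for g in (sgr_tagged, sgr_untagged)):
        stat, p = stats.ttest_ind(sgr_tagged, sgr_untagged)       # parametric
    else:
        stat, p = stats.mannwhitneyu(sgr_tagged, sgr_untagged)    # rank-based
    print(f"statistic = {stat:.2f}, p = {p:.3f}")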
Results

In terms of growth performance, the SGR was similar between the tagged and untagged fish for both the sea bream (W = 38, p = 0.44) and the sea bass (t = −0.58, p = 0.56; Fig. 2) between t0 and t1, which correspond to periods of 64 days for the sea breams and 95 days for the sea bass.

At t1, the plasma concentrations of stress indicators were overall similar between the tagged and untagged fish of both species (Fig. 3). More specifically, the plasma cortisol concentration showed no statistically significant difference between the tagged and untagged fish.

Fig. 3 Stress physiological profile of untagged (white bars; n = 12 sea bream and n = 9 European sea bass) and tagged fish (orange bars; n = 5 sea bream and n = 9 European sea bass) at t1. a Cortisol (ng/mL), b glucose (mg/dL) and c lactate (mg/L). Values are mean ± SD. See main text for statistics.

Discussion

Our results show that after a relatively long period (46 days for the sea bream and 95 days for the sea bass) following the surgical implantation of accelerometer tags, the tagged fish were comparable with the untagged fish in terms of both growth and stress physiology under aquaculture conditions. To our knowledge, this is the first report concerning stress physiological indicators for the sea bream and the European sea bass, two important species for European marine aquaculture. These findings support the use of accelerometer tags in these two species in aquaculture conditions. Surgical implantation of accelerometer tags is perceived as a stressor by fish, causing cortisol release into the blood [25]; cortisol is the main stress hormone in teleost fishes [26]. This is a relatively acute response of organisms coping with stressors before regaining homeostasis, and it may last only a few days, depending on the species. For instance, in rainbow trout (Oncorhynchus mykiss), a heart rate increase was observed during the first 72 h following surgical implantation of a heart rate sensor, after which it stabilized [27], suggesting that fish regain homeostasis relatively quickly after this stressful event. Jepsen et al. [25] reported similar observations in Chinook salmon, where physiological stress indicators were elevated for up to 24 h following tag implantation but were comparable with those of untagged fish at most 7 days later. In our experiments, 46 and 95 days after tag implantation in the sea breams and sea bass, respectively, the levels of all monitored stress indicators (cortisol, glucose and lactate) were found to be similar to those of untagged fish and consistent with the levels reported in the literature for these species [7,28]. Our results confirm that tag implantation does not induce chronic stress in either the sea bream or the sea bass, as observed in various other fish species [25,29]. It is thus important to emphasize that tag implantation does not exert long-term adverse effects even on a high-stress-responder species such as the European sea bass [30][31][32]. Nonetheless, although we did not directly investigate the acute stress response to tag implantation by measuring physiological stress indicators right after the surgical procedure, we did observe that, generally, the tagged fish did not eat for 2 to 4 days post-operatively (personal observations), probably because of surgery-induced stress. Indeed, stress and growth are closely related: stress is known to inhibit food intake and, consequently, limit the energy available for biological processes, including growth [33]. Therefore, it appears that acute stress is indeed induced by tag implantation, but it lasts only a few days in these species. Moreover, this period of no food intake has no long-term consequences on growth, as shown by the similar SGRs between the tagged and untagged fish of both species.
It has been demonstrated in different fish species that when the "2% rule" is applied, growth performance is generally not impacted [11,25,34]. The similar growth rates between tagged and untagged fish can be explained by compensatory growth, which is a period of unusually rapid growth following a period of undernutrition [35]. It is noteworthy that we observed similar growth rates between the tagged and untagged fish at two different stocking densities (~10 kg/m3 for the sea bass and ~30 kg/m3 for the sea bream), which suggests that tagged fish can compensate for growth and continue their normal life under different rearing conditions.

Conclusion

In conclusion, surgical implantation of accelerometer tags does not cause medium-term changes in the stress physiological profile and growth of either sea breams or sea bass reared in a controlled environment. Future studies are needed to investigate exactly how long these species take to recover from the stress induced by tag implantation and thus be considered "normal" fish, displaying normal behaviour (e.g. feeding) and basal levels of stress indicators. Our study confirms (i) that the implantation of accelerometer tags does not affect the basic growth and stress physiological indicators of tagged fish and (ii) that tagged fish can be sampled 46 or 95 days post-surgery for the sea bream and sea bass, respectively, during experiments and considered representative of the population, as they display growth and physiological parameters comparable to those of untagged fish.
2020-06-11T09:08:04.795Z
2020-06-08T00:00:00.000
{ "year": 2020, "sha1": "491c309f5a7785cf636d0b1545821d6a72a1cfec", "oa_license": "CCBY", "oa_url": "https://animalbiotelemetry.biomedcentral.com/track/pdf/10.1186/s40317-020-00208-w", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "a34b5278bcdaf37893f8b5b5574884ad738caa35", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Biology" ] }
225631981
pes2o/s2orc
v3-fos-license
Yield and Yield Attributes of Maize as Influenced by Organic Manures and Inorganic Fertilizers under Maize-Chickpea Cropping Sequence

Maize (Zea mays L.) is one of the most important cereal crops in the world agricultural economy, both as a food for human beings and as a feed for animals. No cereal on earth has such immense potential as maize, and it therefore occupies the pride of place as the "Queen of cereals". In the world, maize ranks third amongst the food crops, next to rice and wheat. Maize is grown in almost all the states of India and contributes nearly 9 per cent to the national food basket.

Introduction

Maize is important not only because of its great adaptability to divergent conditions but also due to its high responsiveness to management practices, particularly nutrient management. There is an increasing interest in the use of organic manures as a source of nutrient supply for crop production, for sustainable soil productivity and ecological stability, and to minimize the requirement for chemical fertilizers. Indian soils are poor in organic carbon due to the tropical climate. With increasing soil degradation and the rising cost of chemical fertilizers, there is a need to integrate them with organic sources, which are good for soil health besides supplying nutrients over a longer period. Combined application of an available organic source along with an optimal dose of inorganic fertilizers assures high and sustained productivity in a cereal-legume cropping system, owing to a regulated nutrient supply and reduced nutrient losses, besides lowering the cost of production. Such information is lacking for the system as a whole on the light-textured soils of the semi-arid part of central India. Sulphur is a key element for higher crop production because it is required for the formation of proteins, vitamins and enzymes. It is also a constituent of amino acids and is involved in various metabolic activities, including photosynthesis, respiration and legume-rhizobium symbiotic nitrogen fixation. Hence, an experiment was planned on the maize-chickpea cropping sequence to study the effect of organic manures and inorganic fertilizers on the yield and yield attributes of maize.

Materials and Methods

The field experiment entitled "Direct and residual effect of organic manures and inorganic fertilizers on maize (Zea mays L.)-chickpea (Cicer arietinum L.) cropping sequence" was carried out during the kharif and rabi seasons of the years 2015-16 and 2016-17 at the College Farm, Department of Agronomy, B. A. College of Agriculture, Anand Agricultural University, Anand. The texture of the soil is loamy sand; the soil is very deep and fairly moisture retentive. The soil of the experimental site had low organic carbon (0.45%), low available N (250.88 kg/ha), medium available P2O5 (48.54 kg/ha) and SO4-S (15.24 mg/kg), and high available K2O (315.84 kg/ha).
There were four levels of organic manure (M1: no manure; M2: FYM 10 t/ha; M3: castor cake 1.0 t/ha; M4: vermicompost 2.5 t/ha), two levels of inorganic fertilizer (F1: 75% RDF; F2: 100% RDF) and two sulphur levels (S1: 0 kg S/ha; S2: 20 kg S/ha). The experimental design was a Randomized Block Design (Factorial) with four replications. The recommended dose of fertilizer (120-60-0 kg/ha) was applied as urea and DAP chemical fertilizer, and sulphur was applied in the form of gypsum to the maize crop; the residual effect was studied on the chickpea crop var. GG 2 (Gujarat Gram 2). All agronomic practices and plant protection measures were followed for better and successful crop production. The chickpea seeds were treated with bio-fertilizer (Rhizobium thiogangnaticum) before sowing. Observations on growth and yield attributes were recorded on five randomly selected plants from the net plot area, which were tagged for further observations. The data for the various parameters were statistically analyzed using the analysis of variance (ANOVA) technique, and the treatments were compared at the 5% level of significance (Cochran and Cox, 1967).

Effect of organic manure

Data presented in Table 1 revealed that the application of FYM @ 10 t/ha recorded the significantly highest plant height (63.60, 152.90 and 189.40 cm at 30 DAS, 60 DAS and harvest, respectively) and number of leaves/plant (8.55 and 13.19 at 30 and 60 DAS, respectively). The increase in plant height with FYM application might be due to improvement in the soil's physico-chemical and biological properties and thereby better availability of plant nutrients and moisture, which enhance plant growth. The significantly highest plant height and number of leaves/plant are due to greater activity of the meristematic tissues of the plant, producing more trifoliates, which ultimately increased the total photosynthetic surface area of the plant and contributed towards higher leaf production. The results are similar to the findings of Rajkumara et al. (2009) and Mundra et al. (2011). Dry matter accumulation/plant at 30 DAS (32.15 g) and 60 DAS (97.42 g), leaf area/plant at 30 DAS (2000.7 cm2) and at harvest (3963.30 cm2) and leaf area index at 30 DAS (1.67) and 60 DAS (3.30) were significantly highest with the application of vermicompost @ 2.5 t/ha (M4). Total dry matter accumulation and leaf area/plant are more meaningful criteria for assessing complete vegetative growth. The higher dry matter accumulation and leaf area might be due to the fact that organic manure provided favourable conditions that improved the availability of water, air and nutrients, which might have contributed to better canopy growth. All these together increased dry matter production/plant. These findings are in agreement with the results of Meena et al. (2011). The increased leaf area index with the application of organic manure might be due to the addition of organic matter and other nutrients through the manure. This might be attributed to increased root growth owing to better soil physical condition and, consequently, exploitation of a greater soil volume by the roots for nutrient absorption. The results are in harmony with those of Rajkumara et al. (2012). Application of FYM @ 10 t/ha (M2) recorded the significantly highest absolute growth rate (2.19 g/plant/day) and crop growth rate (18.21 g/m2/day) at 30-60 days of the crop.
The increase in LAI and dry matter accumulation/plant might be attributed to better absorption of nutrients, imparted by sufficient air and moisture in the rhizosphere, which helped increase the expansion of the leaf lamina and thereby dry matter accumulation. These results are in line with those reported by Manjhi et al. (2016). The data presented in Table 2 indicated that application of 10 t FYM/ha recorded the highest length of cob (16.68 cm) and girth of cob (14.47 cm), but the response of number of cobs/plant was non-significant. The increases in length and girth of cob under organic manure application might be due to an adequate supply of plant nutrients directly to the plants and the creation of a favourable soil environment that increased nutrient uptake, especially of nitrogen, phosphorus and potash, and ultimately increased the water-holding capacity of the soil for a longer time, which resulted in an overall increase in plant growth and more cobs/plant (Srinivasanarao et al., 2010). Average weight of cob/plant (129.7 g) and seed index (24.4 g) were significantly highest under FYM (M2) application, over castor cake (M3) and vermicompost (M4) application. The increase in average weight of cob/plant under organic manure application might be ascribed to the proper decomposition and mineralization of the applied organic manures; the slow release of nutrients throughout the crop growth period might also result in better plant growth. The increase in seed index under organic manure application was due to the supply of nutrients to the crop during the growth period and thereby better crop growth, which helped supply sufficient photosynthates at the seed-filling stage. This led to a higher seed index under organic manure application. Perusal of the data presented in Table 3 revealed that application of FYM @ 10 t/ha recorded the significantly highest seed yield (4249 kg/ha) and straw yield (6420 kg/ha). The application of organic manure might have increased the availability of both native and applied nutrients in the soil and substantially enhanced their uptake by the plant, leading to an overall improvement in growth and yield-attributing characters such as number of cobs/plant, length and girth of cob, average weight of cob and seed index. Secondly, the maximum yield under the FYM treatment might be due to the beneficial effects of FYM by way of regulated liberation and balanced supply of nutrients, tilting the microbial dynamics in favour of crop growth, and creation of a salutary soil environment for crop growth. Similar results were also reported by Rajkumara et al. (2012), Ashoka et al. (2013) and Mukherjee (2014).

Effects of inorganic fertilizer

The effect of inorganic fertilizers was non-significant for plant population at 20 DAS and at harvest (Table 1) and for number of cobs/plant (Table 2) during the course of investigation. Data presented in Table 1 indicated that plant height at 30 DAS (61.80 cm), 60 DAS (151.50 cm) and harvest (187.40 cm) and number of leaves/plant at 30 DAS (8.11) and 60 DAS (12.72) increased with the increased level of fertilization, i.e. F2 (100% RDF). The increases in plant height and number of leaves/plant might be attributed to increased uptake of nutrients, which are structural components of protein molecules and protoplasm, and might have increased the synthesis of proteins and carbohydrates in favour of cell division and elongation.
Another reason might be the favourable effect of nitrogen on the expansion and division of cells with thinner cell walls, which promoted vegetative growth and encouraged the formation of foliage by producing more carbohydrates, utilized in building up new cells. These results are akin to those reported by Mukherjee (2014). Significantly the highest dry matter accumulation/plant at 30 DAS (31.03 g) and 60 DAS (94.58 g), leaf area/plant at 30 DAS (1804.50 cm2) and at harvest (3503.30 cm2), leaf area index at 30 DAS (1.50) and 60 DAS (2.92), AGR (2.12 g/plant/day) and CGR (17.65 g/m2/day) at 30-60 DAS were recorded with the application of 100% RDF (F2). Dry matter production is the net resultant of different plant metabolic processes. The nutrients supplied increase meristematic growth, the number and size of vegetative plant parts and the number of leaves, and induce greenness in the leaves by increasing the synthesis of chlorophyll, the absolute growth rate, the crop growth rate, etc. All these parameters helped achieve higher dry matter accumulation in the plant parts. The increased dry matter accumulation due to nutrient application is related to the favourable effect of nitrogen and phosphorus on plant growth, as evident from plant height (Tetarwal et al., 2011). The reason for the higher leaf area at both periods was the greater growth of the morphological characters; the effect of N on protein synthesis and meristematic growth through hormonal action also resulted in higher leaf area/plant. The increase in LAI could be attributed to increases in vegetative growth, such as plant height, number of leaves/plant and dry matter accumulation/plant. Further, the relationship between leaf area and nitrogen changed with time over most of the growth periods. The higher AGR and CGR reflect that nitrogen and phosphorus are the most important major plant nutrients, playing a vital role in plant growth and development. This was also due to the accelerated number of leaves/plant under 100% RDF, to which most of the photosynthates were diverted as an increasing sink. These results were also confirmed by Kumar and Singh (2001) and Kumar et al. (2002). Application of 100% RDF (F2) recorded the significantly highest length of cob (15.89 cm), girth of cob (13.81 cm), average weight of cob (125.70 g) and seed index (24.10 g). The higher yield attributes under F2 might be due to better nourishment of the crop, as evident from the higher removal of N, P and K by the crop, and the fact that nitrogen might have hastened vigorous vegetative growth of the maize, which might have stimulated the rate of photosynthesis and resulted in higher diversion of photosynthates from vegetative to reproductive sinks (Dechassa et al., 2013; Mukherjee, 2014). Application of the 100% RDF treatment (F2) recorded the significantly highest seed yield (4033 kg/ha) and straw yield (6134 kg/ha); the seed yield and straw yield were 9.6% and 7.5% higher, respectively, than under the F1 treatment. The significantly highest seed yield under 100% RDF might be due to differences among the RDF levels in the size of the photosynthetic surface and in the relative efficiency of total sink activity, possibly a function of number of cobs/plant, length and girth of cob, average weight/cob, seed index and chlorophyll content, which in turn influenced the direction of movement of substrates.
Watson (1952) stated that nitrogen has a far more potent influence on the total photosynthesis of plants through its effect on leaf area. Further, almost all growth, yield-attributing and chemical characters were closely associated with seed production. All these might have cumulatively produced a higher seed yield under 100% RDF.

Effect of sulphur

Data presented in Table 1 indicated that the application of sulphur did not give a significant response for plant population, plant height at 30 and 60 DAS, number of leaves/plant, dry matter accumulation, leaf area/plant, leaf area index, AGR, CGR, number of cobs/plant, length of cob, girth of cob, seed index or straw yield. At harvest, significantly the tallest plants (184.60 cm) were observed with the application of sulphur @ 20 kg/ha (S2). The increase in plant height with sulphur application might be due to the role of sulphur in the metabolism of the growing parts of plants; it is directly related to cell division, enlargement and elongation. Significantly the highest average weight of cob (124.20 g) and seed yield (3973 kg/ha) were observed under treatment S2 (20 kg S/ha). The higher average weight of cob with sulphur application might be due to the stimulatory effect of S on tillering through cytokinin synthesis and the rapid conversion of synthesized carbohydrates into protein, with a consequent increase in the number and size of growing cells, ultimately resulting in a greater average weight/cob (Srinivasanarao et al., 2010). Significantly the highest seed yield was recorded with treatment S2; further, treatment S2 recorded a 6.3 per cent higher seed yield than treatment S1. This might be due to the fact that sulphur application improved the overall nutritional environment of the rhizosphere as well as the plant system, which was advantageous for profuse vegetative and root growth, activated higher absorption of phosphorus, sulphur and nitrogen from the soil, and improved metabolic activities within the plant. Sulphur plays a vital role in the synthesis of chlorophyll and is part of the active centre of some enzymes. It also affects various metabolic processes, which ultimately help in the growth and development of plants.

Interaction effects

The treatment combination M2F2 (Table 4) gave the significantly highest plant height at 60 DAS (162.6 cm), length of cob (17.82 cm), average weight of cob (131.0 g), seed yield (4547 kg/ha) and straw yield (6564 kg/ha). The better growth, yield attributes and yield under treatment combination M2F2 might be because nutrients from the fertilizers were available to the crop at the early stages and, through the organic manures, at the later stages of crop growth. Organic manure also supplies essential nutrients to the soil and increases nutrient availability over a longer period owing to slow release, while the application of inorganic fertilizer consequently supports the supply of photosynthates for the formation of yield components. The girth of cob (15.01 cm) was significantly higher under treatment combination M4F2. The increased girth of cob with the application of organic manure and inorganic fertilizers might be due to a stimulating effect on the progressive development of roots through increased nutrient availability, especially of nitrogen and phosphorus, in the soil, which favourably improved growth. The overall results inferred that kharif maize fertilized with the recommended dose of inorganic fertilizer (120-60-00 kg NPK/ha) and 20 kg sulphur/ha, along with FYM @ 10 t/ha, gave the maximum growth and yields of maize.
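As an illustration of the statistical workflow described in Materials and Methods (factorial treatments in a randomized block design, ANOVA with treatments compared at the 5% level), here is a minimal Python sketch. The data frame is a hypothetical placeholder, and only two replications are generated for brevity (the trial used four).

    # Minimal sketch of a factorial RBD ANOVA; hypothetical data only.
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    # long-format data: one row per plot, with factor levels and seed yield
    df = pd.DataFrame({
        "manure":   ["M1", "M2", "M3", "M4"] * 8,
        "fert":     (["F1"] * 4 + ["F2"] * 4) * 4,
        "sulphur":  (["S1"] * 8 + ["S2"] * 8) * 2,
        "rep":      ["R1"] * 16 + ["R2"] * 16,
        "yield_kg": [3500 + 10 * i for i in range(32)],   # placeholder values
    })

    # randomized block design: replication as block, full factorial treatments
    model = ols("yield_kg ~ C(rep) + C(manure) * C(fert) * C(sulphur)",
                data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))  # F tests at the 5% level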
2020-09-03T09:04:56.623Z
2020-07-20T00:00:00.000
{ "year": 2020, "sha1": "41c6f23ad40debe3716001dbc43844606b769562", "oa_license": null, "oa_url": "https://www.ijcmas.com/9-7-2020/Y.%20C.%20Lakum,%20et%20al.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "e349a25103e88922bb9f0a9b4c25744bce6c445c", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Mathematics" ] }
55934048
pes2o/s2orc
v3-fos-license
A SURVEY OF IMPLEMENTATION OF OPPORTUNISTIC SPECTRUM ACCESS ATTACK WITH ITS PREVENTIVE SENSING PROTOCOLS IN COGNITIVE RADIO NETWORKS

Recently, the expansive growth of wireless services, regulated by governmental agencies assigning spectrum to licensed users, has led to a shortage of radio spectrum. Since the FCC (Federal Communications Commission) approved unlicensed users' access to the unused channels of the reserved spectrum, new research areas have emerged to develop Cognitive Radio Networks (CRN), in order to improve spectrum efficiency and to exploit this feature by enabling secondary users to gain from the spectrum in an opportunistic manner, via optimally distributed traffic demands over the spectrum, so as to reduce the risk of monetary loss from the unused channels. However, Cognitive Radio Networks become vulnerable to various classes of threats that decrease the bandwidth and spectrum usage efficiency. Hence, this survey deals with defining and demonstrating the framework of one such attack, called the Primary User Emulation Attack, and suggests preventive sensing protocols to counteract it. It presents a scenario of the attack and its prevention using Network Simulator-2 for the attack performances and gives an outlook on the various techniques defined to curb the anomaly.

Keywords: primary user emulation, primary user, sensing technique, network simulator, effective spectrum usage, secondary users, malicious users

I. INTRODUCTION

Wireless networks have attracted a lot of interest in the research community due to their potential applicability in innumerable real-world practical applications. However, due to their distributed nature, their usage in critical applications without human intervention, and the sensitivity and criticality of the data communicated, these networks are highly vulnerable to security and/or privacy threats that can unfavourably affect their performance. These issues become even more critical in cognitive networks, in which the nodes have the capability of changing their transmission and reception parameters according to the radio environment in which they operate, in order to achieve reliable and efficient communication and optimum utilization of the network resources.
The increasing demand for spectrum in wireless communication has made efficient spectrum utilization a big challenge. To address this important requirement, Cognitive Radio (CR) technology has evolved as the answer. A CR is an intelligent wireless communication system that is aware of its surrounding environment and adapts its internal parameters to achieve reliable and efficient communication and optimum utilization of the resources [1]. The cognitive technique is the process of knowing through perception, planning, reasoning and acting, and continuously updating and upgrading with a history of learning [4]. It has the ability to identify the unutilized spectrum in licensed and unlicensed spectrum bands and to utilize the unused spectrum opportunistically. The incumbents or primary users (PU) have the right to use the spectrum at any time, whereas secondary users (SU) can utilize the spectrum only when the PU is not using it. Each country has its own spectrum regulation rules; a certain band available in one country might not be available in another. Traditional wireless networks with a preset working frequency might not work in cases where the manufactured wireless nodes are deployed in different regions. On the other hand, if nodes are equipped with cognitive radio capability, they can overcome the spectrum incompatibility problem by changing their communication frequency band. Therefore, CR wireless devices have the potential to be operated almost anywhere in the world [4].

The design of a CR network poses many new technical challenges in protocol design, power efficiency, spectrum management, spectrum detection, environment awareness, novel distributed algorithm design for decision making, distributed spectrum measurements, quality of service (QoS) guarantees, and security [1]. In CNs, the cognitive engine in a sensor node has many radio parameters under its control. The cognitive engine determines suitable values for these parameters over time in order to optimize its multi-goal objective functions. Various attacks are possible on the learning algorithms of the cognitive engines so that these algorithms produce suboptimal outputs [1]. Since these attacks target the learning algorithms, they are also known as belief-manipulation attacks. The cognitive radio may have three goals: achieving low transmit power, a high rate of transmission, and high security in communication. Based on the application currently in use, the cognitive engine assigns different weights to these three goals to maximize its overall objective function. An attacker can compromise a user by breaking the Dynamic Spectrum Access (DSA) mechanism, by implementing spectrum misuse or by exhibiting selfish behaviour [1]. For example, the attacker node can transmit in an unassigned band, or it can ignore the cognitive messages sent by the other users in the network. Hence, the identification of various possible attacks on CNs is critical in order to design appropriate security schemes to defend against them.
A well-known malicious attack is the primary user emulation attack (PUEA). In a PUEA, malicious users mimic the primary signal over the idle frequency band(s) so that the authorized secondary users cannot use the corresponding white space(s) [6]. This leads to low spectrum utilization and inefficient cognitive network operation. In a PUE attack, an attacker sends out primary-user-like signals during the spectrum sensing period of secondary users, thus "scaring away" the secondary users, since they are unable to distinguish the signals of primary users from those of the attacker [2]. The goal of the adversary is to mislead the SUs regarding the available spectrum opportunities, thus preventing them from utilizing idle channels [5]. This attack is particularly easy to launch in CRNs due to the highly flexible and software-based air interfaces of CR nodes. The PUE attack can be catastrophic, since it severely interferes with the spectrum sensing process.

II. DESIGN

The steps for the development of an attack can be (as shown in Fig. 1):
1. Consider two wireless networks.
2. Users check the availability of channels in one of the two networks.
3. Secondary users sense the channel according to the channel availability.
4. SUs check for a free/available channel (i.e. an unlicensed channel).
5. The bandwidth may be limited to accommodate the maximum number of users.
6. The attackers are formed.
7. The attacker emits a signal similar to the primary user's signal.
8. Secondary users are informed that there are no unused channels.
9. Secondary users do not get access from any access point.

The conditions that would lead to effective PUE attacks are: little or no PU-SU interaction, different signal characteristics of PU and SU signals, primary signal learning and channel measurement, and avoiding interference with the primary network [7]. Some potential consequences of a PUE attack are bandwidth wastage, QoS degradation, connection unreliability, denial of service and interference with the primary network [7]. Mitigating such a threat would allow high global operability and, hence, can become an effective solution for the rapid deployment of mobile users during rescue missions, disaster relief operations and emergencies, like the 9/11 attack on the twin towers in the US. Devices should:
- always assume sensory input statistics are "noisy" and subject to manipulation;
- be programmed with some amount of "common sense" to attempt to validate learned beliefs;
- compare and validate learned beliefs with other devices on the network;
- expire learned beliefs to prevent long-term effects of attackers; and
- attempt to perform learning in known-good environments.

Fig. 3 Representation of the PUEA.

Node 14 in Fig. 3 tries to sense for any available channel by requesting the base station of the WLAN network (shown in green). Since it cannot avail itself of any channel for transmission, due to the malicious node, it experiences packet loss.

A. Robust PUE Detection method

This algorithm, as stated in [8], analyzes the effect of forged reports on the localization process of a given emitter and provides a set of countermeasures in order to make it robust to undesired behaviours or false feedback.
It considers Least Squares (LS) methods over a linearized set of TDoA (Time Difference of Arrival) error equations (obtained, for example, by means of Taylor-series estimation) for stationary networks such as CRNs. LS estimation methods are iterative schemes that start with a rough initial guess (xv, yv, zv) and improve the guess at each step (xv + δx, yv + δy, zv + δz) by determining the local linear least-sum-squared-error correction (δx, δy, δz). The target is to iterate the method until the components of the correction are below a given threshold, that is to say, until the estimation converges.

B. The algorithm

1. Obtain a linear estimation of the measurement errors. Given a set of n TDoA measurements τi taken by the pairs made up of the BS and each one of the CRs, the measurement errors assuming a prediction (xv, yv, zv) can be expressed as in (2), with fi(x, y, z) as in (1) the real TDoA measurement for the pair BS and anchor node i at position (x, y, z).
2. From the first-degree Taylor polynomial of e, the matrix representation of the linearized forms of the distance error can be expressed as in (3), with A an n-by-3 matrix of the Taylor coefficients and δ a 3-by-1 column vector of the corrections (δx, δy, δz).
3. Assuming that A is full rank, the value of δ that minimizes the sum of quadratic errors ê^T ê can be computed as in (4).
4. However, in the real world, measurements performed by different nodes are subject to different errors, and their measures may then contribute to the LS estimation with different weights. Moreover, measurement errors are often correlated. Consequently, localization methods, instead of the previous approach, often minimize ê^T W ê, with W an n-by-n matrix of the weights assigned to every measure. In such a case, the most common approach is to define W = R^-1, with R the matrix of covariances between measures. The optimal δ can therefore be derived as in (5).
5. False reports provided by compromised nodes can severely undermine the localization method, thus leading to false positives or negatives regarding the detection of primary users. Consequently, there is a need to identify false measurements in order to discard them from the localization process. This task could be accomplished by comparing measurements from different nodes and looking for large deviations. However, measurements can vary considerably depending on the position of the CR within the CRN. Therefore, the most intuitive way would be to group nodes into clusters and compare measurements among nodes belonging to the same cluster. Usually, outlier measurements may be (badly) detected by means of LS fitting, but it is recommended to use Least Median of Squares (LMS) fitting instead. LMS aims to minimize the median of the residue squares as in (6), increasing its robustness to deviated measurements.
6. However, the process of minimizing the median of the residue squares is prohibitive, and the final position estimation should then be obtained with a mixed solution:
a. Divide the set of n CRs into c clusters of equal size.
b. Apply the localization process described above separately in every cluster, obtaining an estimation of the position of the emitter for each cluster: (xv1, yv1, zv1), ..., (xvj, yvj, zvj), ..., (xvc, yvc, zvc).
c. Compute the median of the residue squares for each cluster j, where ri = vp·τi − fi(xvj, yvj, zvj) is the residue for node i of cluster j and fi(xvj, yvj, zvj), as in (1), is an "error-free" TDoA measure for the position estimation obtained by means of the LS method for cluster j.
d. Select as the tentative estimation (xv, yv, zv) the one given by the cluster with the lowest median of residue squares.
e. Compute the residue squares for all the n nodes considering the tentative estimation (xv, yv, zv).
f. Perform a new position estimation by applying an LS method that assigns a different weight to each node's measurement according to its residue square. This is an implementation of the Weighted Least Squares (WLS) method.

Finally, as compromised nodes are likely to report false data repeatedly, a trust mechanism should be integrated into the system so as to keep track of each node's behaviour over time.
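Computationally, steps 3 and 4 reduce to a single (weighted) least-squares solve per iteration. The following is a minimal sketch of one WLS correction step as described by (4) and (5); the matrices A, ê and R are hypothetical placeholders for the Taylor coefficients, linearized errors and measurement covariances.

    # Minimal sketch of one WLS correction step; all inputs are hypothetical.
    import numpy as np

    A = np.array([[0.9, 0.1, 0.0],     # n-by-3 Taylor coefficient matrix
                  [0.2, 0.8, 0.1],
                  [0.4, 0.3, 0.7],
                  [0.1, 0.6, 0.5]])
    e_hat = np.array([0.05, -0.02, 0.04, 0.01])   # linearized TDoA errors
    R = np.diag([1.0, 1.5, 0.8, 1.2])             # measurement covariance matrix

    W = np.linalg.inv(R)                          # weights, W = R^-1 (as in (5))
    # delta = (A^T W A)^-1 A^T W e_hat
    delta = np.linalg.solve(A.T @ W @ A, A.T @ W @ e_hat)

    guess = np.array([10.0, 5.0, 1.5])            # current estimate (xv, yv, zv)
    guess += delta                                # iterate until ||delta|| < threshold
    print("correction:", delta, "updated estimate:", guess)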
C. RSSI based PU localization

The algorithm given in [11] proposes a PU authentication system that securely and reliably delivers PU activity information to SUs. The direction of arrival (DOA) and the received power level are exploited jointly to obtain the transmitter's location and hence detect malicious devices. That is, given the locations of the primary TV stations, the secondary user can distinguish the actual primary signal from the malicious user's signal by estimating the transmitter's DOA and power level [6].

The RSS-based detection approach analyzes the PUE attack in the CR network without using any location information; thus, this detection approach does not need dedicated sensor networks [7]. The PUE attackers are assumed to be distributed randomly around the SUs. Hence, Received Signal Strength (RSS) seems to be the most suitable basis for detecting PUE attacks.

Location verification is achieved by using two techniques [3]: 1) the Distance Ratio Test (DRT), which uses the received signal strength indicator (RSSI) of a signal source, and 2) the Distance Difference Test (DDT), which uses the relative phase difference of the received signal as it is received at different receivers.

It is assumed that the location information of some of the CR nodes in the network is always known a priori, either because these nodes are fixed or because they use trusted GPS information. These CR nodes perform DRT and DDT operations within their coverage areas and also serve as Location Verifiers (LVs). The LVs exchange the location information of incumbent transmitters through a cognitive pilot channel. This authentication approach is intended to prevent the PUE attack in CR networks.
With RSS-based techniques, assuming that the transmission power and the path-loss model are known, it is possible to estimate the distance from the source to the reference node. When the transmission power is not known, the differences between the RSS measured at pairs of receivers can be considered, removing in this way the dependency on the actual transmit power. A set of at least three RSS measurements is then used to estimate the position of the emitter by applying trilateration [8]. Although RSS measurements are relatively inexpensive and simple to implement in hardware, they are susceptible to high errors due to the dynamics of indoor/outdoor environments, mainly because of multipath signals and shadowing. Now, the DRT uses a Received Signal Strength (RSS) based method, where two dedicated cognitive nodes measure the RSS of the signal source and calculate the ratio of these two RSSs to check whether it coincides with their distances to the true PU (e.g., a TV broadcast tower). With the DDT, the arrival time of the transmitted signal from the source is measured by the two dedicated cognitive nodes [7]. The product of the time difference and the speed of light is then compared to the difference of the distances from the true PU to the two dedicated nodes in order to identify the source.

Fig. 4 RSSI.

The model [11] uses localization schemes to estimate and authenticate the location of the PU. The scheme is based on the received signal power, calculated as in (8), where:
Pr - received signal power
Pt - transmitted signal power
a - constant
do - reference distance
d - calculated distance
w - weight
FC - Fusion Centre

Certain assumptions taken in this regard are: all nodes must be loosely time synchronized; the location of the PU should be fixed and known to all SUs; a Fusion Center should be used to make the decision about the presence of the PU; all SUs must be connected to the FC using a secure link; and there should be no LOS (line-of-sight) path between any SU and the PU. However, this model, like all localization-based solutions for PUEA, fails when the attacker uses a multi-antenna array or MIMO technology with directional antennas to send PU-TX-like signals to different SUs with various power levels, faking the presence of the PU. That is, a malicious user can be at a location where it has the same DOA and a power level comparable to that of the actual primary transmitter.
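To make the RSS reasoning concrete, here is a minimal sketch of distance estimation from a generic log-distance path-loss model, together with a DRT-style consistency check. The path-loss exponent, reference values, measurements and tolerance are all hypothetical; the paper's own model in (8) is not reproduced here.

    # Minimal sketch: RSS-to-distance via a generic log-distance path-loss
    # model, plus a DRT-style ratio check. All numbers are hypothetical.

    def distance_from_rss(p_rx_dbm, p_tx_dbm=30.0, pl_d0_db=40.0, d0=1.0, n=3.0):
        """Invert PL(d) = PL(d0) + 10*n*log10(d/d0), with PL = Ptx - Prx (dB)."""
        path_loss_db = p_tx_dbm - p_rx_dbm
        return d0 * 10 ** ((path_loss_db - pl_d0_db) / (10 * n))

    # two location verifiers measure the RSS of the same unknown transmitter
    d1 = distance_from_rss(-62.0)
    d2 = distance_from_rss(-71.0)

    # DRT idea: the ratio of estimated distances should match the ratio of the
    # verifiers' known distances to the true primary transmitter
    d1_pu, d2_pu = 120.0, 240.0            # assumed known geometry
    measured, expected = d1 / d2, d1_pu / d2_pu
    print(f"measured ratio {measured:.2f}, expected {expected:.2f}")
    if abs(measured - expected) > 0.2 * expected:   # arbitrary tolerance
        print("inconsistent with the primary transmitter -> possible PUEA")
    else:
        print("consistent with the primary transmitter")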
D. Time of Emission Estimation

The assumptions taken for this algorithm, as stated in [11], are that the secondary users and the Fusion Center must be loosely synchronized and must communicate securely. The Fusion Center cannot be compromised; it knows the locations of all users (secondary as well as primary) and has good computational power and storage. The model proposes ways to eliminate the attacker based on certain calculations needed for the algorithm. However, attacker capabilities must also be kept in mind: attackers can use antenna arrays, but transmitting with beam formation at different locations at different times is restricted. Multiple attackers can coordinate, as the attackers know the locations of all nodes, which can ultimately lead to an SU being compromised. The proposed approach must have sensors that measure the Time of Arrival (TOA) and a Fusion Center that estimates the Time of Emission (TOE), and it must be robust against multiple coordinated attackers, multiple compromised secondary users and a node with an antenna array. This algorithm reaches every SU that receives PU-like signals from the malicious nodes. In short:
1. The access point checks the user location.
2. The distance ratio is calculated where the user is located.
3. Beacon messages are sent frequently to check the user access probability.
4. The user probability ratio is checked in order to detect the actual user available.
5. Localization-based transmitter verification takes place in the access points.
6. Channel identification and differentiation of the users' locations are performed.
7. This reduces the faked primary user count.

E. PUE Database Assisted Detector based on Action Recognition

This model, prescribed in [10], introduces a relational database system in order to overcome the problem of intensive computation. This approach records the feature vectors of primary users in the database system; it then monitors each user's FFT (Fast Fourier Transform) sequence and compares the unknown users' feature vectors with those in the database. PUs have a limited number of feature vectors, which means the resulting database is stable and limited in size. In case an unknown user's feature vector has a matching entity in the database, the approach continues to double-check its action in the frequency domain using an artificial neural network; otherwise, this unknown user is classified as a PUE.

The algorithm makes the following assumptions: (i) all the users, including the malicious users and the primary users, are located within the same frequency band; (ii) each user's transmission power is much higher than the ambient noise in the channel; (iii) the actions and the corresponding feature vectors of the primary users are known, and they are different from those of the other users.

Two different experiments can be conducted in order to validate the performance of the database-assisted classifier. The first experiment uses a computer simulation based on Simulink, while the second is based on a hardware implementation using the Universal Software Radio Peripheral (USRP) Software-Defined Radio (SDR) platform.

In the Simulink experiment, the classification time is highly related to the number of primary users: when there are more primary users in the system, it takes more time to reach a conclusion. However, with a larger number of primary users, the growth is approximately linear, because the classification time is dominated by the database search time. Higher SNR (signal-to-noise ratio) values yield better algorithm performance in terms of successfully classifying primary signals and PUE signals; the approach is very reliable and robust. On the SDR platform, the percentage of correct classification can be as high as 87.8%, which means that the majority of the classification results are correct, so the proposed algorithm possesses the potential to be a viable PUE detector operating under real-world conditions. Hence, it is a good candidate for real-world implementation.

F. Intense Explore System Model

For the novel Intense Explore model [12], an infrastructure-based network of CRs is considered, where multiple nodes (or secondary users, SUs) may be associated with a centralized fusion centre. For the sake of simplicity, the existence of only one fusion centre is assumed. The fusion centre collects the diagnosis results from the cooperative secondary users at regular intervals. The main objective of diagnosing neighbouring secondary users' signals is to anticipate that any of these secondary users may become a malicious user in the future and threaten the cognitive radio network with a PUE attack.
G. Lightweight IDS using CuSum

Conventional IDSs (Intrusion Detection Systems) usually follow either misuse-based or anomaly-based attack detection methods. The misuse-based detection method uses signatures of already known attacks; however, this approach cannot discover new types of attacks effectively [13]. On the other hand, as its name implies, the anomaly-based detection methodology relies on finding an "anomaly", which represents an abnormal mode of operation in the system. However, many of the existing statistical detection techniques may not be adequate for designing an IDS for a CRN, as it presents a unique challenge. Specifically, in a CRN, a centralized IDS may not be able to detect a malicious attack and notify the secondary users quickly enough; it is therefore important to facilitate lightweight yet effective IDSs in the secondary users themselves. The approach uses time-series Cumulative Sum (CuSum) hypothesis testing [13], chosen for its low complexity and overhead. Each secondary user is assumed to have an IDS. The IDS operates in two steps, namely a learning or profiling phase and a detection phase.

Learning phase: To effectively detect anomalies due to various types of attacks, the IDS needs to be designed in such a manner that it may learn the normal behaviour of protocol operation, traffic flow, primary user access time, packet delivery ratio (PDR), signal strength (SS), and so forth. The IDS may learn this information by constructing a statistical profile during normal CRN conditions or under an acceptable (i.e., low) level of suspicious activity. The acquired information can facilitate the detection phase of the IDS in discovering unknown intrusions or attacks against the targeted CRN.

Detection phase: The proposed IDS detection phase relies on finding the point of change in the CRN operation as quickly as possible under an attack. Assume that the IDS operates over equal time-rounds ∆n (where n = 1, 2, 3, ...). Let the mean of Fn during the profiling period be represented by m. The idea is that the IDS continues to monitor for a significant change in the value of m that can be considered as the influence of an attack. m remains close to one until an anomaly occurs. However, an assumption of the non-parametric CuSum algorithm is that the mean value of the random sequence should be negative during normal conditions and become positive upon a change. Therefore, a new sequence Gn = β − Fn is obtained, where β is the average of the minimum/negative peak values of Fn during the profiling period. During an attack, the increase in the mean of Gn can be lower bounded by h = (2β). The CuSum sequence Yn is then expressed as follows:

Yn = (Yn-1 + Gn)^+, with Y0 = 0, (12)

where x^+ = x if x > 0; otherwise x^+ = 0. A large value of Yn strongly implies an anomaly. The detection threshold θ is computed as in (13), where tdes denotes the desired detection time, which should be set to a small value for quickly detecting an anomaly.

At the detection phase, the IDS computes Yn over time. Yn remains close to zero as long as normal conditions prevail in the CRN. Upon an attack, Yn starts to increase. When Yn exceeds θ, and as long as the SS measured at the secondary user is high, the IDS generates an alert of a possible attack. The IDS will thus be able to detect the attack with low detection latency.
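Here is a minimal sketch of the detection step, using the CuSum recursion reconstructed in (12) above; the values of β, θ and the Fn sequence are hypothetical.

    # Minimal sketch of non-parametric CuSum detection; hypothetical inputs.
    def cusum_alert(f_sequence, beta=0.4, theta=0.5):
        """Return the first round at which Y_n exceeds theta, else None."""
        y = 0.0
        for n, f_n in enumerate(f_sequence, start=1):
            g_n = beta - f_n                # G_n = beta - F_n
            y = max(y + g_n, 0.0)           # Y_n = (Y_{n-1} + G_n)^+
            if y > theta:
                return n                    # possible attack detected
        return None

    # F_n stays close to 1 under normal conditions, dropping under an attack
    rounds = [1.0, 0.98, 1.01, 0.3, 0.2, 0.25, 0.1, 0.15]
    print("alert at round:", cusum_alert(rounds))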
IV. CONCLUSION

In this paper, an overview of the Primary User Emulation attack and its design strategy has been given. In order to overcome this attack, found in Cognitive Radio Networks, a survey of some of the best techniques has been briefly presented; a gist of the methods is given in Table 1. Further work will be to develop prototypes of such methodologies.
2018-12-05T11:14:54.675Z
2015-09-30T00:00:00.000
{ "year": 2015, "sha1": "218eab76f52cbd70440f3b88b0ec00490c4feed1", "oa_license": "CCBYSA", "oa_url": "https://mgesjournals.com/ijsrtm/article/download/192/186", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "218eab76f52cbd70440f3b88b0ec00490c4feed1", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Engineering" ] }
233863764
pes2o/s2orc
v3-fos-license
Pregnancy pharmacoepidemiology: How often are key methodological elements reported in publications?

Purpose: Publications are an important information source for clinicians, researchers, and patients. Key methodological elements must be reported for maximum transparency. We identified key methodological elements necessary for fully understanding pharmacoepidemiological research in pregnancy and quantified the proportion of studies that report these elements in a sample of publications. Methods: Key methodological elements were identified from guidelines from regulatory agencies, literature, and subject-matter knowledge: source of information to determine pregnancy start; mother- or father-infant linkages (process, success rate); unit of analysis; and whether non-live births and fetuses with various anomalies were included in the study population. We conducted a literature review for recent observational studies on medical product utilization or safety during pregnancy and estimated the prevalence of reporting these elements. Results: Data were extracted from a random sample of 100 publications; 8% were published in epidemiology/pharmacoepidemiology journals; 85% were medical product-safety studies. Of included publications, 43% reported the source for determining pregnancy start; 57% reported whether the study population included multifetal pregnancies; 39%, whether it included more than 1 pregnancy per woman; 27%, whether it included fetuses with chromosomal abnormalities; 60%, fetuses with major congenital malformations; and 93%, non-live births. Of the 20 studies with mother-infant linkage, 35% described the process; 21% reported the linkage success rate. Among studies with more than one pregnancy/offspring per woman, 22% reported methods addressing sibling correlation. Conclusions: In this sample of pregnancy-related pharmacoepidemiology publications, completeness of reporting can be improved. A pregnancy-specific checklist would help to increase transparency in the dissemination of study results.

INTRODUCTION

Publications are an important source of information for clinicians, researchers, and patients seeking information on medical product utilization and safety in pregnancy (hereafter, "pregnancy pharmacoepidemiology"). In order for those publications to have their full impact, reporting of the methods has to be complete. However, features of study design that are specific to research on the utilization and safety of medical products in pregnancy are sometimes omitted from publications. Such key design elements include the source of information on pregnancy start date (needed to understand the precision of the study exposure window) and the exact composition of the study population (e.g., whether multifetal pregnancies, fetuses/infants with chromosomal abnormalities, or fetuses/infants with minor congenital malformations were included in the study population), which can affect the prevalence of some outcomes and potentially impact relative risk or prevalence ratio estimates. Poor reporting can be perceived to indicate poor study quality, 1 thereby hampering the reader's ability to assess the risk of bias. Also, the results of studies that do not report their key methodological attributes in a complete manner can be difficult to interpret. Missing or incomplete information can limit researchers' ability to compare results across studies and can also impact the acceptance of and understanding by clinical, research, and patient audiences.
Thorough reporting in health-related research is currently considered an important characteristic of a well-written study report, as evidenced by the 457 guidelines on health research reporting listed in the EQUATOR Network's library of reporting guidelines for health research. 2 Documents in this library include guidelines for reporting on research on obstetrics/pregnancy and on pharmacoepidemiology, but not specifically on pregnancy pharmacoepidemiology. 6

In this review, we first identified key methodological elements that we deemed should be reported by studies on pregnancy pharmacoepidemiology, taking into consideration regulatory guidelines from the United States (US) Food and Drug Administration and the European Medicines Agency, relevant literature, and subject-matter knowledge. We then assessed the prevalence of reporting of those elements in a sample of publications.

Key methodological elements in pregnancy pharmacoepidemiology

The authors reviewed the recently updated US and European regulatory guidelines for conducting observational studies on the pregnancy safety of medical products 3,4 and the recent literature, and used their expertise to identify methodological elements that are considered key in research on pregnancy pharmacoepidemiology. When there was disagreement, the authors discussed the issue until reaching consensus. The research team initially identified a relatively extensive list of data elements. Some elements were later found to be duplicative or not informative. The final list consisted of 17 key elements organized in four domains: (1) source of information on start and end of pregnancy; (2) composition of the study population, including, as separate elements, whether multifetal pregnancies, more than one pregnancy per woman, fetuses with chromosomal abnormalities, fetuses with major or minor malformations, and non-live births were included in the study population; (3) mother-infant and father-infant linkages, including, for each linkage that was sought, whether the linking process was described and the success rate was reported, and, if applicable, whether information had been obtained from maternal or infant files; and (4) analytical aspects, including the explicit mention of the unit of analysis for pregnancy (e.g., pregnancy, unique woman) and fetal or infant outcomes (e.g., fetus, child, singleton pregnancy), the gestational age at start of follow-up (e.g., median gestational age at first contact with the health care system, mean gestational age at enrollment), and whether intrafamily correlation had been considered in the study design or analysis (e.g., restricting the study population to one pregnancy per woman, implementing robust variance analyses).

For practical reasons, this literature review had a target size of 100 publications.
To restrict the number of publications, the following process was implemented: (1) all retrieved records underwent level 1 screening; (2) each publication that passed level 1 screening and was available for level 2 screening was assigned a random number with a uniform distribution between 0 and 1, and publications were sorted from smallest to largest on this number; and (3) publications underwent level 2 screening in this order until reaching the target size of 100 included publications.

Data extraction and reporting of key methodological elements

Data were extracted by one researcher and quality checked against the original source by a second researcher. Data extracted included study characteristics (e.g., study size, study design, whether the study had been requested by a regulatory agency, and whether it was published in an epidemiology or pharmacoepidemiology journal or in a journal with a different focus) and whether the key methodological elements were explicitly reported in the publication. For example, for the element "Has the intrafamily correlation been considered?" the data extractor recorded "yes" if the publication mentioned that only one sibling per family was included, that robust variance was used to account for sibling correlation, or that the study used a sibling design, among other possible strategies. If the publication did not include information on the topic, the data extractor recorded "no." Some methodological elements were not applicable to a given study (e.g., in studies of stillbirth, linkage to infant files is not possible); additional criteria are listed in Appendix B, Notes tab. When a publication reported on more than one population or analysis, the data extractor recorded "yes" if the information on a given element was reported for at least one of the populations or analyses. Other specifications for data extraction can be found in Appendix B, Look-ups and Notes tabs.

We calculated the prevalence of reporting of each key element as a percentage in which the denominator was the number of publications for which the element was applicable, and the numerator was the number of publications that reported on that element.
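As a small illustration, the random-order level-2 screening described above can be expressed in a few lines of Python; the names used here (passes_level2, target) are hypothetical:

import random

def sample_included(publications, passes_level2, target=100, seed=42):
    """Screen level-1 survivors in a random order until `target` are included."""
    rng = random.Random(seed)
    ordered = sorted(publications, key=lambda _: rng.random())  # uniform(0, 1) keys
    included = []
    for pub in ordered:
        if passes_level2(pub):          # level 2 eligibility screening
            included.append(pub)
        if len(included) == target:
            break
    return included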
Literature search and selected articles

Our search identified 1,981 unique articles for level 1 screening, of which 406 progressed to level 2 screening; screening continued in the random order described above until 100 publications, published in 2015-2019 (online publication date in some instances), were included. Of publications in this review, 8% were published in journals with a focus on epidemiology or pharmacoepidemiology, and 92% were reported in general medical journals, journals focusing on a medical specialty, or journals with another focus. No publication reported that the study had been conducted to fulfill a request from a regulatory agency, but one was conducted as part of an evaluation of a state vaccination program. 6

Of the 100 publications, 14% were for medical product-utilization studies; 86% had a medical product-safety component. Data sources used were health care claims data (14% of publications), electronic medical records (14%), paper medical records (9%), health care registries from the Nordic countries (8%), and pregnancy exposure registries (3%); the remainder used other data sources. The median study size was 2025 women, pregnancies, or offspring (mean, 126,843).

Reporting of key methodological elements

Denominators for percentages reported in this section vary, as some elements were applicable to only a subset of the 100 included studies (Tables 1-4).

Source of information on start and end of pregnancy: Of the publications for which these elements were applicable, 43% reported on the source for determining the start of pregnancy, and 57% reported on the source for determining the end of pregnancy (Table 1).

Composition of the study population: Reporting in this domain ranged from 27% of publications reporting on whether fetuses with chromosomal abnormalities were included in the study population to 93% reporting on whether non-live births were included (Table 2). Of publications included, 57% reported whether multifetal pregnancies were included in the study population, 39% reported whether more than one pregnancy was included in the study population, 60% reported whether fetuses with major malformations were included, and 36% reported whether fetuses with minor malformations were included.

Mother-infant and father-infant linkages: Mother-infant linkage was sought in 20 publications; 35% of the publications described the process and 21% reported the success rate (Table 3). Of the 20 publications, 65% specified which information had been obtained from maternal records and which information had been obtained from infant records. Father-infant linkage was sought by two studies in this sample; one described the process and the other reported the linkage success rate.

Analytical aspects: The unit of analysis was reported in 98% of the publications with pregnancy outcomes and in 94% of the publications with fetal or infant outcomes (Table 4). Gestational age at start of follow-up was reported by 43% of publications, and whether the intrafamily correlation was considered in analysis or by design was reported by 22% of publications.

DISCUSSION

To our knowledge, no published consensus exists on which key methodological elements should be reported in studies on the utilization or safety of medical products in pregnancy. Based on regulatory guidelines, the literature, and subject-matter knowledge, we developed a list of key methodological elements for pregnancy pharmacoepidemiology that includes elements related to the source of information on start and end of pregnancy, composition of the study population, mother-infant and father-infant linkages, and analytical aspects. In a sample of 100 publications on pregnancy pharmacoepidemiology published in 2015-2019, we observed that completeness of reporting was heterogeneous across these key elements. At one end of the spectrum, nearly all publications reported the unit of analysis for pregnancy outcomes or whether non-live births were included in the denominator of calculations.
At the other end, only about one fifth of publications reported on whether intrafamily correlation had been considered or on the success of mother-infant linkage. Father-infant linkage was sought in only two publications.

We propose that a concise checklist for pregnancy pharmacoepidemiology studies might help improve the reporting of these key elements. A checklist could be structured like a section of the ENCePP Checklist for Study Protocols, 7 with tick boxes for "yes," "no," and "not applicable" and a cell to specify the document section in which the information is provided. Such a checklist could be applied to study protocols and publications.

Some of the elements that we identified are not applicable to all studies. For example, the source of information for pregnancy outcomes is irrelevant in a cross-sectional study that describes current use of medications in hospitalized women who are currently pregnant. 8 We did not consider that reporting on mother-infant linkage was applicable to hospital-based studies or to studies that derived their data from questionnaires. In general, though, we believe that for studies that link maternal and infant records (or records from other family members), the process and success rate should be reported, even if the process is simple and the success rate is expected to be close to 100%, as may be the case with countrywide Nordic health care registries. This is because some readers may be unfamiliar with the data sources used in a study, and processes may change over time. Also, reference to a previously published paper does not help a reader who may not have immediate access to that publication. Although we did not assess completeness of reporting in relation to linkage to registries of major congenital malformations, 9 with vital statistics records, 10 or other data sources, ideally, the linkage methods and success rates should be reported.

None of the sampled publications reported that the underlying study had been conducted to meet regulatory requirements, although our sample included pregnancy exposure registries, which are generally established to meet such requirements. One study was conducted as part of an evaluation of a state vaccination program. 6 For transparency and to provide context for the study design, we recommend that information on whether studies have been conducted to meet regulatory requirements be included in scientific publications.

We noticed ambiguity in the use of "multiple pregnancies," which was used to refer to multifetal pregnancies 11,12 but also to more than one (singleton or multifetal) pregnancy in the same woman.
[13][14][15] There is no ambiguity in obstetrical practice, where one pregnancy per woman is evaluated at a given time, but ambiguity can arise in studies where each woman can contribute more than one pregnancy during her longitudinal follow-up. Although the context generally provides clarity on the intended meaning, we recommend avoiding ambiguous language.

Limitations of this review include the relatively small sample of publications included in the literature review. Although the results presented here may not be generalizable to the entire body of publications on pregnancy pharmacoepidemiology research, they can support recommendations regarding completeness of reporting. A degree of subjectivity was involved in deciding whether some elements were applicable to a given study. To address this, the authors developed internal guidance in relation to specific types of data sources (presented in Appendix B, Notes tab) and decided that, for consistency, when in doubt, they would consider that the element was applicable. For this reason, this review may have overestimated the extent to which the elements were not reported.

Furthermore, authors of included studies may not have reported some elements in our list because they may not have considered these elements to be relevant for their study. This might have been true for the element "whether pregnancies carrying fetuses with chromosomal abnormalities or malformations were included in the study population." For example, for studies in which the exposure or the outcomes were not obviously related to chromosomal abnormalities or malformations, or in drug utilization studies, one might assume that such pregnancies were included in the study without an explicit mention. In other publications, authors may have reported that infants with major malformations were excluded, implying that infants with minor malformations were included. We believe that more thorough reporting would bring more clarity than harm.

In addition, we observed variation in the amount of information provided on gestational age at the start of follow-up. For example, "The median gestational age at recruitment was 39 days (range, 4-91 days)..." 16 is more specific than "Pregnant women who were diagnosed...at 16-20 weeks' gestation." 17 However, both contain information on gestational age at enrollment.

This review ascertained key methodological elements that are specific to pregnancy pharmacoepidemiology. Other elements, common to observational studies in other areas, were not assessed; this was done to avoid overlap with existing checklists and to keep our list of elements as focused as possible.

In conclusion, completeness of reporting the methods used in pregnancy pharmacoepidemiology studies can be improved. This would facilitate the interpretation of study results and the comparison of results across studies. A purpose-made checklist would help to increase transparency in the dissemination of results of studies on utilization or safety of medical products in pregnancy.
ETHICS STATEMENT

All authors are employees of RTI Health Solutions, a unit of RTI International, which is an independent, not-for-profit organization that conducts work for government, public, and private ...

TABLES

Note (Tables 1-4): A total of 100 publications were reviewed. For some publications, some elements evaluated in this review were not applicable; those publications were removed from the denominator of percentages reported in that row. For example, for a cross-sectional study on the utilization of medications in hospitalized women who are currently pregnant, the element "source of information on date of birth" was not applicable; this publication was removed from the denominator for the column "Studies with information." Cell color key: orange, cells with percentages of 0%-25%; yellow, 26%-75%; green, 76%-100%.

[Table rationale entries: to quantify any loss of study participants; to assess the potential for bias from loss of study participants that is differential on key characteristics.]
Figure 1. Screening and selection of articles into the literature review. Note: This is a PRISMA chart. 33 Abbreviation: PRISMA, Preferred Reporting Items for Systematic Reviews and Meta-Analyses.

Eligibility criteria:
- The abstract and the full-text publication are written in English.
- The publication presents original observational research that focuses on the utilization or safety of a drug/vaccine/biologic/device in pregnancy and presents numerical results (i.e., study protocols are not eligible; studies on surgery or procedures are not eligible).
- Exposure groups (i.e., the exposed and the unexposed) are based on the use of medical products in pregnancy (with or without a diagnosis of a condition); studies that determine exposure groups based on diagnoses only are not eligible.
- Studies using any type of data source are eligible.
- Additional criteria for level 2 screening: none.
2021-05-07T01:07:26.736Z
2021-05-06T00:00:00.000
{ "year": 2021, "sha1": "f1df6f92dd49b25fe639b9718f22ad5979c3d099", "oa_license": "CCBYNC", "oa_url": "https://www.medrxiv.org/content/medrxiv/early/2021/05/06/2021.05.04.21256602.full.pdf", "oa_status": "GREEN", "pdf_src": "MedRxiv", "pdf_hash": "f1df6f92dd49b25fe639b9718f22ad5979c3d099", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
226193210
pes2o/s2orc
v3-fos-license
Efficacy of afatinib for pulmonary adenocarcinoma with leptomeningeal metastases harboring an epidermal growth factor receptor complex mutation (exon 19del+K754E)

Abstract

Rationale: Liquid biopsy of cerebrospinal fluid (CSF) and sequencing of cell-free DNA have rarely been used to identify epidermal growth factor receptor (EGFR) mutations, which can guide the design of precise, personalized treatment for patients with leptomeningeal metastasis from lung adenocarcinoma. Patient concerns: A 42-year-old woman with lung adenocarcinoma and leptomeningeal metastasis was admitted to our hospital on March 31, 2019. She exhibited no response to treatment with gefitinib, osimertinib, or chemoradiotherapy and was in critical condition, with an expected survival of <4 weeks. Diagnosis: Next-generation sequencing of CSF and peripheral blood samples identified an EGFR complex mutation (exon19del+K754E). Interventions: On April 10, 2019, the patient started oral afatinib (40 mg po qd), but she developed a grade III oral mucosal reaction 1 week later. The afatinib dose was reduced to 30 mg po qd. Outcomes: At the follow-up examination on May 15, 2019, the patient reported relief from headaches. Enhanced magnetic resonance imaging revealed a reduction in abnormal leptomeningeal enhancement, and the CSF pressure and carcinoembryonic antigen levels were also reduced. The patient continued to respond to afatinib treatment (30 mg once daily) with minimal adverse effects. Lessons: This is the first case report of clinical improvement after afatinib treatment in a patient with lung adenocarcinoma and leptomeningeal metastasis harboring an EGFR complex mutation (exon19del+K754E), and thus provides a clinical reference for treatment with afatinib of cancers harboring EGFR compound mutations.

Introduction

The prognosis of patients with non-small cell lung cancer (NSCLC) and leptomeningeal metastasis is poor. The overall incidence of leptomeningeal metastasis in patients with NSCLC has increased in recent years to approximately 3.4% to 3.8%, [1][2][3] but the incidence is much higher (9.4%) in patients with mutations in the epidermal growth factor receptor (EGFR). [2] The sensitivity of tumors to EGFR tyrosine kinase inhibitors (EGFR-TKIs) depends on the specific EGFR gene mutations, [4] and not all patients with EGFR mutations show responses to EGFR-TKIs. [5] We describe here the case of a patient with lung adenocarcinoma and leptomeningeal metastasis harboring an EGFR complex mutation (exon19del+K754E). Treatment with oral afatinib successfully reduced the leptomeningeal metastasis, suggesting that tumors harboring the EGFR exon19del+K754E complex mutation might be sensitive to second-generation EGFR-TKIs.

Case presentation

A 42-year-old woman with no history of smoking or familial neoplasms presented at the Department of Medical Oncology in February 2018 with a 1-month history of headache, cough, and chest pain. Chest computed tomography (CT) showed a lesion in the upper right lung, suggesting peripheral lung cancer. Enhanced magnetic resonance imaging (MRI) revealed an abnormal enhanced signal in the pons and left occipital lobe, indicating possible intracranial metastasis. CT-guided percutaneous biopsy of the lung tumor and pathological analysis confirmed lung adenocarcinoma. Next-generation sequencing (NGS) of the lung tumor revealed an EGFR exon19 L747_E749del+K754E (c.2260A>G) mutation.
The patient began molecular targeted therapy with oral gefitinib (250 mg once daily [qd]), a first-generation EGFR-TKI, and the intracranial metastasis was treated with gamma knife surgery. After 5 months on oral gefitinib, the intracranial lesions progressed. The patient was switched to the third-generation EGFR-TKI osimertinib (80 mg qd). After 2 months on oral osimertinib, the intracranial lesions progressed. She received 2 cycles of combined intravenous chemotherapy and anti-angiogenesis therapy (pemetrexed 800 mg, carboplatin 550 mg, and bevacizumab 550 mg), but the disease continued to progress. The patient was switched to 3 cycles of docetaxel (120 mg) and bevacizumab (500 mg). The patient's clinical symptoms continued to deteriorate, with aggravating headache and intermittent loss of consciousness. The patient again visited our emergency department and was admitted on March 31, 2019.

Chest CT showed a lesion in the upper right lung indicative of lung cancer, with enlarged lymph nodes in the hilus of the right lung, indicating metastasis (Fig. 1A). Enhanced MRI was performed on April 1, 2019, and showed a mild abnormal enhancement in the pons, suggestive of intracranial metastasis. Linear fluid-attenuated inversion recovery (FLAIR) hypersignals around the brain stem in both cerebral hemispheres and within the sulcus of the cerebellar hemisphere were indicative of leptomeningeal metastasis (Fig. 2A). Further lumbar puncture and CSF analyses were performed (Figs. 3 and 4), and next-generation sequencing of CSF and peripheral blood identified the EGFR complex mutation (exon19del+K754E); on April 10, 2019, the patient started oral afatinib (40 mg po qd), which was reduced to 30 mg po qd after she developed a grade III oral mucosal reaction 1 week later. The patient reported headache relief, and she was discharged on April 25, 2019. The patient continued to take afatinib 30 mg po qd at home.

The patient again visited our department and was admitted on May 14, 2019. A chest CT performed on May 15, 2019, showed no progression of lung cancer or metastasis (Fig. 1B), and enhanced MRI showed reduced signs of leptomeningeal metastasis (Fig. 2B). A lumbar puncture performed on May 18, 2019, showed a CSF pressure of 110 mmH2O and a carcinoembryonic antigen (CEA) level of 1.09 ng/ml (Figs. 3 and 4). The patient was discharged on May 29, 2019, and continued to take afatinib 30 mg po qd at home. Chest CT and enhanced MRI were performed on June 19, 2019, and showed further reductions in signs of lung and leptomeningeal metastases (Fig. 1C, 2C), consistent with partial remission. The patient's headache had also improved, and the Eastern Cooperative Oncology Group performance status (ECOG PS) was 1. She experienced grade I rash and oral ulcer adverse reactions, but she continued to take afatinib 30 mg po qd at home.

Discussion

Lung adenocarcinoma is the most common type of NSCLC. EGFR is one of the most frequently mutated driver genes involved in the pathogenesis of NSCLC, with mutations present in 20% and 40% of the Caucasian and Asian patient populations, respectively. EGFR mutations are particularly common in non-smoking Asian female patients with adenocarcinoma. [5,6] EGFR mutations are most frequently located in exons 19 and 21; indeed, exon 19 in-frame deletions and the exon 21 L858R point mutation account for 90% of all EGFR somatic mutations (collectively referred to as classical mutations). Of the remaining (non-classical) mutations, about 6% are complex mutations. [7,8] In the present study, NGS of a lung tumor biopsy sample identified an EGFR L747_E749del+K754E complex mutation. The patient was treated with gefitinib and then osimertinib, but the intracranial metastasis and clinical symptoms were stable for only 5 and 2 months, respectively.
The results of the Phase III FLAURA study (NCT02296125) [9] in patients with newly diagnosed advanced NSCLC and central nervous system metastasis showed a median progression-free survival (PFS) of 13.9 months following treatment with first-generation EGFR-TKIs (gefitinib or erlotinib) and >16.5 months with osimertinib. However, the PFS of our patient after treatment with gefitinib and osimertinib was shorter than that of the patients in the FLAURA study treated with first- and third-generation EGFR-TKIs. Notably, genetic testing of the patient did not reveal any other driver mutations that could account for drug resistance. We therefore speculate that the lack of effect of gefitinib and osimertinib in our patient may be due to the presence of the non-classical mutation K754E. A search using the molecular analysis tool PolyPhen-2 (http://genetics.bwh.harvard.edu/pph2/) indicated that EGFR p.K754E is a potentially damaging mutation (position-specific independent count [PSIC], 0.607; sensitivity, 0.80; specificity, 0.83). Thus, there is a need to identify drugs with efficacy against tumors carrying the EGFR p.K754E mutation.

The LUX-Lung3 and LUX-Lung6 clinical trials showed that afatinib improved the clinical symptoms and PFS in patients with NSCLC harboring classical and non-classical EGFR mutations. [10] The LUX-Lung2, LUX-Lung3, and LUX-Lung6 clinical trials in patients with advanced NSCLC harboring non-classical mutations also showed responses to afatinib. Based on these studies, afatinib was approved by the US Food and Drug Administration to treat metastatic NSCLC with non-classical EGFR mutations. [11,12] Ma et al [13] reported on the efficacy of afatinib treatment in a patient with NSCLC and a non-classical EGFR mutation (G719A). Frega et al [8] have also reported on the response to afatinib treatment of a patient with NSCLC carrying 3 non-classical mutations: exon 18 E709K and an exon 21 L833V_H835L complex mutation. These findings prompted us to treat our patient with afatinib upon identification of the EGFR 19del+K754E mutation.

After 1 week on afatinib, the patient's headache was improved, although she also developed grade I rash and grade III oral mucosa reactions. The patient continued to take afatinib (30 mg po qd) for 1 month, and enhanced MRI performed 2 months later revealed reduced enhancement of the leptomeningeal metastasis, with persistently stable pulmonary lesions. The patient's headache continued to improve, and the CSF pressure and CEA level were both reduced. These findings are consistent with suppression of leptomeningeal metastasis, and we conclude that the EGFR p.K754E non-classical mutation may be associated with the response to afatinib treatment.

Conclusion

To the best of our knowledge, the current study represents the first case report of a response to afatinib treatment in a patient with lung adenocarcinoma and leptomeningeal metastasis harboring an EGFR L747_E749del+K754E complex mutation. This report should serve as a clinical reference for the use of afatinib to treat tumors harboring non-classical mutations.

Author contributions: MN, LJ, and LL assisted in drafting the manuscript. All authors read and approved the final manuscript. Conceptualization: Rong Jiang. Data curation: Chunhua Ma, Mei Liu.
2020-10-29T09:04:34.302Z
2020-10-23T00:00:00.000
{ "year": 2020, "sha1": "3a51f45d47d0f567bd811e0bc6b65188b605c69b", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1097/md.0000000000022851", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b0db623876fcfde63db4f9c06e861a61d30bba33", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
251299564
pes2o/s2orc
v3-fos-license
Reciprocal Data Transformations and Their Back-Transforms

Variable transformations have a long and celebrated history in statistics, one that was rather academically glamorous at least until generalized linear models theory eclipsed their nurturing normal curve theory role. Still, today it continues to be a covered topic in introductory mathematical statistics courses, offering worthwhile pedagogic insights to students about certain aspects of traditional and contemporary statistical theory and methodology. Since its inception in the 1930s, it has been plagued by a paucity of adequate back-transformation formulae for inverse/reciprocal functions. A literature search exposes that, to date, the inequality E(1/X) ≥ 1/E(X), which often has a sizeable gap captured by the inequality part of its relationship, is the solitary contender for solving this problem. After documenting that inverse data transformations are anything but a rare occurrence, this paper proposes an innovative, elegant back-transformation solution based upon the Kummer confluent hypergeometric function of the first kind. This paper also derives formal back-transformation formulae for the Manly transformation, something apparently never done before. Much related future research remains to be undertaken; this paper furnishes numerous clues about what some of these endeavors need to be.

Introduction

Early comprehensive, fruitful statistical advances in normal curve (i.e., Gaussian distribution; e.g., [1]) theory, which benefits from the relative simplicity of its univariate and multivariate mathematical statistics, allowed it to dominate most sectors of statistical analysis methodology for many decades. The advent of its affiliated normal approximation power transformation technique [e.g., Box and Cox [2], who (especially p. 212) present a brief early history of data transformations, tracing these techniques back at least to 1937 (work by Bartlett) and crediting Tukey for considerable contributions about them prior to the publication of their classic Box-Cox paper; others they recognize include Anscombe, Kleczkowski, Moore, and Tidwell; Rojas-Perilla [3] provides an insightful contemporary update to their story], which extended its suitability to many of the hundreds of other univariate random variable (RV) distributions that exist (e.g., [4][5][6]), preserved its prominence until, for example, Nelder and Wedderburn's formalization and implementation of generalized linear model (GLM; [7]) theory in the early 1970s [8]. Regardless of the data analysis specification error risks affiliated with approximations, recognition of especially normal curve theory's pedagogic value continues to this day [9]. Normal curve theory treats continuous interval/ratio measurement scale RVs over a (-∞, ∞) support domain, with Box-Cox [2] power and Manly ([10]; also see [11]) exponential transformations, as well as other normal approximations (e.g., [12]), artificially expanding its practical applicability to more limited domains such as the truncated support [0, ∞). Griffith [13], for example, discusses RV transformations together with their accompanying back-transformations, employing fractional calculus to achieve such final results. A serious drawback of this approach is that it applies only to non-negative Box-Cox power transformation exponents.
A study [14] using 2010 United States socio-economic/demographic census data, by census tracts (i.e., areal units), for both Dallas County, TX (529 tracts), and the Dallas-Fort Worth-Arlington Metropolitan Statistical Area (DFW MSA; 1324 tracts) containing it reveals that roughly a third of the 70 (i.e., 35 × 2) selected but commonly utilized attributes measured as either percentages or densities (two time-honored standardization adjustments to geospatial and other aggregate data to minimize size effects) require a negative (i.e., inverse, or reciprocal: one having a constant in its numerator and an algebraic expression in its denominator) rather than a non-negative power transformation (Table 1; also see Appendix A). The sizeable proportion of reciprocal transformations reported here testifies to the importance of establishing appropriate back-transformations for this case, too, with a focus on inverse moments rather than the more general inverted distributions (e.g., [15]).

[Table 1 fragment: "solely Dallas-Fort Worth-Arlington MSA: 1, 0, 3, 0." Note: δ denotes a translation/shift parameter, γ denotes a non-negative exponent (a value of zero implies the logarithmic transformation), and β denotes a negative exponential slope coefficient. Comments: no attribute renders a single inverse transformation type across all four specimen attribute variable categories; the LN transformation replaced exponents extremely close to zero (i.e., |γ| ≤ 0.01). † Royston [17] devised an algorithm that extends sample size diagnostics from 50 to 2000.]

Basic Concepts and Methodology

The central issue here concerns the inverse first moment (e.g., [18][19][20]). Although Stephan [21] derives E(1/Y) results for non-negative binomial RVs (i.e., Y = 0 does not exist) in the context of negative exponents, a broad interest in inverse moments barely predates Box and Cox, with the first published mention of this phraseology apparently appearing in 1962 (retrieved via a MATHSCINET search on 29 June 2022). Initial attention concentrated on continuous univariate RVs (e.g., [22]) because E(1/Y) does not exist for a discrete univariate RV Y mass function with non-zero mass at Y = 0. Nevertheless, Stephan [21] treats a modified binomial RV, and Kabe [23] devises an expression for truncated binomial and Poisson RV r-th-order inverse moments, with both continuous and discrete research themes being pursued throughout the subsequent decades (e.g., [24][25][26]). Meanwhile, the more recent literature reflects somewhat of a preoccupation with individual RVs (e.g., [27]).

Cressie et al. [28] highlight that the moment generating function of a RV holds information about both its positive and negative integer moments. Unfortunately, as Griffith [13] demonstrates for positive exponent Box-Cox transformations, most empirical transformations involve fractional moments. Regardless, the first relevant proposition is as follows: given certain regularity conditions, an inverse moment can be approximated by its inverse; i.e., E(1/Y) ≈ 1/E(Y). The critical condition is that E(Y) exists and is non-zero. Furthermore, the probability density/mass function support must be positive for E(Y) always to be real. These requirements are the reasons authors devote so much writing about this topic to positive RVs. However, inclusion of a translation (i.e., shift) term δ in a two-parameter transformation allows Y to take on zero, or even negative, values, as long as the minimum Y value plus δ is positive.
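The practical size of the gap between the two sides of this approximation is easy to demonstrate numerically; the short simulation below (constructed here for illustration, not drawn from the cited studies) contrasts E(1/Y) with 1/E(Y) for a right-skewed positive RV:

import numpy as np

rng = np.random.default_rng(0)
y = rng.lognormal(mean=0.0, sigma=1.0, size=1_000_000)  # positive, right-skewed RV

lhs = np.mean(1.0 / y)   # E(1/Y); the exact value is exp(sigma^2 / 2), about 1.649
rhs = 1.0 / np.mean(y)   # 1/E(Y); the exact value is exp(-sigma^2 / 2), about 0.607
print(lhs, rhs)          # E(1/Y) >= 1/E(Y), here by a wide margin

For this lognormal example the two quantities differ by a factor of e^(sigma^2), about 2.7, underscoring why the near-equality conditions discussed above matter.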
Within the context of maximum likelihood estimation, including a translation parameter δ creates the typical non-regular estimation problem in which the likelihood function becomes unbounded as this parameter approaches −y_min, the minimum RV Y sample value [29] (p. 185). Seber and Wild note that the maximum likelihood estimate of δ is −y_min, exacerbating this situation, and comment that a "satisfactory estimation procedure is needed" [30] (p. 72). An alternative part of the associated complication is that a nonlinear trade-off frequently exists between estimates of the power exponent γ and the translation parameter δ, whereas another is that the range of values for the modified RV depends upon the resulting estimate of δ.

Within this preceding setting, Hu et al. [31] and Yang et al. [26] propose that, for non-negative RVs Y, the quantity [δ + E(Ȳ)]^(−γ), where Ȳ is the sample mean, asymptotically approximates the inverse moment E[(δ + Ȳ)^(−γ)], if RV Y is suitably truncated and satisfies Rosenthal-type inequalities (i.e., specific relationships between moments of order higher than 2 and the variance of partial sums of RVs; [32] (p. 279)): given independent and real centered RVs X_i, i = 1, 2, ..., n, for every positive integer n, if E(|X_i|^p) < ∞ for p > 1, where |•| denotes the absolute value of its argument •, then

E(|X_1 + X_2 + ... + X_n|^p) ≤ C_p [ Σ_{i=1}^{n} E(|X_i|^p) + ( Σ_{i=1}^{n} E(X_i^2) )^(p/2) ],

for some constant C_p depending only upon p.

Acknowledging that many variants of the adage "a reciprocal moment approximates the reciprocal of that moment" exist, Garcia and Palacios [33] enumerate an additional sufficient condition required for it to be true. More specifically, they address a limit of the form lim_{n→∞} E(Y_n) E(1/Y_n) = 1. This limit holds when the non-negative RV Y is expressible, at least asymptotically, as a standard normal RV. However, as Groves and Rothenberg [34] emphasize, the general relationship is given by

E(1/Y) ≥ 1/E(Y),   (2)

with the gap between the left-hand side (LHS) and right-hand side (RHS) reciprocal polynomials sometimes being very substantial, and the foregoing discussion mostly absorbed by the (near-)equality instance. In addition, this equivalence is adequate only when the transformed distribution exhibits skewness and excess kurtosis of roughly zero (see Appendix A).

The Manly Back-Transformation for the Negative Exponential Function e^(−βY)

Conspicuously missing from the entire variable transformation literature is any debate about the inverse Manly transformation and its attendant back-transformation; perhaps surprisingly, the same can be said regarding its positive coefficient version (i.e., e^(βY), β > 0; of the 140 empirical attribute variables constituting the database for this paper, six transformations were of this variety). Table 1 suggests that this oversight is problematic. For the inverse case of interest here, the back-transform arithmetic mean, ignoring its seemingly trivial imaginary part involving the Erfc function (the complementary error function defined by (2/√π) ∫_z^∞ e^(−t²) dt for argument z), is given by Equation (3) (see Appendix B for its derivation), where LN denotes the natural logarithm, and µ and σ are, respectively, the mean and the standard deviation of the ideal normal distribution approximated by an inverse Manly transformation. The individual conditional expectations are given by substituting each original transformed value, in turn, for µ in Equation (3). Table 2 tabulates computations for an illustrative application of Equation (3).
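Where the series form of Equation (3) is not at hand, a Monte Carlo evaluation offers an independent numerical check of the back-transformed mean. The sketch below assumes the inverse Manly transformation Y* = (1 − e^(−βY))/β, so that Y = −LN(1 − βY*)/β; this parameterization is adopted here purely for illustration:

import numpy as np

def manly_back_mean(mu, sigma, beta, n=1_000_000, seed=0):
    """Monte Carlo estimate of the back-transformed mean under the assumed
    inverse Manly transform; draws outside the transform's domain are dropped,
    loosely mirroring the discarding of the imaginary part in Equation (3)."""
    rng = np.random.default_rng(seed)
    y_star = rng.normal(mu, sigma, size=n)   # the fitted normal approximation
    valid = 1.0 - beta * y_star > 0.0        # back-transform defined only here
    return np.mean(-np.log1p(-beta * y_star[valid]) / beta)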
Following guidelines advocated in Griffith [13], the nearly identical raw and back-transformed arithmetic means imply the presence of little data analysis specification error attributable to employing a normal approximation transformation. Furthermore, for the most part, the reported extremes and their corresponding conditional back-transformed means [based upon the quantiles Blom [35] promotes (see Table 2)] imply that these Manly transformations also essentially preserve the ranges of the raw attribute values. As an aside, for a non-reciprocal Manly transformation, the first moment expected value given by Equation (3) simply has a sign change.

The Box-Cox Back-Transformation for the Inverse Power Function (Y + δ)^(−γ)

The inverse case of interest here, preoccupying applied statisticians and other researchers in their relevant literature writings, argues for some form of E(Y*) = 1/E(Y), where variable Y* denotes a Box-Cox inverse transformation. This back-transform arithmetic mean, ignoring the imaginary part in the calculation reported by Mathematica 12.3 (this outcome seems to be an artifact of the software's symbolic manipulations; e.g., [36]), is given by Equation (4) (see Appendix B for its derivation), where Γ[•] denotes the standard gamma function with argument •. This expression resembles Equation (3), chiefly because it includes the same type of infinite summations. Table 3 tabulates computations for an illustrative application of Equation (4).

Again, following guidelines advocated in Griffith [13], the nearly identical raw and back-transformed means imply the presence of little data analysis specification error attributable to employing a normal approximation. The Table 3 results based upon Equation (2) demonstrate the potential superiority of the proposed Box-Cox back-transformation arithmetic mean expression vis-à-vis contemporary conceptualizations. Evidence supporting Equation (4), beyond that summarized in Appendix B, merits more intensive future scrutiny and research, particularly with regard to the efficacy of ignoring its imaginary part.
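An analogous Monte Carlo check applies to Equation (4). The sketch below assumes the unscaled inverse power form Y* = (Y + δ)^(−γ) with γ > 0, so that Y = (Y*)^(−1/γ) − δ; dropping non-positive draws of Y* loosely mirrors the discarding of the imaginary part noted above:

import numpy as np

def boxcox_inverse_back_mean(mu, sigma, gamma, delta, n=1_000_000, seed=0):
    """Monte Carlo estimate of the back-transformed mean under the assumed
    inverse power transform Y* = (Y + delta)**(-gamma)."""
    rng = np.random.default_rng(seed)
    y_star = rng.normal(mu, sigma, size=n)     # the fitted normal approximation
    valid = y_star > 0.0                       # keep the real branch of the power
    return np.mean(y_star[valid] ** (-1.0 / gamma) - delta)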
Applications: More Specimen Empirical Illustrations

Preceding sections present empirical findings for seven of the 49 inverse transformations (see Appendix A) identified for 140 (= 2 × 2 × 35) attribute variables selected from the 2010 US census for either Dallas County or the DFW MSA. The Table 4 compilation uncovers a strong tendency for Manly and Box-Cox inverse transformations to be competitive in situations for which the exponent γ is relatively large in absolute value (i.e., |γ| > 2); for example, the percentage of retail employment, whose respective goodness-of-fit error sums of squares (ESSs) are 5.48 and 5.94 [with an accompanying total sum of squares (TSS) of 525.8], yields an exponent of −8.44, well below the lower limit of −2 in Tukey's [37] transformation ladder of reasonable powers (ranging from −2 to 2).

Table 4 furnishes numerical outcomes extremely supportive of this aforementioned contention. All back-transformed arithmetic means are nearly identical to their raw data counterparts, implying the presence of little data analysis specification error attributable to employing a normal approximation transformation. This type of conclusion almost always is the expectation when the mean percentage is roughly 50; in the suite of cases investigated here, percentages range from roughly 3% to 18%, which are substantially less than 50%. One reason these consequences may appear so good is that the worst raw data Shapiro-Wilk (S-W) statistic is 0.83, which is low but not excessively low; one raw data diagnostic statistic is 0.992, which is significantly less than one but reflects considerable symmetry (i.e., its companion skewness measure is 0.31, which improves to 0.01 with the Manly transformation) and a distributional form approaching a bell-shaped curve.

Figure 1 portrays the two extreme specimens, with regard to their S-W normality diagnostic statistics, appearing in Table 4. The transformed plots are inversely related to their affiliated raw data plots, by construction. Although both raw data diagnostic statistics are significantly less than one, these graphics disclose noticeably better alignment for the 0.83→0.99 increase, and questionably better alignment for the 0.992→0.997 increase, in S-W cases. Regardless, in both instances, Equation (3) furnishes an excellent back-transformation as judged by a comparison of the raw and back-transformed data arithmetic means.

Table 5 also furnishes extremely supportive numerical outcomes. Although not as similar as the Manly pairings, all Box-Cox back-transformed arithmetic means are nearly identical to their raw data counterparts, again implying the presence of little data analysis specification error attributable to employing a normal approximation transformation. In addition, the Table 5 compilation reveals a strong tendency for Box-Cox logarithmic and inverse transformations to be competitive in situations for which the exponent γ lies in the interval [−0.1, 0].
For example, the Dallas County associate degree percentage variable has goodness-of-fit ESSs of 0.9700 for the logarithmic, and 0.9622 for the Box-Cox negative power (γ ≈ −0.43), transformations (TSS = 523.8); however, the estimated γ is not sufficiently close to zero to justify replacing the latter with the former transformation.

Figure 2 portrays the two extreme specimens, with regard to their S-W normality diagnostic statistics, appearing in Table 5. As before, the transformed plots are inversely related to their affiliated raw data plots, by construction. Although both raw data diagnostic statistics are significantly less than one, these graphics disclose noticeably better alignment for the 0.44→0.997 increase, and modestly better alignment for the 0.97→0.99 increase, in S-W cases. Regardless, in both instances, Equation (4) furnishes an excellent back-transformation as judged by a comparison of the raw and back-transformed data arithmetic means.

[Table note: DC denotes Dallas County; only the DFW MSA occupied housing units attribute variable densities underwent this transformation (see Table 3); gray denotes alternative or no required attribute variate transformation; bold italic font denotes extreme alignment improvements; Y* denotes the transformed version of RV Y.]

[Figure 2 caption: Normal quantile (red lines denote 95% confidence intervals and trendlines) and histogram portrayals for two Box-Cox power transformation extreme cases appearing in Table 5.]

In summary, the back-transformations proposed in this paper perform extremely well across a wide range of arbitrarily selected variates. The Manly negative exponential back-transformation seems to accomplish its goal better than the Box-Cox negative power back-transformation. Nonetheless, both appear to be superior to the Equation (2) proposition frequently endorsed, studied, and presumably applied in the literature. The average absolute error for the 49 specimen variables is roughly 1%, with a maximum of slightly less than 7%. Figure 3 portrays features of these errors, which overwhelmingly ratify Equations (3) and (4); see Appendix Figure A1 for a more comprehensive visualization.

[Figure 3 caption: Specimen absolute error percentage visualizations: % error = |raw − back-transformed|/raw (gray solid circles denote Table 5 DFW%, open circles denote Table 5 DC%, and solid black circles denote Table 5 ...).]

Discussion

Normal curve theory no longer enjoys the statistical methodology dominance it held prior to the advent of GLM theory and practice. Yet, a perusal of introductory mathematical statistics textbooks divulges that teaching about variable transformations is customary. This is an excellent place in a curriculum to treat normal RV back-transformations. After all, as Lesch and Jeske [9] (p. 277) point out, "Although the modern computing environment [coupled with mathematical statistics advances] has obviously alleviated the necessity of [a normal] approximation, it is still both historically relevant and quite insightful from an instructional perspective." In keeping with this contention, the assessment presented in this paper urges future research pursuits addressing normal back-transformations for inverse RVs.
Evidence provided in it contends that the Manly transformation, coupled with its accompanying back-transformation, exhibits considerable promise, especially for large negative power exponent values; the Manly transformation appears to preserve the Tukey power exponents ladder and augment its two ends, replacing these exponents when they become too extreme, a notion consistent with both parsimony and the use of an ESS criterion to help decide upon a particular transformation (i.e., Manly or Box-Cox power). Given the preceding materials, the five ensuing themes of this section merit more thorough discussion to complete this paper.

The Inverse Back-Transformation Conceptualization

To date, reliable general inverse back-transformations continue to be a tool eluding applied statisticians, even after the emergence of a sizeable literature seeking these instruments. Conceivably, Equation (2) represents the prevailing best-case scenario; unfortunately, Table 3 documents that this option can supply poor results. Furthermore, Manly [10] formulated an additional transformation that has been, and is, all but ignored in practice. One appealing advantage of his construction is that it substitutes for more extreme Box-Cox power exponents whose data calculations generate massively large or minutely small numerical values. An important contribution here is the derivation of the back-transformation for Manly's invention.

GLM theory furnishes another crucial modern-day component to understanding data transformations and their back-transformations. Initially, the only option was to work with normal curve theory. Today, side-by-side analyses completed with it and the appropriate GLM technique allow a detailed examination of how well a transformation-based normal curve theory approach works. This type of insight can become indispensable in large or massive data settings. GLM estimation often requires an iteratively reweighted least squares routine, which essentially involves repetition of calculus-guided estimation, whereas a normal approximation might allow a linear regression substitution, dramatically reducing the daunting computational demands and burdens of solving a problem (a hedged code sketch of this contrast appears below).

Table 6 summarizes illustrative GLM estimation output for the variates appearing in Tables 2 and 3. Georeferenced data tend to be extraordinarily overdispersed. Accordingly, Table 6 tabulates calculations that utilized beta-binomial parametric mixture regression, and gamma-Poisson (i.e., negative binomial) parametric mixture regression rather than Poisson regression, to accommodate any excess variation. The reported GLM estimates further corroborate the validity of Equations (3) and (4).

Some Mathematics Underlying the Inverse Back-Transformations

Griffith [13] derives positive Box-Cox power exponent back-transformation formulae using fractional calculus (with a detailed appendix overview of this topic; e.g., [38]). These derivations encompass complicated, sophisticated sums having arguments written as powers of, and ratios containing, µ and σ combined with gamma functions. Not surprisingly, then, Equations (3) and (4) build upon similar complex arithmetic operations.
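To make the IRLS-versus-linear-regression contrast noted earlier concrete, the following hedged sketch (the simulated counts, covariate, and dispersion are illustrative, not the census variates of Table 6) fits a gamma-Poisson (negative binomial) mixture GLM alongside its cheap normal-approximation substitute:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 1324)
X = sm.add_constant(x)
mu = np.exp(1.0 + 2.0 * x)                  # true mean structure
y = rng.negative_binomial(5, 5 / (5 + mu))  # overdispersed counts, mean = mu

# GLM route: iteratively reweighted least squares under a gamma-Poisson
# (negative binomial) mixture family
glm_fit = sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()

# Normal-approximation route: a single ordinary least squares pass on a
# log-transformed response (the cheap linear regression substitution)
ols_fit = sm.OLS(np.log(y + 1.0), X).fit()

print(glm_fit.params, ols_fit.params)
```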
The Kummer confluent hypergeometric function, a degenerate mathematical construct introduced in the early 1800s [39], has two of its three regular singular points merge into an irregular singularity (hence, the term confluent in its description), and is the regular solution of the standard Kummer differential equation

z d²w/dz² + (b − z) dw/dz − a w = 0,

namely ₁F₁(a; b; z). For Equation (3), after taking the first partial derivative of the relevant numerator with respect to a and then setting a to 0, the resulting expression carries the imaginary term Erfc[µ/(σ√2)]πi, whose contribution to Equation (3) appears to be rather trivial (e.g., Table 7; the magnitude of the complex number essentially is its real part), and thus has been discarded here. Meanwhile, Equation (4) embraces two specific Kummer confluent hypergeometric functions, the first with a = 1/(2γ) and b = 1/2, and the second with a = (1 + γ)/(2γ) and b = 3/2, each pair of which substitutes into the composite expression underlying Equation (4). Together, these mathematical functions are the source of the imaginary part for Equation (4), which accordingly is twofold: (−σ)^(1/γ) and −(−σ)^(1/γ−1). These two terms are not totally ignorable, jointly or separately, although their final composite imaginary part seems to be. This particular conjecture warrants future scrutiny and research.

Wolfram Mathematica 12.3, for example, implements the Kummer confluent hypergeometric function for both symbolic and numerical manipulations (see https://reference.wolfram.com/language/ref/Hypergeometric1F1.html for its operationalization in Mathematica 12.3 (accessed on 6 July 2022)), supporting arithmetical evaluation to arbitrary numerical precision. Furthermore, this function's executable capabilities include automatic cycling through lists of values, such as those comprising a transformed dataset in need of back-transforming. Its principal shortcoming is that it can encounter under- and over-flow calculation warnings and failures, as the next section shows.

The Specimen Empirical Example

A principal objective of the specimen data examined in this paper is to exemplify the relatively large number of times applied statisticians can encounter the necessity for adopting inverse transformations during normal curve theory exercises with their own data. The literature seems to lack any narratives about Manly back-transformations in general, let alone explanations directing their use for inverse (i.e., negative exponential) transformation cases. This paper not only fills that knowledge gap, but it also furnishes more definitive and rigorous Box-Cox inverse back-transformations. The benchmark here is a comparison of raw data and back-transformed arithmetic means (see [13]). However, Fisher's [41] probability integral transform together with Angus's [42] quantile function theorems enable one of its extensions to an entire dataset, and may be stated as follows: for data values constituting any attribute variable transformable to a formal RV (e.g., the normal), this transformation is exact if the underlying distribution is the true one, and approximate in large samples if the distribution was fitted to these data. This theory is the foundation sustaining the extreme back-transformed values reported in Table 2, which build upon Blom's [35] uniform-based systematic sample spanning a probability density function support.
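A minimal sketch of such a Blom-based systematic sample, here mapped through a normal quantile function and then a Manly back-transformation; n, µ, σ, and β are illustrative placeholders rather than the paper's fitted values:

```python
import numpy as np
from scipy.stats import norm

n, mu, sigma, beta = 529, 50.0, 5.0, 0.5    # illustrative parameters
ranks = np.arange(1, n + 1)
p = (ranks - 3 / 8) / (n + 1 / 4)           # Blom CDF percentages
x = norm.ppf(p, loc=mu, scale=sigma)        # systematic normal sample
y = -np.log(x) / beta                       # Manly back-transformation
print(y.min(), y.mean(), y.max())           # extremes and mean, as in Table 2
```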
Table 8 continues inspections initiated with Tables 2 and 3; the left-hand amount in each column is the observed quantity, whereas the right-hand stack is the corresponding analytical algebraic Equation (3) or (4) result. Anomalies (see the Table 8 notes) prompted verification by simulation. Of note is that Box-Cox transformations creating small means and variances may suffer from numerical distortions during their back-transformations, requiring this type of remedial intervention.

The protocol for this paper was to draw a systematic sample of values based upon the Blom [35] calculated CDF percentages, namely (r_i − 3/8)/(n + 1/4). This strategy failed for occupied housing units and Dallas County 20-29 years of age densities, because they involve extreme cumulative percentages that are excessive outliers in the normal distribution tails. Its replacement strategy was to draw 10,000 random samples of size n (= 529 or 1324) from a posited ideal normal probability distribution, rejecting negative values (<0.38% of the selections for one, and none for another, Dallas County attribute variable; <0.04% for the DFW MSA variate), sort them in ascending order, and then compute a back-transformation using Mathematica 12.3 for each of the n summary averages (a hedged sketch of this replacement protocol appears at the end of this section). This procedural switch causes differences between certain Tables 3 and 8 entries. One outcome is a modest number of negative values (e.g., the smallest entries) and non-monotonicity in the very largest entries (i.e., misrepresentations attributable to underflow calculations), miscalculations not certified by the simulation exercises. In addition, because these entries are conditional means, the trimming (i.e., similar to data Winsorizing) of such inadmissible values that this complication motivates is in keeping with back-transformed values shrinking toward their mean.

Table 8 highlights possible back-transformation confusion between the mean and the median, with reference to a data analysis specification error appraisal criterion, because the ideal transformed RVs are flawlessly Gaussian, and hence these two quantities are the same. Figure 3a portrays a near-perfect matching that this table convincingly contradicts, both with analytical and with replication simulation displays. Rather, it endorses the Equation (3) Manly back-transformation, while raising serious questions about any general improvements Equation (4) might offer Box-Cox back-transformations vis-à-vis the RHS of Equation (2); this deficiency may be an artifact of simply ignoring the imaginary part of the complex number solutions generated by the Kummer confluent hypergeometric function. In other words, the Box-Cox inverse back-transformation comparisons here signify a potential for its use to introduce moderate-to-severe specification error into a data analysis. In general, Table 8 standard error tabulations are consistent with shrinkage conjectures, whereas, more or less, skewness and kurtosis tabulations are consistent with smoothing expectations. In a nutshell, Table 8 results imply a need for considerable comparative future research.

Table 3 results based upon Equation (2) demonstrate the potential superiority of the proposed Box-Cox back-transformation arithmetic mean expression vis-à-vis contemporary conceptualizations. Evidence supporting Equation (4), beyond that summarized in Appendix B, merits more intensive future scrutiny and research, particularly with regard to the efficacy of ignoring its imaginary part.
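A hedged sketch of the replacement simulation protocol described above; the top-up step for rejected negative draws is an assumption about how the sample size n is maintained, and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
n, reps, mu, sigma, beta = 529, 10_000, 40.0, 10.0, 0.5
order_sums = np.zeros(n)
for _ in range(reps):
    s = rng.normal(mu, sigma, n)
    s = s[s > 0]                             # reject negative draws
    while s.size < n:                        # top up to size n (an assumption)
        extra = rng.normal(mu, sigma, n - s.size)
        s = np.concatenate([s, extra[extra > 0]])
    order_sums += np.sort(s)
order_means = order_sums / reps              # n per-rank summary averages
back = -np.log(order_means) / beta           # Manly back-transformation per rank
print(back.mean())
```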
Alternative Transformations

The Box-Cox power and Manly exponential data transformations are not unique; Yeo-Johnson [12] transformations, for example, do not complete the set of possibilities, either. History reveals that alternatives exist especially for proportions and percentages, two of the most popular being the logit and the arcsine, this latter being the target of some derision (e.g., [43]).

The logit transform is given by the natural logarithm LN[p/(1 − p)], where 0 < p < 1 is an empirical probability, equivalent to a percentage (when multiplied by 100). It maps probability values in the interval (0, 1) to real numbers in the range (−∞, +∞), paralleling the real number support for the normal probability density function. One constraining weakness of this conceptualization is that p cannot equal 0 or 1. Therefore, its slightly more general form may be written as LN[(p + ∆)/(1 − p + 2∆)], ∆ > 0, which allows 0 ≤ p ≤ 1; it also may be written as LN{k(p + ∆)/[k(1 − p + 2∆)]}, where k = 100 is usual (i.e., the values become percentages), and k = 1 in the preceding empirical probabilities example. Its back-transformation is 1/(1 + e^(−x)). Meanwhile, the inverse for this function is LN[(1 − p)/p], with a back-transformation of e^(−x)/(1 + e^(−x)). In other words, the notion of an inverse transformation is inconsequential in this context, because estimation is either for p or for (1 − p). Furthermore, it directly relates to binomial regression (see Table 6). Table 9 documents that this variable transformation is not uniformly better than those studied in this paper (e.g., its S-W falls between the raw and the Manly transformed outcomes).
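A minimal sketch of the logit transform and its back-transformation as defined above; the ∆ default of zero makes the simple inverse exact, and the specimen proportions are illustrative:

```python
import numpy as np

def logit(p, delta=0.0):
    """LN[(p + delta)/(1 - p + 2*delta)]; delta > 0 admits p = 0 or 1."""
    return np.log((p + delta) / (1.0 - p + 2.0 * delta))

def back_logit(x):
    """Back-transformation 1/(1 + e^(-x)); exact when delta = 0."""
    return 1.0 / (1.0 + np.exp(-x))

p = np.array([0.03, 0.10, 0.18])      # illustrative percentage proportions
print(back_logit(logit(p)))           # recovers p when delta = 0
```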
In addition, evidence conveyed in Table 6 indicates that the logit may well be inferior to the comparable beta-binomial operationalization reflected upon previously, or to Equation (3) output.

Alternative RV Specifications

Not only do alternative transformations exist, but alternative RV specifications also exist. Perhaps the logarithm is the one deserving the most consideration and contemplation when it competes with an inverse Box-Cox transformation with a power exponent within the interval (0, −0.10); Vélez et al. [44] establish a more precise case-specific lower bound via confidence intervals (CIs) for λ, accompanied by the standard criterion based upon whether or not zero falls within a CI. Its back-transformation is well known to be e^(µ+σ²/2); fortunately, analytical formulae exist for all of its entries in Table 9. The other competition previously mentioned is between the Manly negative exponential and the Box-Cox power exponent of −γ < −2 transformations; the Box-Cox option in this latter case automatically should revert to its Manly competitor on the basis of numerical (for example, underflow) difficulties alone.

Table 9 caption: Selected specimen attribute summary statistics for the logit back-transformation.

Table 10 juxtaposes the logarithmic back-transformation with Equation (4) results when a negative power exponent is close to 0; Griffith [13] accentuates this point for its mirror positive γ interval (0, 0.10). Both back-transformations furnish competitive and reasonably accurate mean, median, and variance estimates. In contrast, because of smoothing effects induced by a transformation and its subsequent back-transformation, skewness and kurtosis frequently undergo the kinds of alterations that materialized in Table 10. One valuable insight and takeaway from this extended discussion is that parsimony is a useful concurrent criterion when selecting a data transformation, a contention alluded to by the Tukey ladder of powers. The newly stated analytical back-transformation solutions provided by Equations (3) and (4) forge this as well as other new comprehensions about variable transformations.

Final Remarks

In conclusion, a cadre of statistical methodologists has been and is obsessed with trying to compel inverse/reciprocal/negative back-transformations to adhere to the functional form E(1/X) ≈ 1/E(X). However, disappointing sequels to their efforts often follow the application of this specific answer prototype, to which certain Tables 3 and 6 entries attest. Nonetheless, determining such a solution is very important in general because many empirical attribute variables appear to require a transformation containing a negative exponent in order to improve, for example, their frequency distribution alignment with a bell-shaped curve, or to stabilize their variance. One of the most important contributions of this paper is the pair of Equations (3) and (4), which furnish a solution defying the quest to exploit the relationship E(1/X) ≈ 1/E(X). Its accompanying critical implication is that the Kummer confluent hypergeometric function of the first kind supplies the necessary formula to excogitate an appropriate, accurate reciprocal function back-transformation solution. In keeping with Freedman and Modarres [45], among others, Equation (3) needs a collection of algebraic formulae for the median, the variance, skewness, and kurtosis, replicating what presently is available for the logarithmic back-transformation, for example, to complement it.
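For reference, the logarithmic back-transformation benchmark just mentioned does possess such closed-form moment formulae (these are the standard lognormal results; µ and σ here are illustrative normal-scale parameters):

```python
import math

mu, sigma = 1.2, 0.4
mean   = math.exp(mu + sigma**2 / 2)   # back-transformed mean e^(mu + sigma^2/2)
median = math.exp(mu)
var    = (math.exp(sigma**2) - 1.0) * math.exp(2 * mu + sigma**2)
skew   = (math.exp(sigma**2) + 2.0) * math.sqrt(math.exp(sigma**2) - 1.0)
print(mean, median, var, skew)
```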
In addition, Equation (3) needs a numerically sound implementation that avoids the normal tail computational adulteration issues currently encountered with Mathematica 12.3, and most likely other symbolic algebra software packages (e.g., Maplesoft; https://www.maplesoft.com/products/maple/features/symbolicnumericmath.aspx, accessed on 6 July 2022). One implication emerging here is that perseverance with the applicable algebraic manipulations should be productive; after all, this is the approach that rendered Equations (3) and (4).

Equation (4), a second novel contribution of this paper, needs considerable refinement that effectively and definitively handles its imaginary part. The real-world attribute variables explored in this paper repeatedly exhibited monotonically decreasing covarying magnitudes of their real and imaginary parts. The Table 8 notes communicate that some of these amounts are not necessarily trivial in size. This pernicious Equation (4) property needs to be resolved. Nevertheless, the real number part of its output (à la Tables 5, 6, 9 and 10) tends to match both designated observed data statistics and measures generated by competing back-transformations. The attendant chief implication here derives from the simulation experiments summarized in this paper, namely that both the imaginary part of the numbers and the corrupted tail calculations by Mathematica 12.3 appear to be vestiges of symbolic manipulation rules (e.g., [36]) combined with machine and software precision and other computational inadequacies. Consequently, a refinement of Equation (4) should be void of complex numbers. This situation is reminiscent of, and encouraged by, Cardan's formulas versus trigonometric solutions for determining the three roots of cubic equations.

Finally, the ultimate advancement spawned by this paper is completion of the back-transformation conceptualization devised by Griffith [13], extending his positive power exponents composition to embrace negative power exponents. The primary implication stemming from this particular provision is that a unified back-transformation theory is draftable now.

Funding: This research received no external funding.

Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.

Data Availability Statement: The empirical data were accessed and downloaded via https://www.census.gov/geographies/mapping-files/time-series/geo/carto-boundary-file.html (accessed on 6 July 2022). The simulated data were generated with the SAS 9.4 normal random number generator.

Conflicts of Interest: The author declares no conflict of interest.

Appendix A. Specimen Attribute RV Pre-Assessments

As already mentioned in the narrative, the Box-Cox power and Manly exponential data transformations attempt to align an attribute RV with a normal distribution, and in doing so stabilize the RV's variance to a normal distribution's constant dispersion. In their inverse forms, these transformations tend to be more applicable to RVs whose observations exhibit right-skewness, tending to concentrate relatively close to zero ([3] (p. 29)) within their non-negative support. A noteworthy difference between the inverse polynomial and negative exponential functions is that the former suggests a more complex distribution, whereas the latter indicates a simple distribution. Therefore, when exponents are outside of the [−2, 2] Tukey power ladder interval, parsimony argues for swapping these descriptive equations; this is the same type of argument backing Table 9.
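A hedged sketch of the scenario just described: a positive, right-skewed variate concentrated near zero, constructed here so that the Manly negative exponential transform recovers a normal scale by design, with S-W alignment reported before and after; all parameters are illustrative:

```python
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(7)
x_true = rng.normal(0.5, 0.08, 529)   # hidden normal scale, support > 0
beta = 1.0
y = -np.log(x_true) / beta            # raw variate: positive, right-skewed
x = np.exp(-beta * y)                 # Manly negative exponential transform
w_raw, _ = shapiro(y)
w_man, _ = shapiro(x)
print(w_raw, w_man)                   # S-W alignment improves after transform
```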
The equation-swapping replacement just described occurs three times in Table A1: Dallas County 40-49 years of age density (γ = −4.98), and DFW MSA professional (γ = −2.64) and wholesale (γ = −4.51) employment percentages.

The literature cited in this paper, as well as other readily available publications, furnish a preponderance of evidence attesting to these two reciprocal transformations being very efficient and effective when undertaking their data modification task: empirical frequency distribution makeovers that deform them into mimicking a bell-shaped curve. In this paper, the S-W statistic provides an index of success for such metamorphoses. Hoeffding [46] posits a theorem concerning moment matching and the convergence in probability of density functions. For normal approximations, the first and second moments are of limited importance because they minimally impact density function shape; kurtosis governs the relative heaviness of tails incidental with respect to variance size. A positive support often chaperons reciprocal transformations; certainly, this support cannot contain zero, whose inverse is undefined. In addition, variance must be finite. Meanwhile, Romano and Siegel [47] (pp. 48-49), for example, note counter-examples to the claim that two distributions with the same moments are identical. The notion of a normal approximation already concedes their point. Nevertheless, if one distribution imitates another, then some of their moments should harmonize. For a bell-shaped curve, the intuitive synchronization expectation is for those moments affiliated with skewness and kurtosis: ideal normal and after-transformation histograms should reflect symmetry and peakedness similarities.

Tables A1 and A2 tabulate these summary statistics for the attribute RVs discussed in this paper. Both theoretical values of interest are zero: the balance of symmetry begets zero, and excess kurtosis equals kurtosis minus three, the theoretical value for a normal RV. Each of these two tables presents three simultaneous statistical examinations, requiring a multiple testing correction; the Bonferroni adjustment is for a two-tailed 5% significance level, creating the following confidence intervals: skewness for Dallas County of ±0.254, and for the DFW MSA of ±0.161; and kurtosis for Dallas County of ±0.509, and for the DFW MSA of ±0.322. These tables reveal that the transformations virtually always adequately induce skewness, but perhaps have a slightly lower chance of also inducing kurtosis. Furthermore, even with near-perfect fits to normal quantile values, as measured by the MSE, they are even less likely to generate a non-significant S-W statistic. As an aside, the relatively large sample sizes of 529 and 1324 complicate this inferential appraisal; as Tables 4 and 5 coupled with Figures 1 and 2 demonstrate, almost all alignment gains through the use of transformations are substantial, even when transformed data S-W values remain statistically significant; this situation reflects the contemporary need to develop substantive difference criteria to replace statistical inference criteria. Nevertheless, these larger sample sizes signify a situation in which modest departures from normality tend to be far less problematic. Accordingly, invoking the six-sigma rule here increases the confidence intervals to skewness for Dallas County of ±0.516, and for the DFW MSA of ±0.326; and kurtosis for Dallas County of ±1.236, and for the DFW MSA of ±0.784.
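The Bonferroni-adjusted half-widths quoted above can be closely reproduced from the asymptotic standard errors √(6/n) and √(24/n) for sample skewness and excess kurtosis; this computational route is an assumption, and small rounding or SE-convention differences are possible:

```python
from math import sqrt
from scipy.stats import norm

z = norm.ppf(1 - 0.05 / (2 * 3))      # two-tailed 5%, Bonferroni for 3 tests
for name, n in [("Dallas County", 529), ("DFW MSA", 1324)]:
    print(name, round(z * sqrt(6 / n), 3), round(z * sqrt(24 / n), 3))
# approx. 0.255/0.510 and 0.161/0.322, versus the quoted 0.254/0.509 and
# 0.161/0.322 (exact small-sample SE formulae close the remaining gap)
```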
Unfortunately, the reporting style of SAS software prevents a more precise scrutiny of the <0.0001 S-W p-values. Additionally, because the six-sigma rule classifies only 3.4 per million random samples as extreme outcomes, the natural presence of sampling error does not convincingly account for the few significant kurtosis cases appearing in Table A1; these particular few variable transformations may well be prone to serious specification error, a theme meriting future research.

On the one hand, because the assumption of normality rests upon symmetry, and a prominent characteristic of many non-normal RV probability density functions is asymmetry, skewness could be viewed as the more important of the two moments in a normality diagnosis. In keeping with this viewpoint, DeCarlo [48] suggests that skewness has a higher priority in equality of means tests. On the other hand, Khan and Rayner [49] (p. 204) state: "Both the ANOVA and Kruskal-Wallis tests are vastly more affected by the kurtosis of the error distribution rather than by its skewness." This incongruity arises because correlation exists between skewness and kurtosis moments; their effects are not completely separable; for example, increasing skewness tends to demand increasing kurtosis in a frequency distribution. Ryu [50] highlights one consequence of this covariation: selected empirical distribution quantile plots disclose a thicker upper tail attributable to skewness as well as a longer upper tail attributable to kurtosis. With regard to data transformations, skewness usually is easier than kurtosis to manipulate: simultaneously and systematically stretching/shrinking measurement scale segments differentially, to better center any clustering tendency of values (alluding to the Tukey-Mosteller bulge), can entail less effort than trying to increase/decrease this clustering propensity. Therefore, until some consensus decision-making rationale crystalizes for weighting one of these moments more than the other, data transformation evaluations should treat them equally, which essentially is the tactic taken in this paper.

Finally, especially Table A2 tabulates findings that would, for an overwhelming number of its entries, remain statistically non-significant even if the significance level criterion was more restrictive than that for six-sigma (e.g., the preceding 5% level three-test Bonferroni adjustment). In conclusion, the illustrative reciprocal transformations staged in this paper successfully align their corresponding empirical frequency distributions with a bell-shaped normal curve, when judged by a normal RV lower moments matching yardstick.

Appendix B. Deducing Equations (3) and (4)

In today's academic world, the nature of mathematical proofs materializes in a multitude of appearances beyond their earlier formalisms, in part coinciding with the unfolding of experimental mathematics. Gone are the days of solely deductive/inductive, counter-example, and complete enumeration demonstrations. Now acceptable proofs also are by simulation [51], with some vigilance, as well as by, again with some caution, computer-assisted algebraic/symbolic manipulations (e.g., [36]). The determination and justification of Equations (3) and (4) are ascribable to both of these avant-garde tools: Mathematica 12.3 aided in the postulating of these two mathematical formulae, and simulation experimentation helps validate the presumable superfluousness of the discarded imaginary parts reported in Mathematica symbolic output.
Accordingly, this backdrop insinuates that these two expressions are conjectures rather than theorems, and this appendix outlines the process and rationale used to posit them. Future research needs to convert them into theorems with proofs.

The formulation of Equation (3) begins with the following back-transformation for the reciprocal Manly exponential transformation:

x = e^(−βy) ⇒ y = −LN(x)/β,

where e denotes Euler's number (i.e., 2.71828...), and LN denotes the natural logarithm. The original data transformation e^(−βy) creates X ~ N(µ, σ²), presuming (µ − 6σ) >> 0 (whose gap size is relative to the magnitude of the mean and standard deviation), where N denotes a normal RV. The companion Mathematica problem is the expectation of −LN(x)/β over this normal density. The computational outcome generated by executing this command carries the imaginary part iπErfc[µ/(√2σ)], which appears to be trivial (e.g., see Table 7); here Hypergeometric1F1 is the Kummer confluent hypergeometric function of the first kind, the superscript (1, 0, 0) denotes the partial derivative with respect to only the first argument of hypergeometric function ₁F₁, say a in its 3-tuple [a, b, z] argument, and EulerGamma ≈ 0.577216. Setting iπErfc[µ/(√2σ)] to zero, and replacing the Mathematica notation Log with the natural logarithmic notation LN, yields

(1/(2β)) {0.577216 + LN(2/σ²) + ∂Hypergeometric1F1[a, b, z]/∂a}, evaluated at a = 0, b = 1/2, and z = −µ²/(2σ²).

Simulation experiments (e.g., Table 2) verify this reduced result. Nonetheless, future research needs to document definitively that the imaginary number part source term is irrelevant in general. This last expression may be rewritten with its latent Pochhammer symbols expanded into summation and product terms. Theory of equations states that, for the kth-order polynomial generated by the product of (a + j) over j = 0, ..., k − 1, the coefficient of its a¹ term (the only term surviving the first partial differentiation followed by the substitution a = 0 in the resulting derivative) is (k − 1)!. Thus, the new reduced expression, with its leading factor 1/(2β), becomes Equation (3). For this paper, specimen empirical data for Dallas County and the DFW MSA submitted to Mathematica 12.3 supply numerical illustrations employing this expression.

Equation (4) has a similar mathematical pedigree, and hence its derivation parallels the preceding protocol sketched for Equation (3). This new proposition begins with the back-transformation for the reciprocal Box-Cox polynomial transformation, where, as mentioned in the text of this paper, δ is a translation/shift parameter. This data transformation also creates X ~ N(µ, σ²), presuming (µ − 6σ) >> 0. The computational outcome generated by executing the companion symbolic computer code is

−δ + (1/√π) (−1)^(−1/γ) 2^(−1−1/(2γ)) σ^(−2/γ) (((−σ) …,

where the imaginary part spawned by (−1)^(−1/γ) appears to be trivial, enabling its removal. Next, factoring out σ and then combining it with σ^(−2/γ) renders Equation (4), once more with the appropriate notational replacements (e.g., Γ for Gamma, and the embedded Pochhammer-symbol-based summations and products). Interestingly, although the twice-appearing term (−1)^(1/γ) causes the solution to be a complex number, trial-and-error experiments reveal that it cannot be deleted from this expression without nontrivial real number part consequences. This undesirable complication warrants future research.
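As a numerical sanity check of the Equation (3) reduction as reconstructed above (β, µ, and σ are illustrative, and the LN(2/σ²) term reflects this reconstruction rather than a verbatim quotation of the source), the analytical expression can be compared against a Monte Carlo mean:

```python
import numpy as np
import mpmath as mp

mp.mp.dps = 30                       # extra precision for the 1F1 derivative
beta, mu, sigma = 0.5, 10.0, 1.0     # illustrative values with mu - 6*sigma >> 0

# Analytical reduction: (1/(2*beta)) * (EulerGamma + LN(2/sigma^2)
#   + d/da 1F1(a, 1/2, -mu^2/(2*sigma^2)) evaluated at a = 0)
z = -mu**2 / (2 * sigma**2)
d1f1 = mp.diff(lambda a: mp.hyp1f1(a, mp.mpf(1) / 2, z), 0)
analytical = (mp.euler + mp.log(2 / sigma**2) + d1f1) / (2 * beta)

# Monte Carlo counterpart: back-transform y = -LN(x)/beta over normal draws
rng = np.random.default_rng(2022)
x = rng.normal(mu, sigma, 1_000_000)
simulated = np.mean(-np.log(x) / beta)
print(float(analytical), simulated)  # the two means should agree closely
```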
In addition, equivalent to the Equation (3) situation for this paper, specimen empirical data for Dallas County and the DFW MSA submitted to Mathematica 12.3 supply confirmatory numerical illustrations employing this final expression, ignoring its imaginary part.

To conclude, these two sets of reasoning deliver new normal curve theory transformation conceptualizations pertaining to inverse data transformations. Table A3 summarizes the specimen dataset implementation details utilized for exemplification purposes in this paper; Figure A1 visualizes part of their quality evaluation. No back-transformed mean results reflect error in excess of 10%: Figure A1a portrays a near-perfect linear alignment of these quantities with their corresponding source observed means. Mathematica 12.3 is able to compute the analytical expected value of X² for Equation (4), allowing calculation of its analytical back-transformed standard error. This second moment quantity encompasses noticeably more error (e.g., Figure A1c) than its first moment counterpart, although Figure A1b indicates that even the most extreme case of this error still falls within its applicable linear regression prediction interval.

Table note: std denotes standard deviation; bold italic font entries denote a failure for (µ − 6σ) >> 0 to hold; underlined bold italic font denotes a back-transformation deviation from its partner raw data statistic of at least 10%.

Figure A1. Quality assessment of …
TiO2-Seeded Hydrothermal Growth of Spherical BaTiO3 Nanocrystals for Capacitor Energy-Storage Application

Simple but robust growth of spherical BaTiO3 nanoparticles with uniform nanoscale sizes is of great significance for the miniaturization of BaTiO3-based electron devices. This paper reports a TiO2-seeded hydrothermal process to synthesize spherical BaTiO3 nanoparticles with a size range of 90-100 nm using TiO2 (Degussa) and Ba(NO3)2 as the starting materials under an alkaline (NaOH) condition. Under the optimum conditions ([NaOH] = 2.0 mol L−1, RBa/Ti = 2.0, T = 210 °C and t = 8 h), the spherical BaTiO3 nanoparticles obtained exhibit a narrow size range of 91 ± 14 nm, and the corresponding BaTiO3/polymer/Al film is of a high dielectric constant of 59, a high breakdown strength of 102 kV mm−1, and a low dielectric loss of 0.008. The TiO2-seeded hydrothermal growth has been proved to be an efficient process to synthesize spherical BaTiO3 nanoparticles for potential capacitor energy-storage applications.

The miniaturization of electronic components and nanotechnology makes it necessary to synthesize nanometer-scale BaTiO3 materials, including nanowires [20] and nanoparticles [21], with scientific appeal and technical urgency. Device miniaturization and a high dielectric constant can be achieved by controlling their microstructures and compositions, which are strongly dependent on the phase, uniformity, surface area, and size of the BaTiO3 materials [22][23][24]. For the applications in MLCC, BaTiO3 powders are usually used as dielectric fillers and blended with a polymer to fabricate a composite film with a compact and flexible surface. In order to manufacture a reliable BaTiO3-based MLCC, high-quality BaTiO3 powders with high purity, high crystallinity, high dispersibility, and a uniform small size are the precondition. BaTiO3 fillers with a narrow particle-size distribution and suitable phases favor obtaining a compact composite film with a lower content of pores, and a dense and homogeneous BaTiO3 phase in the polymer matrix can lead to higher dielectric properties of the composite films [25]. R. K. Goyal et al. found that the dielectric constants of the composite films filled with tetragonal BaTiO3 powders are higher than those of the films with cubic BaTiO3 fillers, whereas the effect of crystal phase on the dielectric losses presents an opposite trend: the composite filled with a cubic BaTiO3 filler shows a lower dielectric loss than the tetragonal BaTiO3 composite film [26]. Therefore, a high-quality BaTiO3 filler is important for high-performance composite dielectric films, and the synthesis of BaTiO3 nanocrystals via various processes has recently become a hot research topic.

There have been a number of methods developed to prepare high-quality BaTiO3 powders [27]. As mentioned above, the conventional route used to prepare BaTiO3 powders is via a solid-state reaction between BaCO3 and TiO2 at a high temperature of 850-1400 °C [28]. This solid-state method is easy to operate and allows for mass production, but there are a number of serious drawbacks in the control of particle size (morphology) and compositional purity. Ball-milling is usually used to mix BaCO3 and TiO2; it is not only time-consuming and labor-intensive but also prone to introducing impurities [29].
As an alternative to the solid-state process, various "wet chemical" methods, including the sol-gel process [30,31], hydrothermal method [32], micro-emulsions [33], and oxalate process [34], have been developed to synthesize BaTiO3 powders. These methods can produce high-purity, uniform, ultrafine BaTiO3 powders. Because of their operational complexity, multiple stages, and relatively high cost, most of these methods are mainly used at the laboratory level. It should be noted that the hydrothermal process is a promising method to synthesize BaTiO3 powders with controllable morphology and chemical uniformity. The hydrothermal method can use various processing conditions in the synthesis of BaTiO3 powders, including the sources of barium and titanium in an aqueous medium under a crystalline or amorphous state, the hydrothermal temperature and time, and morphology-controlling agents. Because of the diversity of the factors that affect the synthesis of BaTiO3 nanoparticles, hydrothermal methods are full of opportunities to improve their quality in phase composition, dimensions, and morphology. Li et al. [35] reported the synthesis of tetragonal BaTiO3 nanocrystals using TiCl4 (or TiO2) as the source of titanium, BaCl2 as the source of barium, and poly(vinylpyrrolidone) (PVP) as the surfactant. Grendal et al. [36] used two titanium sources, amorphous titanium dioxide and a Ti-citrate complex solution, to synthesize BaTiO3 nanoparticles with a size range of 10-15 nm at different hydrothermal temperatures and times. Zhao et al. [37] used cetyltrimethylammonium bromide (CTAB), Ba(OH)2·8H2O, and tetrabutyl titanate as the precursors to synthesize BaTiO3 nanocrystals via a self-assembly process. Ozen et al. [38] reported the hydrothermal synthesis of tetragonal BaTiO3 nanocrystals from a single-source amorphous barium titanate precursor in a high-concentration sodium hydroxide solution via a homogeneous dissolution-precipitation reaction. From the above cases, one can see that different hydrothermal parameters and growth mechanisms can effectively adjust the formation of BaTiO3 nanocrystals. In addition, a single cubic phase of BaTiO3 can be formed at a low alkalinity, and a tetragonal phase of BaTiO3 is easily formed under a strong alkaline condition [39].

With the motivation of preparing cubic/tetragonal BaTiO3 nanocrystals with a spherical morphology, this paper develops a TiO2-seeded hydrothermal process to grow BaTiO3 nanocrystals using Ba(NO3)2 and TiO2 (P25) as the barium and titanium sources, respectively. This synthesis is conducted in a strongly alkaline NaOH aqueous solution (pH = 13.6), and the factors that affect the formation of BaTiO3 nanocrystals are systematically investigated: the major influencing factors involve the molar Ba/Ti ratio, hydrothermal temperature, and hydrothermal time, whose effects on the morphology, particle size, and phase composition of the BaTiO3 nanoparticles are examined. The possible growth mechanisms are discussed. The BaTiO3/polymer/Al films containing the BaTiO3 nanoparticles obtained under the optimum conditions are of a high dielectric constant of 59, a high breakdown strength of 102 kV mm−1 and a low dielectric loss of 0.008. This work achieves the aim of seeking optimum methods to synthesize spherical BaTiO3 nanoparticles with potential applications in capacitor energy-storage and other electric devices.
Growth of Spherical BaTiO3 Nanoparticles

BaTiO3 samples were synthesized via a hydrothermal process using TiO2 (P25) nanoparticles as the Ti source and seeds. The synthetic process of the BaTiO3 nanocrystals is shown in Figure 1. Teflon-lined autoclaves with a volume of 100 mL were used as the reaction vessel. Typically, 6.0 g of NaOH and 1.5 g of TiO2 nanoparticles were first added into 75 mL of distilled water under magnetic stirring; then a given amount of Ba(NO3)2 was added to the above suspension containing TiO2 nanoparticles and NaOH under magnetic stirring. In the final suspensions, the molar ratios of Ba(NO3)2 to TiO2 (RBa/Ti) were kept at 1.6-2.0, and the molar concentration of NaOH was about 2 mol L−1. The pH values of the as-obtained suspensions before hydrothermal treatment were about 13.6. The prepared suspensions were then transferred into the Teflon-lined steel autoclaves. After careful sealing, the autoclaves were heated in an oven at 150-210 °C for 2-16 h. After the hydrothermal reaction, the autoclaves were cooled naturally, and the solid samples were collected using a centrifugal machine (5000 rpm, 5 min), followed by washing with water more than three times and drying at 120 °C for 24 h. The as-obtained BaTiO3 solids were ground into powders using an agate mortar. These white powders, i.e., BaTiO3 nanocrystals, were collected and used for characterization. The detailed processing parameters for the synthesis of BaTiO3 nanocrystals are listed in Table 1. It was assumed that the TiO2 added was completely converted into BaTiO3, from which the theoretical mass could be calculated. The yield of BaTiO3 was the ratio of the actual mass of the BaTiO3 sample to the corresponding theoretical mass.

Preparation of BaTiO3/Polymer/Al (BPA) Films

To determine the possibility of the as-obtained BaTiO3 nanocrystals forming a uniform film for capacitor energy-storage application, we chose sample S8 (in Table 1) as an example to prepare BaTiO3/polymer/Al films (BPA films, Figure 2) using a method similar to that reported in our previous work [25]. Typically, the BaTiO3 nanocrystals (S8) were mixed with a silicon-containing heat-resistant resin (CYN-01), and then some silane coupling agent (KH550) was added into the above mixture. Dimethylacetamide (DMAc, Guangzhou Jinhuada Chemical Reagent Co., Ltd., Guangzhou, China) was used as the solvent. The mass ratio of MBaTiO3:MDMAc:MPolymer:MKH550 was kept at 100:45:25:4.
The as-prepared mixture was ultrasonically treated for 30 min to obtain a uniform slurry. The above slurry was coated on an Al foil by a bar coater (T-300CA) and a coating rod (D10-OSP010-L0400) from Shijiazhuang Ospchina Machinery Technology Co., Ltd. (Shijiazhuang, China). The as-formed films were then dried in an oven at 220 °C for 10 min and finally used for the test of dielectric properties.

Characterization of BaTiO3 Nanocrystals and BPA Films

The X-ray diffraction (XRD) patterns of the BPA composite films and BaTiO3 powders were recorded by a DX-2700BH X-ray diffractometer (Dandong, China) using Cu Kα irradiation. The morphologies and particle sizes of the BaTiO3 samples were measured using a scanning electron microscope (SEM, Hitachi S-4800, Japan). The particle-size distribution was statistically analyzed according to the SEM images. The pH values of the suspensions were measured using a pH meter (PHS-2C). The yields of the BaTiO3 samples were calculated according to the ratios of the experimental BaTiO3 mass to its theoretical mass on the basis of Ba conservation.
Fourier-transform infrared (FT-IR) spectra were recorded on a Bruker-Equinox 55 spectrometer in a wavenumber range of 4000-400 cm−1 using the KBr technique. The dielectric constant (ε) and loss (tanδ) of the BPA films were measured using a high-precision high-voltage capacitor bridge (QS89, Shanghai Yanggao Capacitor Co., Ltd., Shanghai, China), and the frequency during the dielectric performance test was kept at 10 Hz. The breakdown strengths of the BPA films were measured using a withstand voltage tester (GY2670A, Guangzhou Zhizhibao Electronic Instrument Co., Ltd., Guangzhou, China).

Results and Discussion

The TiO2-seeded growth process of BaTiO3 nanocrystals is shown in Figure 1. The commercially available TiO2 (P25) nanoparticles, with a mixed phase of anatase and rutile and a size range of 20-25 nm, are used as the Ti source and seeds in the synthesis of BaTiO3 nanocrystals via a conventional hydrothermal process in a strongly basic aqueous solution. In this synthesis, TiO2 nanoparticles first react with NaOH and form insoluble titanate species (e.g., Na2TiO3), which then act like crystal nuclei to form BaTiO3 nanocrystals by reacting with Ba2+ ions under the hydrothermal conditions. We systematically investigated the effects of the molar Ba/Ti ratio (RBa/Ti), hydrothermal temperature (T/°C) and time (t/h) on the phase, morphology and particle size of the BaTiO3 nanocrystals.

Influence of Molar Ba/Ti Ratio

In order to verify the effect of the molar Ba/Ti ratio on the formation of BaTiO3 nanoparticles, we synthesized a series of samples with various RBa/Ti values from 1.6 to 2.5, and the other hydrothermal conditions were kept the same: sodium hydroxide concentration [NaOH] = 2.0 mol L−1 (pH = 13.6), T = 200 °C, and t = 8 h. The typical results of these samples are shown in Figure 3.

The XRD peaks of the samples index to the cubic BaTiO3 phase, including the (211) and (220) reflections, according to JCPDS card no. 31-0174 [40]. No peaks belonging to other identifiable impurities can be found in any of the samples obtained, indicating the as-obtained BaTiO3 samples are pure. As Figure 3b shows, the peak at about 45° can be divided into two diffraction sub-peaks at 44.9 and 45.3°, attributable to the (200) and (002) reflections of the tetragonal BaTiO3 species, respectively [41]. With the increase of the RBa/Ti value from 1.6 to 2.5, the peaks near 45° become wider and wider, suggesting that a higher RBa/Ti value is favorable for forming a tetragonal BaTiO3 phase. Figure 3c shows the plots of particle size dependent on the RBa/Ti values. When RBa/Ti = 1.6-1.8, the particle sizes are 90-100 nm (97 ± 15 nm for RBa/Ti = 1.6 and 93 ± 24 nm for RBa/Ti = 1.8), but the degree of uniformity is not high. Figure 3d shows the yields of BaTiO3 samples synthesized with various RBa/Ti values after hydrothermal treatment at 200 °C for 8 h ([NaOH] = 2.0 mol L−1). One can see that the yields of all the samples are close to 100%, indicating the complete conversion of TiO2 to BaTiO3 nanocrystals. The formation of a small amount of crystal water may make the BaTiO3 yield a little larger than 100% relative to the TiO2 amount [42]. At an intermediate RBa/Ti value (Figure 3g), the particle size of the BaTiO3 sample is 91 ± 22 nm, and it shows a more uniform solid spherical particle morphology.
When RBa/Ti = 2.5 (Figure 3h), the particle size of the BaTiO3 sample is 98 ± 26 nm, and one can see obviously clean-cut crystal faces for the BaTiO3 particles, suggesting a higher degree of crystallinity and favorable formation of the tetragonal BaTiO3 phase. Taking the results of XRD and particle-size distribution into account, we can tentatively conclude that a higher Ba/Ti ratio is more favorable for forming tetragonal BaTiO3 nanocrystals with a more uniform size.

Influence of Hydrothermal Temperature

The effect of hydrothermal temperature on the synthesis of BaTiO3 nanoparticles was investigated by changing the hydrothermal temperature from 150 to 210 °C under the conditions RBa/Ti = 2.0, t = 8 h and [NaOH] = 2.0 mol L−1, and Figure 4 shows the characterization results of XRD and SEM.

Figure 4c shows the particle-size distribution plot versus hydrothermal temperature (T). When T = 150 °C, the particle sizes of the as-obtained BaTiO3 nanocrystals are 85 ± 15 nm. When T = 165 °C, the particle size of the as-obtained BaTiO3 is about 74 ± 13 nm, seeming to become smaller, but the uniformity is low. When the temperature increases to 180 °C, the particle size of the as-obtained BaTiO3 is 88 ± 10 nm, and the morphology of the BaTiO3 particles becomes relatively uniform. When T = 210 °C, the particle size of the as-obtained BaTiO3 sample is 91 ± 14 nm, just a slight increase. As Figure 4c shows, the particle sizes of the BaTiO3 samples obtained at various hydrothermal temperatures are kept almost constant at about 80-90 nm. Figure 4d shows the plot of the yield of the BaTiO3 sample versus the hydrothermal temperature. One can see that over the hydrothermal temperature range of 150-180 °C, the yield is close to 100%; when the hydrothermal temperature is 210 °C, the yield slightly decreases because of the complete dehydration reaction at the elevated temperature.
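A small worked instance of the yield bookkeeping used in these subsections, assuming (as the experimental section states) that the 1.5 g TiO2 charge converts completely to BaTiO3; the molar masses are standard values and the recovered mass is hypothetical:

```python
M_TIO2, M_BATIO3 = 79.87, 233.19               # g/mol, standard molar masses

m_tio2 = 1.5                                   # g of TiO2 (P25) charged
m_theoretical = m_tio2 * M_BATIO3 / M_TIO2     # complete conversion assumed
m_actual = 4.30                                # hypothetical recovered mass, g
print(m_theoretical, m_actual / m_theoretical) # ~4.38 g and a yield of ~0.98
```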
According to the XRD patterns (Figure 4a,b) and SEM images (Figure 4e-h), we find that a higher hydrothermal temperature is helpful to form tetragonal BaTiO3 nanocrystals with a more uniform spherical morphology. For safety's sake, the hydrothermal temperature was chosen as 210 °C for the synthesis of BaTiO3 nanocrystals in the following investigation. Caution: the working temperature limit of a PTFE hydrothermal reactor is usually about 220 °C, and too high a temperature will cause explosion.

Influence of Hydrothermal Time

The effect of hydrothermal time on the formation of BaTiO3 nanocrystals was also examined (Figure 5). Figure 5a,b shows the XRD patterns. As Figure 5a shows, the XRD peaks of all the samples can be assigned to the cubic/tetragonal BaTiO3 phase with no other identifiable impurity peaks. The partially enlarged XRD patterns in Figure 5b show the detail that the XRD peaks at around 45° become wider and wider as the hydrothermal time increases from 2 h to 16 h, indicating that the BaTiO3 sample obtained with a longer hydrothermal time contains more tetragonal BaTiO3 species. Figure 5c shows that the BaTiO3 sample gradually changes from small nanoparticles (~70 nm) to large ones (~100 nm) as the hydrothermal time is prolonged from 2 h to 16 h. Figure 5d shows the yield plot of the BaTiO3 nanocrystals versus hydrothermal time.
With a short hydrothermal time of 2 h, the BaTiO3 yield is about 92% because of the incomplete reaction. When the hydrothermal time increases to 4-16 h, the yields of the BaTiO3 samples are close to 98%. The samples in Figure 5e-g exhibit a spherical shape; when the hydrothermal time increases to 12-16 h, as Figure 5h,i shows, the as-obtained BaTiO3 samples take on a planar polyhedral morphology. It is interesting that the particle sizes of the BaTiO3 samples are close to 100 nm and do not change obviously with the prolonging of hydrothermal time to 16 h. In addition, as Figure 5i shows, the BaTiO3 nanoparticles obtained by hydrothermal treatment at 210 °C for 16 h are uniform in particle size and well dispersed.

Figure 6 shows the FT-IR spectra of the BaTiO3 samples synthesized with different hydrothermal times (RBa/Ti = 2.0, T = 210 °C, [NaOH] = 2.0 mol L−1). The bands at 3431 and 1568 cm−1 can be attributed to the stretching modes of the adsorbed water molecules and O-H groups, indicating that the surfaces of the BaTiO3 nanocrystals contain some adsorbed water and -OH groups. The weak band at 1400 cm−1 can be attributed to the stretching mode of the C-O groups because of the incorporation of CO2 into the basic solution. The broad and strong absorption band at 562 cm−1 is attributed to the normal vibration of Ti-O(I) stretching, and the weaker and sharper absorption band near 438 cm−1 can be attributed to the normal vibration of Ti-O(II) bending. When the hydrothermal time is extended from 2 h to 16 h, the bands at 562 and 438 cm−1 become stronger and sharper, indicating that BaTiO3 nanocrystals with a high degree of crystallinity are formed.

According to the XRD patterns (Figure 5a,b), SEM images (Figure 5e-i) and FT-IR spectra (Figure 6), the BaTiO3 nanocrystals obtained by hydrothermal treatment at 210 °C for more than 8 h are of uniform spherical morphology with a size range of 95-100 nm and a high degree of crystallinity. Therefore, the optimum hydrothermal parameters for the synthesis of BaTiO3 nanocrystals are RBa/Ti ≥ 2, T ≥ 200 °C, and t ≥ 8 h. The as-obtained BaTiO3 nanocrystals are a mixture of cubic and tetragonal phases and exhibit a uniform spherical particulate morphology with a size range of 90-100 nm. The as-obtained spherical BaTiO3 nanocrystals show a high performance in ceramic capacitors for energy-storage applications.

Understanding of Growth Mechanism

In the hydrothermal synthesis of BaTiO3 nanocrystals, TiO2 (P25) nanoparticles are used as the solid-state Ti source and seeds for crystal growth. The possible growth mechanism of the BaTiO3 nanocrystals by the hydrothermal process is shown in Figure 7.
TiO2 nanoparticles first react with OH− ions in the strongly alkaline solution to form a soluble titanium hydroxide complex, which can form a negatively charged Ti-O chain. These negatively charged Ti-O chains attract positively charged Ba2+ or BaOH+ ions to form BaTiO3 nuclei, on which the excess Ba2+ species continue to grow in the strongly alkaline solution under the hydrothermal conditions over a long time. The possible reactions for the growth of BaTiO3 nanocrystals can be described as follows:

TiO2 + 2OH− + 2H2O → [Ti(OH)6]2−   (1)

[Ti(OH)6]2− + Ba2+ → BaTiO3 + 3H2O   (2)

Using TiO2 (P25) nanoparticles as the seeds and Ti source for the synthesis of BaTiO3 nanocrystals, the negatively charged Ti-O chains are first formed on the surface of the TiO2 (P25) particles in the strongly alkaline solution, and the whole TiO2 (P25) nanoparticles are then gradually transformed into [Ti(OH)x]4−x species. The negatively charged Ti-O chains (i.e., [Ti(OH)6]2−) react with Ba2+ ions to form BaTiO3 nanocrystals under hydrothermal conditions. The large spherical particles formed in situ on the TiO2 (P25) nuclei may resist agglomeration because of their weak attraction to each other. The small particles can be self-regulated by the interaction of van der Waals torque (Casimir torque) under high-temperature Brownian motion via the oriented attachment mechanism [43]. During the long hydrothermal reaction, smaller crystals dissolve and re-deposit on larger particles for oriented attachment and crystal extension via the Ostwald ripening process. Therefore, the growth mechanism for the formation of BaTiO3 nanoparticles may involve the following steps: (1) TiO2 (P25) nanoparticles are transformed into [Ti(OH)x]4−x species in the strongly alkaline solution; (2) Ba2+ ions react with the [Ti(OH)x]4−x species to form BaTiO3 nanocrystals; (3) small BaTiO3 nanocrystals grow into large ones via the Ostwald ripening process and the oriented attachment mechanism.
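As a quick sanity check on the two growth reactions written above, the following Python sketch verifies that each is balanced in both atoms and charge. The species compositions are written out by hand; the script is an illustration, not part of the original work.

```python
from collections import Counter

def side_total(side):
    """Sum element counts (and net charge 'q') over (species, coefficient) pairs."""
    total = Counter()
    for species, coeff in side:
        for key, n in species.items():
            total[key] += coeff * n
    return total

def clean(counts):
    """Drop zero entries so a cancelled charge compares equal to 'absent'."""
    return {k: v for k, v in counts.items() if v != 0}

def is_balanced(reactants, products):
    return clean(side_total(reactants)) == clean(side_total(products))

# Species written as element -> count maps; the key 'q' carries ionic charge.
TiO2   = {"Ti": 1, "O": 2}
OH     = {"O": 1, "H": 1, "q": -1}
H2O    = {"H": 2, "O": 1}
TiOH6  = {"Ti": 1, "O": 6, "H": 6, "q": -2}  # [Ti(OH)6]2-
Ba     = {"Ba": 1, "q": 2}                   # Ba2+
BaTiO3 = {"Ba": 1, "Ti": 1, "O": 3}

# Reaction (1): TiO2 + 2 OH- + 2 H2O -> [Ti(OH)6]2-
print(is_balanced([(TiO2, 1), (OH, 2), (H2O, 2)], [(TiOH6, 1)]))    # True
# Reaction (2): [Ti(OH)6]2- + Ba2+ -> BaTiO3 + 3 H2O
print(is_balanced([(TiOH6, 1), (Ba, 1)], [(BaTiO3, 1), (H2O, 3)]))  # True
```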
Dielectric Properties of the BPA Film with BaTiO3 Nanoparticles

The spherical BaTiO3 nanoparticles with a size range of 91 ± 14 nm (S8 in Table 1), obtained under the optimum conditions ([NaOH] = 2.0 mol L−1, RBa/Ti = 2.0, T = 210 °C and t = 8 h), were used to prepare BaTiO3/polymer/Al (BPA) composite films to verify the feasibility of the BaTiO3 sample in capacitor energy-storage applications. The typical XRD patterns, SEM image and dielectric properties of the BPA films with the BaTiO3 sample (S8) are shown in Figure 8. Figure 8a shows the XRD patterns of the BaTiO3 sample, the polymer/Al foil, and the BPA film. According to the JCPDS card (No. 99-0005), the diffraction peaks at 2θ = 38.47°, 44.72°, and 65.09° correspond to the (111), (200), and (220) planes of the Al foil, respectively. The XRD pattern of the BPA film is a superposition of those of the BaTiO3 sample and the Al foil, and no other impurities are found in the BPA film. Figure 8b shows a typical SEM image of the BPA film; the film exhibits a uniform distribution of BaTiO3 nanoparticles. Figure 8c gives the dielectric properties of the BPA films with spherical BaTiO3 nanoparticles. As the statistical results show, the average dielectric constant of the BPA films reaches 59, the average dielectric loss is as low as 0.008, and the average breakdown strength reaches 102 kV mm−1. These electrical properties are much better than those of previous reports [44][45][46][47][48][49]. The TiO2-seeded hydrothermal process is thus an efficient route to spherical BaTiO3 nanoparticles for potential capacitor energy-storage applications.
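To put these film-level numbers in rough perspective, one can estimate the maximum recoverable energy density under the simplifying assumption of a linear dielectric response up to breakdown (u = 1/2 ε0 εr E²). This assumption is an idealization for a ferroelectric composite and tends to overestimate, so the sketch below is only a back-of-envelope illustration using the averages reported above, not a calculation from the paper.

```python
EPS0 = 8.854e-12          # vacuum permittivity, F/m

def linear_energy_density(eps_r, e_breakdown_kv_per_mm):
    """Energy density (J/cm^3) of a linear dielectric charged to its breakdown field."""
    e_field = e_breakdown_kv_per_mm * 1e6          # kV/mm -> V/m
    u_j_per_m3 = 0.5 * EPS0 * eps_r * e_field**2   # J/m^3
    return u_j_per_m3 / 1e6                        # J/m^3 -> J/cm^3

# Averages reported for the BPA films: eps_r = 59, Eb = 102 kV/mm
print(f"{linear_energy_density(59, 102):.2f} J/cm^3")   # ~2.72 J/cm^3
```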
We compared the dielectric constant, dielectric loss, and breakdown strength of the BPA films with those of literature reports [25,31,44,46,49,50], and the results are shown in Table 2. The BPA films with the TiO2-seeded BaTiO3 nanocrystals exhibit an excellent, well-balanced dielectric performance.

Conclusions

A TiO2 (P25) nanoparticle-assisted hydrothermal process has been developed to synthesize BaTiO3 nanocrystals in a strongly alkaline solution (pH = 13.6) using TiO2 (P25) and Ba(NO3)2 as the starting materials and NaOH as the mineralizer. The particle sizes, morphologies, and phases of the BaTiO3 nanocrystals were controlled by changing the molar Ba/Ti ratio and the hydrothermal temperature and time. The XRD and SEM results indicate that a high Ba/Ti ratio (≥2.0), a high hydrothermal temperature (≥200 °C), and a long hydrothermal time (≥8 h) favor the formation of mixed cubic/tetragonal BaTiO3 nanocrystals with a uniform, well-dispersed spherical particulate morphology (90-100 nm). Under the optimum conditions ([NaOH] = 2.0 mol L−1, RBa/Ti = 2.0, T = 210 °C and t = 8 h), the as-obtained spherical BaTiO3 nanoparticles have a narrow particle size range of 91 ± 14 nm. It should be emphasized that the particle size and morphology of the BaTiO3 nanocrystals remain relatively stable when the hydrothermal conditions vary within a proper range, indicating a robust and efficient process toward spherical BaTiO3 nanocrystals. The growth mechanism of the TiO2-assisted hydrothermal process for the synthesis of BaTiO3 nanocrystals is attributed to dissolution-crystallization, Ostwald ripening, and oriented attachment. The BaTiO3/polymer/Al films containing the above BaTiO3 nanoparticles show a high dielectric constant of 59, a high breakdown strength of 102 kV mm−1, and a low dielectric loss of 0.008. The TiO2-seeded hydrothermal process developed here is an efficient route to spherical BaTiO3 nanoparticles for potential capacitor energy-storage applications.
2020-03-19T10:26:51.352Z
2020-03-14T00:00:00.000
{ "year": 2020, "sha1": "2ac9595c29f5ef50178a98fb900aed77c5a421dd", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4352/10/3/202/pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "9b252af20253603980bd3d65a35abc7e514ffe0c", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [ "Materials Science" ] }
51896606
pes2o/s2orc
v3-fos-license
Effects of Heating and Storage on the Antifungal Activity of Camel Urine

Camel urine is considered a 'miraculous' drug in Prophetic Medicine; since the pre-Islamic era, camel milk and urine have been used as drinking medicine for different health problems. In addition, camel urine has been reported to be effective as an antimicrobial agent and may not have side effects for humans. Furthermore, camel urine may be resistant to factors such as high temperatures and extended storage under laboratory conditions, which can reduce the effectiveness of antibiotics. The aim of our study was to examine the effectiveness of camel urine as an antifungal agent following exposure to high temperatures and long storage periods under laboratory conditions. After keeping camel urine under natural laboratory conditions for 6 weeks and heating it at temperatures of up to 100°C, we tested it on the fungi Aspergillus niger and Fusarium oxysporum and on the yeast Candida albicans. We then measured the dry weight of each microorganism and determined their minimum inhibitory and minimum fungicidal concentrations. Our results showed that after being kept for 6 weeks, camel urine did not lose its antifungal activity; dry weights following treatment were reduced by 100% relative to the dry weight prior to treatment for Aspergillus niger and Candida albicans, and by 53.33% for Fusarium oxysporum. Our study demonstrates that camel urine is a highly effective and resilient antifungal agent for treating human and plant fungal diseases.

Introduction

The camel is mentioned in the Holy Qur'an as a particularly important animal ('Do they not look at the camel, how it was created?', Surah 88: Al-Ghâshiyah), and is referred to by other names such as al-ibil, al-nagah, al-jamal, al-ishar and al-him [1]. Camel urine has been considered a 'miraculous' drug in Prophetic Medicine since the pre-Islamic era [2], and has been used as traditional and folk medicine for women's hair; gums and teeth; skin injuries; snake bites; stomach pain; tumors; the common cold; diarrhea and nausea; diabetes; jaundice; scabies; and eye, skin, liver and nail infections [1][2][3][4][5]. Camel urine is also commonly used against cancer and respiratory tract infections in alternative medicine [6]. Camel urine has been reported to be effective as an antimicrobial agent, and may not have side effects for humans [7]. Muhammad (1998) reported that patients who were given camel urine to treat digestion problems recovered after two months of treatment [8]. Al-Yousef et al. (2012) found that camel urine has no cytotoxic effect on mononuclear cells and shows strong immune activity by inducing IFN-γ and inhibiting the Th2 cytokines IL-4, IL-6 and IL-10. Kidney, liver and stomach tissues of mice infected with Escherichia coli recovered with no histopathological effects after treatment with camel urine at concentrations of up to 100% [9][10][11][12]. Studies have tested the antimicrobial activity of camel urine against pathogenic microorganisms including the fungi Aspergillus niger, A. flavus, Fusarium oxysporum, Rhizoctonia solani, Ascochyta sp., Pythium aphanidermatum, Sclerotinia sclerotiorum and Candida albicans, and the bacteria Staphylococcus aureus, Streptococci, E. coli, Pseudomonas aeruginosa and Klebsiella pneumoniae. These studies showed high antimicrobial activity against the tested microorganisms, even when accompanied by changes in anions and cations [4,13,14,15,16,17,18].
The antimicrobial activity of camel urine is due to factors such as high salt concentrations, alkalinity, natural bioactive compounds from the plants camels eat, resident bacteria, and excreted antimicrobial agents. Compared with the urine of other cattle, camel urine is alkaline owing to high concentrations of potassium, magnesium and albuminous proteins, and low concentrations of uric acid, sodium and creatine [19,20]. The composition of camel urine differs from that of other cattle and goats because of the types of plants camels consume and their feeding habits; camels prefer browse with high mineral concentrations that decline more slowly on drying than in other types of forage such as grasses [21][22][23]. Further, camels eat a variety of vegetation including thorny bushes, halophytes, salty and sour plants, shrubs and aromatic species that are avoided by cattle and goats (e.g., Haloxylon aphyllum, H. persicum, Salsola gemmascens, S. orientalis, Astragalus, Aristida karelinii and A. pinnata) [17,18,20,24]. The aim of our study was to investigate the resistance of camel urine to heating at high temperatures and to storage for extended periods under laboratory conditions, factors that can reduce the effectiveness of antibiotics.

Study materials

The molds Aspergillus niger and Fusarium oxysporum were isolated and identified at the Cairo MIRCEN, Ain Shams University, Cairo, Egypt. The tested fungi were incubated at 28 ± 2°C; Candida albicans ATCC CA 10231 was incubated at 30 ± 2°C. Camel urine was collected from live camels in the desert north of Jeddah in sterilized dark bottles that were taken directly to the laboratory. To investigate the effect of storage time and heating on the antifungal activity of camel urine, the collected urine was divided into two major groups. The first group was subdivided into three portions that were heated at 60, 80 and 100°C for 60 min. The second group was subdivided into three portions that were stored for 3, 6 and 9 months before laboratory analyses. The positive control was fresh camel urine kept at 4°C.

Laboratory analyses

The antimicrobial activity of camel urine against A. niger, F. oxysporum and C. albicans was determined in vitro. Activity levels were measured using disc diffusion and broth dilution, following methods previously described by the Clinical and Laboratory Standards Institute (CLSI; formerly the National Committee for Clinical Laboratory Standards) [25,26]. For disc diffusion we used filter paper discs (1 mm diameter, impregnated with 100 μL), which were placed on the pre-inoculated agar surface. Negative controls were prepared with sterilized discs. Plates were incubated at 28°C for 7 days for A. niger and F. oxysporum, and at 30°C for 48 h for C. albicans. The inhibition zone around each disc was measured. All tests were performed in triplicate. The minimum inhibitory concentration (MIC) and minimum fungicidal concentration (MFC) of camel urine against the fungi were investigated using a broth-microdilution method. C. albicans, A. niger and F. oxysporum were cultured and resuspended in 1 mL Mueller-Hinton broth (Oxoid) to obtain a final concentration of 100 cfu mL−1. Camel urine was serially diluted with Mueller-Hinton broth using methods approved by the National Committee for Clinical Laboratory Standards (M27-A) [27].
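The two-fold serial dilution underlying the broth-microdilution assay can be laid out as in the minimal sketch below. The starting concentration, the number of steps and the printed units are illustrative assumptions chosen to span the MIC/MFC range (1-32 μL mL−1) reported later in the Results; they are not values stated in the methods.

```python
def twofold_series(start_conc, steps):
    """Concentrations (same unit as start_conc) of a two-fold serial dilution."""
    return [start_conc / 2**i for i in range(steps)]

# Illustrative layout: start at 32 uL of urine per mL of broth, 6 two-fold steps,
# spanning the MIC/MFC range (1-32 uL/mL) reported in the Results.
for conc in twofold_series(32.0, 6):
    print(f"{conc:g} uL/mL")   # 32, 16, 8, 4, 2, 1
```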
After incubation, the MIC was determined as the lowest concentration at which there was no visible growth compared with the control [28,29]. The MFC was determined by inoculating 0.1 mL from the no-growth wells at the MIC onto sterile Sabouraud Dextrose Agar (SDA) plates for C. albicans and Potato Dextrose Agar (PDA) plates (Oxoid) for A. niger and F. oxysporum (Table 1). The plates were incubated at 30°C for 48 h for C. albicans, and at 28°C for 7 days for A. niger and F. oxysporum. The lowest concentration of camel urine at which the tested fungi showed no growth was considered the MFC; the negative control was a plate with media only [30,31]. The dry weight of the tested fungi was measured to determine the effects of the doses recommended in Arab folk medicine. Samples (1 mL) of A. niger and F. oxysporum spores, and of C. albicans suspension (10^8 cfu mL−1), were inoculated into 5, 10 and 15 mL samples of treated camel urine with Sabouraud Dextrose Broth (SDB) or Potato Dextrose Broth (PDB) in 250 mL Erlenmeyer flasks. Flasks were incubated with shaking (180 rpm) at 30°C for 7 days for A. niger and F. oxysporum, and for 48 hours for C. albicans. Afterwards, samples were collected and centrifuged at 10,000 rpm for 10 min, and the fungal mycelia and yeast cells were collected. Growth was estimated as dry weight after washing with triple-distilled water and drying at 80°C on Whatman no. 1 filter paper to constant weight [32].

Statistical analysis

The results were analyzed by paired-samples t-test using the IBM SPSS 20 statistical software to compare the mean values of each treatment. The results are expressed as means ± SE. Probability levels of less than 0.01 were considered highly significant.

Results

We observed strong growth inhibition of C. albicans, A. niger and F. oxysporum after treatment with fresh camel urine, which provided evidence for camel urine as an active antifungal agent (Table 1). The most sensitive of the tested fungi were C. albicans and A. niger, while the inhibition of F. oxysporum decreased by only 22% when camel urine was stored for 6 months. The lowest MICs, of 1 μL mL−1, were obtained with untreated camel urine; with urine treated at 60°C and 80°C and stored for 2 months for C. albicans; and with all treatments except urine stored for 6 months for A. niger (Table 2). The most resistant fungus was F. oxysporum, with MIC values ranging from 2 to 8 μL mL−1. MFC values ranged from 4 to 32 μL mL−1, and were lowest for A. niger and highest for F. oxysporum (Table 3). Heating camel urine at different temperatures did not affect fungal dry weight (Table 4). Fungal growth was completely inhibited by a 15% concentration of camel urine for all treatments and all tested fungi, and by 5% and 10% concentrations for most treatments. The activity of camel urine after heating at different temperatures increased compared with untreated camel urine; there was still 100% growth inhibition after treatment at 100°C for all tested fungi and all concentrations of camel urine. However, storage time increased the inhibitory effect on C. albicans and F. oxysporum at camel urine concentrations of 5 and 10% (Table 5).

Discussion

Camel urine is an efficient antimicrobial compound, particularly against Aspergillus sp., as demonstrated by our study and others [13,15,16,17]. Our results on the effects of heating and storage time on the antimicrobial activity of camel urine are consistent with those of several other studies [33,34].
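The paired-samples t-test used for the treatment comparisons (run in SPSS 20 in the original work) can be reproduced with standard tools. In the sketch below the paired dry-weight values are invented placeholders, since the raw data are not given in the text; only the test itself and the 0.01 significance threshold come from the paper.

```python
from scipy import stats

# Hypothetical paired dry weights (g) before and after a treatment, one pair per
# replicate flask -- placeholder numbers, not data from the paper.
before = [0.41, 0.38, 0.44, 0.40, 0.39]
after  = [0.05, 0.07, 0.04, 0.06, 0.05]

t_stat, p_value = stats.ttest_rel(before, after)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.01:   # the paper's threshold for 'highly significant'
    print("Difference is highly significant at the 0.01 level.")
```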
The strong inhibition of the tested fungi, which grow in an acidic environment, was due to the high alkalinity of camel urine resulting from high concentrations of K, Mg, Ca and proteins, and low concentrations of carbohydrate and cellulose [13,19,20,21]. The inhibited growth of C. albicans, A. niger and F. oxysporum shows that the antimicrobial activity of camel urine was not affected by heating or storage time, perhaps because the dose used was high (100 μL); these results are reflected in the MIC and MFC values. Heating and storage time had a greater effect at the dose of camel urine recommended in Arab folk medicine, which may be due to changes in the structure and composition of the urine as a result of treatment. Al-Awade and Al-Judaibi (1999) explain that camel urine is very effective against microorganisms because of several components, including bacteria that can survive under extreme conditions. These bacteria have special characteristics that enable them to live under high osmotic concentrations and alkalinity, and without nutrition; further, they remain highly motile even after incubation at low temperatures. Our results show that the antimicrobial activity of camel urine increases after storage and after heating up to 100°C, which completely inhibited the growth of C. albicans, A. niger and F. oxysporum. Heating may increase the concentration of active compounds in the urine through lysis of the bacterial cells, which in turn release enzymes and antibiotics. Storage time had no effect at the 15% concentration of camel urine. At high concentrations, more antibiotics are secreted by the bacteria, alkaline concentrations are higher, and there are more active compounds from the plants. The increased inhibitory effects on C. albicans and F. oxysporum at concentrations of 5 and 10% may be due to low concentrations of active compounds in the urine, which may allow the fungal cells to become more permeable to antibiotics and active compounds [14,42,43]. The high antifungal activity of camel urine was reflected in the inhibition of the tested fungi, and the results agree with Al-Judaibi's findings on the activity of camel urine against A. niger and C. albicans compared with the antifungal agents Mycostatin, Pevaryl and Nizoral [44]. Several studies have examined the effects of camel urine on cells; the results showed efficient repair of damaged cells, including tumor cells, and reported anticancer activity as well as antiplatelet activity against ADP-induced aggregation [8,9,10,11,45,46,47].

Conclusion

In conclusion, camel urine is a highly effective and resilient antifungal agent for treating human and plant fungal diseases. Our results support the traditional uses of camel urine as an antimicrobial agent, which may not have side effects for humans. In addition, heating and storage of camel urine did not alter its main fungicidal effects.
2019-03-30T13:09:33.482Z
2014-12-21T00:00:00.000
{ "year": 2014, "sha1": "2b3f712da7581ee84baa6edccb07781e20926572", "oa_license": "CCBY", "oa_url": "https://www.omicsonline.org/open-access/effects-of-heating-and-storage-on-the-antifungal-activity-of-camel-urine-2327-5073.1000179.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "0b626396eb54a1c944d4b1328b1b8c6bf9203ed6", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
6613494
pes2o/s2orc
v3-fos-license
Assessment of the contents related to screening on Portuguese language websites providing information on breast and prostate cancer

The objective of this study was to assess the quality of the contents related to screening in a sample of websites providing information on breast and prostate cancer in the Portuguese language. The first 200 results of each cancer-specific Google search were considered. The accuracy of the screening contents was defined in accordance with the state of the art, and its readability was assessed. Most websites mentioned mammography as a method for breast cancer screening (80%), although only 28% referred to it as the only recommended method. Almost all websites mentioned PSA evaluation as a possible screening test, but correct information regarding its effectiveness was given in less than 10%. For both breast and prostate cancer screening contents, the potential for overdiagnosis and false positive results was seldom addressed, and the median readability index was approximately 70. There is ample margin for improving the quality of websites providing information on breast and prostate cancer in Portuguese.

Breast Neoplasms; Prostatic Neoplasms; Internet

Introduction

The use of the Internet has increased over recent years, mainly because it is easily accessible and allows information to be gathered from different sources [1,2]. It has become one of the most important sources of both general and health-related information, and its potential to influence individual health behaviours emphasizes the importance of monitoring the quality of health contents available on websites [3,4,5]. Although there are different guidelines for assessing the formal quality of these sources of information [6,7], as well as tools to assess the readability of the contents, there are no instruments to evaluate the accuracy of the information on specific topics [8,9]. Such an evaluation needs to be conducted case by case, taking into account the best available evidence on each health topic and the local health policies [10].
Information related to oncological diseases accounts for an important proportion of Internet searches on health issues [11], and breast and prostate cancer patients are those who use the Internet most frequently to search for information related to their disease [12]. Breast and prostate cancers are leading causes of oncological morbidity and are among the malignancies with the highest relative survival, which leads to information on these topics being sought by the general population, and by patients and their families in particular [11,13,14,15]. Furthermore, breast and prostate cancers have specificities regarding the potential for control through secondary prevention, and a large proportion of women and men participate in screening activities, even though population-based screening is recommended only for breast cancer. Thus, these oncological diseases may constitute a good model for designing a framework for website quality assessment that may be extended to other conditions for which screening is recommended or effectively conducted, regardless of the available evidence on its effectiveness. We aimed to replicate an Internet search conducted by a layperson and to assess the quality of the contents on breast and prostate cancer screening in websites providing information on breast and prostate cancer in Portuguese.

Selection of the websites for analysis

We searched the World Wide Web to identify Portuguese language web pages that addressed breast or prostate cancer, on the 16th and 15th of September 2011, respectively, using the Google search engine (http://www.google.com), with the expressions "cancro da mama" and "cancro da próstata", respectively. We saved the first 200 results from each search for further analysis, including information on the URL (Uniform Resource Locator) of each web page, and registered its rank in the search. The websites were initially screened for eligibility by applying the following exclusion criteria: inaccessible websites due to a non-functioning URL; websites not providing information in Portuguese; repeated websites (corresponding to different web pages from the same website); websites providing information on breast or prostate cancer only in the format of downloadable files (e.g. slideshows, portable document files) or only through audio or video (e.g. YouTube videos); scientific articles (whether or not located on medical websites); blogs or forums; general encyclopedias; websites providing information about female breast or prostate cancer only in the form of news; websites with no specific information on female breast or prostate cancer (e.g. advertising only, male breast cancer). To identify the contents related to breast or prostate cancer in the eligible websites, we proceeded as follows: when the URL corresponded to a website's main page, we searched the whole site; when the URL corresponded to a web page other than the website's main page, we navigated to the latter, and a more comprehensive screening of the website was then conducted to identify all relevant pages.
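One of the screening steps above, collapsing different web pages from the same website into a single entry, lends itself to automation. The sketch below is a minimal illustration rather than the authors' actual procedure: it deduplicates a ranked list of search-result URLs by host name, and the example URLs are invented.

```python
from urllib.parse import urlparse

def dedupe_by_site(urls):
    """Keep only the first (best-ranked) URL per host, preserving order."""
    seen_hosts = set()
    kept = []
    for url in urls:
        host = urlparse(url).netloc.lower()
        if host and host not in seen_hosts:
            seen_hosts.add(host)
            kept.append(url)
    return kept

# Invented example results, as if returned by a search in ranked order
results = [
    "http://example-saude.pt/cancro-da-mama",
    "http://example-saude.pt/rastreio",        # same site -> dropped
    "http://outro-site.br/cancer",
]
print(dedupe_by_site(results))
```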
General characterization of the websites

The general characterization of the websites was based on information presented on any of their pages. One investigator (D.F.) gathered data on the following variables: the website's main subject; country of origin; intended audience; media used to convey the cancer-specific information; and profit motive of the owner of the website. The websites were classified regarding the predominant relation of their contents to health (health related/not only health related), to cancer (cancer related/not only cancer related) and to breast or prostate specific disease (breast or prostate cancer specific/not specific for breast or prostate cancer). The websites were identified as registered in Portugal, Brazil or another country. This information was assessed through the domain (".pt" for Portugal, ".br" for Brazil); for other domains, the contact information of the website was consulted. The other origins included African countries where Portuguese is the official language, and non-Portuguese-speaking countries. The intended audience was classified as general population, patients, health professionals or media. Since a website could target more than one population group, these categories were not mutually exclusive. We searched the website "disclaimer"/"about" section to obtain this information. When it was not specified, we checked whether the area of activity of the institution that owned the website could be associated with a specific population group; if not, the website was considered to target the general population. The media used by the websites to convey the cancer-specific information (display of contents) were classified into six mutually exclusive categories: text only; text and figures; text and video; text and charts; text and audio; other. The affiliation of the websites was primarily defined as public or private. Among the private institutions, we distinguished the organizations responsible for the websites according to whether they were profit-making or not, grouping them as for-profit (e.g. health care providers, pharmaceutical industries, individual subjects) or non-profit (e.g. non-governmental organizations). The websites were classified according to profit intent, considering public and non-governmental organizations as non-profit, and private institutions as for-profit.

Analysis of the contents related to screening of breast and prostate cancer

• Specific contents on cancer screening

We analyzed the contents of the websites on this topic, namely the existence of specific information on cancer screening and its accuracy. We selected topics covering the different methods for screening and their effectiveness, the potential harms of screening, the recommended periodicity, the eligibility for screening, and instructions on how to proceed to be screened. The criteria to assess the accuracy of the information and its adequacy to the Portuguese setting were defined in accordance with the evidence summarized by the U.S. Preventive Services Task Force (USPSTF) [16,17], the European Union Advisory Committee on Cancer Prevention [18] and the local policy for cancer screening [19]. In Portugal, there is a screening program for breast cancer, which differs slightly from the USPSTF recommendations and those of the EU Advisory Committee on Cancer Prevention, especially regarding the ages at which women should undergo regular biennial mammography (45-69 years) [19].
From each website we selected the information about screening for further analysis. The specific items searched, as well as the message considered most appropriate to convey to the general population, are presented in Figures 1 and 2 for breast and prostate cancer, respectively. For each item, three options were possible: does not mention the subject; mentions the subject but the information is incorrect or incomplete; mentions the subject and the information is correct.

[Notes to Figures 1 and 2: the accuracy of the information was defined according to the U.S. Preventive Services Task Force [16], the Advisory Committee on Cancer Prevention [18] and the Coordenação Nacional Para as Doenças Oncológicas, Alto Comissariado da Saúde, Ministério da Saúde [19]; shaded cells represent the information considered correct. Items addressing similar subjects were grouped together, and each topic was scored as not mentioned (when none of the items within the topic were mentioned), mentioned with incorrect information (when this applied to at least one of the items, with no correct information provided in any of them), or mentioned correctly (when this applied to at least one of the items within the topic).]

• Readability

To assess the readability of the contents on cancer screening, in the websites providing information on breast cancer we selected the text from the sections related to symptoms, diagnosis, types of cancer and screening, while in the websites providing information on prostate cancer we selected information related to screening, cancer detection and diagnosis. These sections were systematically selected in all websites to ensure comparability. We used the Fernández-Huerta index to determine the readability of the contents. This index is computed as 206.84 − 0.60 × (number of syllables per 100 words) − 1.02 × (average number of words per sentence); the results range from 0 to 100, from the worst level (very difficult to read) to the best level of readability. To estimate the numbers of words and syllables, we extracted the information into a Microsoft Office Word (Microsoft Corp., USA) document and analyzed the text using the software TextMeter (http://www.lazarusbrasil.org/textmeter.php, Brazil), an application for text statistics specific to the Portuguese language. This software counts the numbers of words and sentences, and also provides an algorithm for counting syllables (a minimal sketch of this computation is given after the data analysis subsection below).

Data analysis

The results are presented as the proportion of websites with each of the characteristics assessed, for the whole sample, by cancer type (breast vs. prostate cancer) and by website rank in each of the searches (first 30 URLs vs. remaining results). This cut-off was selected because individuals searching the Internet tend to navigate only up to the third page of results [20]. The contents on screening were further analyzed by the country of origin of the websites and by profit motive. The proportions were compared with the χ2 or the Fisher exact test, as appropriate. The results regarding the readability index were compared between breast and prostate cancer websites and, for each of them, according to the websites' characteristics, using the Kruskal-Wallis test.
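As a minimal sketch of the Fernández-Huerta computation described above (the original analysis relied on the TextMeter tool), the following Python code uses a naive vowel-group heuristic to count syllables in Portuguese-like text. A proper syllabifier would be needed for faithful scores, so the output is only an approximation.

```python
import re

VOWELS = "aeiouáéíóúâêôãõü"

def count_syllables(word):
    """Approximate syllables as runs of consecutive vowels (a crude heuristic)."""
    return max(1, len(re.findall(f"[{VOWELS}]+", word.lower())))

def fernandez_huerta(text):
    """206.84 - 0.60 * (syllables per 100 words) - 1.02 * (words per sentence)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[^\W\d_]+", text)
    if not words or not sentences:
        raise ValueError("text must contain at least one word and one sentence")
    syllables = sum(count_syllables(w) for w in words)
    p = 100.0 * syllables / len(words)     # syllables per 100 words
    f = len(words) / len(sentences)        # mean words per sentence
    return 206.84 - 0.60 * p - 1.02 * f

sample = "O rastreio do cancro da mama é feito por mamografia. A decisão deve ser informada."
print(f"{fernandez_huerta(sample):.1f}")   # higher = easier to read
```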
Websites selected for analysis

In the first 200 results retrieved by each cancer-specific Google search, 47 websites addressing issues related to breast cancer and 67 websites with prostate cancer information fulfilled the eligibility criteria (Figure 3). Among the former, 35 websites (74%) covered issues related to breast cancer screening, and 43 of the latter (64%) provided specific information on prostate cancer screening.

[Note to Figure 2, which presents the procedure followed in the analysis of information on prostate cancer screening: DRE: digital rectal examination; PSA: prostate-specific antigen. The accuracy of information was defined as for Figure 1, with shaded cells representing the justification considered correct.]

General characteristics of the websites

Seven out of 10 websites providing information on breast and prostate cancer were health-related, and the proportion was higher among those appearing in the first thirty results of the search (86.2% vs. 67.1%, p = 0.048). Approximately 20% and 10% of the websites exclusively covered issues related to cancer or specifically to breast/prostate cancer, respectively; these appeared in the first pages of the search 4 and 9 times more frequently, respectively (Table 1). Nearly half of the websites were from Portugal, and a Portuguese website was more likely to be found among the first 30 results (79.3% vs. 43.5%, p = 0.004). The websites appearing in the first three pages of results were more frequently aimed at cancer patients (44.8% vs. 3.5%, p < 0.001) and less often at the general population (75.9% vs. 96.5%, p = 0.001). Approximately three-quarters of the websites provided the information only in text format; video and audio were seldom used. Approximately 15% of the websites were from non-profit organizations, and these appeared more frequently in the first 30 results (31% vs. 9.4%, p = 0.005). There were no statistically significant differences in the characteristics of the websites according to the cancer addressed. However, those providing information on breast cancer tended to target the general population less often (85.1% vs. 95.5%, p = 0.053), and those on prostate cancer were more frequently from Brazil (35.8% vs. 21.3%, p = 0.123).
Contents related to screening of breast and prostate cancer

• Accuracy of the contents on breast cancer screening

Most websites mentioned mammography as a method for breast cancer screening (80%), although only 28% correctly presented it as the only recommended screening method, and sound quantitative estimates of its effectiveness were provided in only 14%. Breast self-examination and the clinical breast exam were mentioned almost as often as mammography, but the information provided was usually incorrect. The potential for overdiagnosis, false positive and false negative results was addressed in a very low percentage of the websites, and most of the time the information was not correct. The information that the radiation dose in mammography is insufficient to increase the risk of cancer was correctly mentioned in 14.3% of the websites. Approximately one-quarter of the websites gave correct information about the eligible ages for screening, but the fact that screening applies only to asymptomatic subjects was seldom addressed. The adequate periodicity of screening was mentioned in 22.2% of the websites, and the recommendations on how to proceed to be screened were correct in 31.4% of the websites (Figure 4a). The websites appearing in the first 30 results tended to have better information about screening harms (30% vs. 4%, p = 0.014) and about the periodicity of screening (70% vs. 12%, p = 0.004). Websites owned by non-profit organizations tended to provide information more frequently on how to proceed to be screened (83.3% vs. 20.7%, p = 0.011), and to mention the potential harms of screening correctly (50% vs. 3.5%, p = 0.019) (Table 2).

• Accuracy of the contents on prostate cancer screening

The evaluation of prostate-specific antigen (PSA) was mentioned as a possible screening test in nearly all websites, but information regarding the insufficient evidence of its effectiveness was given in less than 10%. The most frequently mentioned harms of screening for which correct information was given were the potential for overdiagnosis and for false positives (both 6.9%). None of the websites mentioned that screening targets asymptomatic subjects, and the age groups potentially eligible for screening were addressed by 39.5%, most of the time incorrectly. The periodicity of screening was mentioned in less than a fifth of the websites, and never with correct information. None of the websites provided information on how to proceed to be screened (Figure 4b). No significant differences were found in the analysis of the contents on prostate cancer screening according to the order of appearance in the search, the country of origin or the profit intent of the websites' affiliation, except for a less frequent reference to the potential harms of screening among the first 30 results (Table 3).

• Readability

The median readability index values were not significantly different between the websites providing information on breast and prostate cancer (73.1 vs. 69.7, p = 0.144). The readability of the contents related to breast cancer screening was lower on Portuguese websites (median: 70.2 vs. 75.7, p = 0.036) and on for-profit websites (68.7 vs. 73.7, p = 0.035).
The readability of the contents related to prostate cancer screening did not vary meaningfully by order of appearance of the website, country of origin or profit motive (Figure 5).

Discussion

Most of the websites that addressed breast or prostate cancer provided information on cancer screening, though it was often incomplete or inaccurate. It is noteworthy that the possible harms of screening were frequently overlooked. Despite the poor overall quality of the contents, the websites obtained good scores on readability. In the present study we described the assessment of the quality of the websites' contents in the detail necessary to ensure the transparency of the process. It provides a framework for analysis that can be used by other researchers and for monitoring the quality of the health information provided on the Internet. However, it has limitations that need to be addressed. The number of websites selected for analysis was relatively small, as we were attempting to replicate searches conducted by a layperson looking for general information on breast or prostate cancer. The small sample is probably an unavoidable limitation, given the need to use relatively simple and unspecific search terms and the expectation that most people are not willing to filter through a large number of websites to obtain the information they require [20,21]. Nonetheless, this study is one of the largest conducted on this issue, as other similar works selected 30 [3,22], 50 [23] or 100 [4] results. Similarly, only Google was selected because it is the most popular search engine among the Portuguese-speaking population [15]. Although the use of other search engines could yield a different sample of eligible websites, the internal validity of our study is not compromised by this methodological option. The same reasoning applies to the fact that our search was conducted on a single day for each type of cancer, and the websites identified at other moments could differ [24]. Another limitation of our study is that data collection from the websites was performed by only one investigator. However, the procedures for the evaluation of the websites were standardized and based on criteria defined a priori, to make the assessment as replicable as possible. Furthermore, a second investigator was involved in discussing the evaluation of the websites whenever their characteristics did not entirely match the predefined assessment framework. Our study evaluated the quality of the contents specifically related to screening. Other investigations of the overall quality of website contents assessed a wider range of aspects, according to the specific subject [3,4,25,26,27]. Therefore, it is not possible to compare directly the quality of the websites providing information on breast or prostate cancer screening with previous investigations, though the quality of health-related contents available on the Internet has generally been considered poor [27]. As in previous studies, the results tend to show that websites appearing in the first 30 results tend to provide more reliable information [28], which can be explained by a higher specificity of these websites for breast or prostate cancer issues (as shown by our results). Also, better websites tend to be linked to or cited by other websites more often, which increases their importance and consequently places them among the first search results.
The information on screening tended to be better on the breast cancer websites than on those related to prostate cancer, namely regarding the screening methods. In the setting of our study, we hypothesized that this is explained by the existence of organized screening for breast cancer [19], while no similar screening strategy is recommended or available for prostate cancer, as its effectiveness remains controversial and overdiagnosis is a major public health concern [17]. Also, the websites originating from Portugal tended to provide better information on screening, which can be explained by the fact that we assessed the correctness of the information according to the recommendations/guidelines followed in Portugal. This shows that, although this general framework for evaluating the quality of website contents may be used in any other Portuguese-speaking country, the results obtained will be setting-specific. Moreover, the assessment of website contents in other Portuguese-speaking countries needs to account for the specificities of the Portuguese language in each setting. For instance, in Brazil the term used to refer to cancer differs from the one used in Portugal (Brazil: câncer; Portugal: cancro); if the Brazilian form were used, the search would retrieve different websites, which illustrates the need to conduct setting-specific surveys of the quality of health information available on the Internet. In spite of these expected differences across settings, our results are in accordance with what would be expected in most contexts [27,29,30]. In particular, the results related to the profit motive of the websites and their affiliated organizations are less likely to be locale-specific. Internet users may be expected to find the information provided by websites from public or non-governmental organizations more reliable than that from for-profit organizations, as commercial interests may be responsible for incomplete or incorrect information on these websites [31]. The harms of screening were also seldom addressed. This is of particular relevance for prostate cancer screening, whose potential benefits are not considered to outweigh the deleterious effects that may be associated with it [11]. The absence of this information was particularly notorious on the for-profit websites. At a population level, this may contribute to a larger number of subjects undergoing screening without the knowledge necessary for a well-informed decision. Our study focused only on contents related to screening, which targets asymptomatic subjects of eligible ages. Notwithstanding, subjects already presenting signs and symptoms that require medical attention may search the Internet for more information. The impact of the information on screening on these subjects is difficult to ascertain, and assessing the accuracy of website contents directed at these conditions was not the aim of our study.
Readability refers to the ease with which a text is read [32], and is an important aspect of the quality of a website's content [33]. We assessed the readability of the content on screening using the Fernández-Huerta index, which was created to assess texts written in Spanish. Although it has not yet been validated for the Portuguese language, Spanish and Portuguese share the same Latin basis, and this tool has been used to assess the quality of Brazilian governmental websites [33]. We considered that the websites presented a good level of readability, as a score of 70 has been accepted as corresponding to a good level of readability in Portuguese texts when using the Fernández-Huerta index [33]. Further work is needed to establish the correspondence between the score attributed to a website and the education level needed to understand the information (according to the Portuguese curricula) [34]. To the best of our knowledge, there are no similar investigations of health-related websites in Portuguese that aimed to assess readability, which precludes a more in-depth discussion of our results. The present study demonstrates that the quality of the contents on breast and prostate cancer screening in Portuguese is far from good, warranting continuous monitoring as well as educational and regulatory actions to ensure that the general population and patients are not exposed to misleading information on the Internet. Regulation of the websites and education of information providers have been considered means to improve the general quality of websites [9,35]. In conclusion, there is a large margin for improving the quality of Portuguese language websites providing information on breast and prostate cancer. This study provides a framework for the standardized assessment of the quality of the contents of websites providing information on breast or prostate cancer, which may be used for monitoring the quality of the health information provided on the Internet.

Contributors

D. Ferreira contributed to the study design, was the main person responsible for the acquisition of data, collaborated in its analysis and interpretation, and wrote the first draft of the manuscript. H. Carreira collaborated in the data analysis and revision of the article. S. Silva and N. Lunet participated in the design of the study and reviewed the article for important intellectual content.
[Figure and table captions: Figure 1. Procedure followed in the analysis of information on breast cancer screening. Figure 3. Selection of the Internet search results for breast cancer and prostate cancer. Figure 4. Quality of the contents on breast and prostate cancer screening. Figure 5. Readability of the contents on breast (n = 35) and prostate cancer screening (n = 43) by order of appearance in the search, country of origin and profit intent of the websites. Table 1. General characteristics of the websites selected for analysis (within each variable the sum of the proportions may not be 100% due to rounding; some categories are not mutually exclusive). Table 2. Quality of the contents on breast cancer screening according to the websites' order of appearance, country of origin and profit intent of the websites' affiliation. Table 3. Quality of the contents on prostate cancer screening according to the websites' order of appearance, country of origin and profit intent of the websites' affiliation.]
2017-04-08T17:30:14.255Z
2013-11-01T00:00:00.000
{ "year": 2013, "sha1": "750d623ae827ed715ec1a9a29da1fecc4dd906d9", "oa_license": "CCBY", "oa_url": "https://www.scielo.br/j/csp/a/RtSx8x3ktgSDFh9MdynLCDv/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "750d623ae827ed715ec1a9a29da1fecc4dd906d9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
9492154
pes2o/s2orc
v3-fos-license
Ethnomedicinal survey of some plants used for the treatment of diabetes in Ibadan, Nigeria

Submitted: 24-05-2014; Revised: 21-07-2014; Published: 01-05-2015

Objectives: A comprehensive survey aimed at documenting traditional medicinal practices was carried out in targeted areas of Ibadan, Nigeria, in order to inventory plants used by traditional healers in the area for the management of diabetes. Materials and Methods: Open-ended informal interviews were administered during a series of repeated visits to the respondents, consisting mainly of traditional medical practitioners (TMPs) and herb sellers. Traditional healers who know and use medicinal plants for treating diabetes mellitus were interviewed. The inventory contains the scientific, vernacular and common names of the plants used, and the methods of preparation. Results: Twenty-seven plants commonly used by traditional healers in the region were identified. These plants were found to be very important and useful in the treatment of diabetes based on their frequency of occurrence in the recipes obtained. Herbal remedies were prepared from either dried or freshly collected plants, while the traditional solvents of choice included water, lime, local gin and the aqueous extract of fermented maize. Decoction and pulverization were the main methods of preparation, while the mode of administration varied from 1 to 3 times daily. The survey revealed that leaves form the major plant part used in herbal preparations. Residents in the study area find traditional medicine cheaper than orthodox medicine. Conclusion: This survey focuses on the various plants reported to be effective in the treatment of diabetes. It shows that plants from the Rubiaceae, Labiataceae, Meliaceae, Hypoxidaceae and Cucurbitaceae families are commonly used by traditional healers in Ibadan for the treatment of diabetes mellitus.

Introduction

In diabetes mellitus, a chronic endocrine disorder, abnormally high blood glucose is the major feature [2,3]. Despite their harmful side effects, insulin and synthetic oral hypoglycaemic agents are widely used in the management of diabetes [4]. It is as a result of these harmful side effects that herbal remedies are also preferred: they are considered safe for long-term use, easily accessible and cost-effective, so many rural dwellers can easily afford them. It is also in recognition of this that ethnomedicine has been of great interest to the scientific world over the past decades. Ethnomedicine is concerned with the study of medical systems from the native's point of view. In ethnomedicine, native categories and explanatory models of illness, including aetiologies, symptoms, courses of sickness and treatment, are investigated [5,6].
5,6 The ethnomedical approach is very useful in the study of indigenous therapeutic agents because it allows the researcher to understand treatment patterns according to native explanatory models. Furthermore, in order to preserve traditional medicinal knowledge, it is necessary that inventories of plants with therapeutic value are carried out and the knowledge related to their use documented in systematic studies. Ethnomedicinal surveys provide the rationale for the selection and scientific investigation of medicinal plants, since some of these indigenous remedies have been successfully used by significant numbers of people over extended periods of time. 7 According to the World Health Organization, at least 80% of people in developing countries depend largely on indigenous practices for the control and treatment of the various diseases affecting both human beings and their animals. 8 Such surveys also help the conservation of traditional knowledge through the identification of medicinal plants with market potential that can generate income for local communities [Figure 1]. Diabetes mellitus is a metabolic disease characterized by a high blood glucose level resulting from defects in insulin secretion, insulin action or both. 9 It is a chronic disorder that affects the metabolism of carbohydrates, fats, proteins and electrolytes in the body, leading to severe complications which are classified into acute, sub-acute and chronic. Acute complications include hypoglycaemia, diabetic ketoacidosis, and hyperosmolar hyperglycaemic non-ketotic syndrome; 10 sub-acute complications include thirst, polyuria, lack of energy, blurred vision and weight loss; 9 chronic hyperglycaemia causes glycation of body proteins, which in turn leads to complications that may affect the eyes, kidneys, nerves and arteries.
11 It is a major health problem, with its frequency increasing every day in most countries. 12 The prevalence of diabetes mellitus is on the increase worldwide and is expected to rise to 5.4% by 2025. 13 Herbal medicine is known to play an important role in diabetic therapy, particularly in developing countries where most people have limited resources and do not have access to modern treatment. 14 In the last few years there has been exponential growth in the field of herbal medicine, and medicinal plants are gaining popularity in both developing and developed countries because of their natural origin and fewer side effects. 15 Based on the historical success of natural products as antidiabetic agents, the side effects associated with the use of orthodox drugs such as insulin and oral hypoglycaemic agents, and the ever-increasing need for new antidiabetics, there is an increasing demand for the use of plant-based medicines to treat diabetes. 16 Another important factor that strengthens the use of plant materials as antidiabetics is the belief that medicinal plants provide some benefits over allopathic medicine and allow users to feel that they have some control over their choice of medication. 17 The aim of this survey was to compile the different indigenous plants used in Ibadan for the management of diabetes. Study area Ibadan is the capital city of Oyo State and the third largest metropolitan area by population in Nigeria, with a total estimated population of 1,338,659 according to the 2006 census; the land area is 3,080 square kilometres. 18 Ibadan came into existence in 1829; it is located in the south-eastern part of Oyo State and the south-western part of Nigeria, 128 km inland north-east of Lagos and 530 km south-west of Abuja. Ibadan is classified as derived savannah; it has a tropical wet and dry climate with a lengthy wet season and relatively constant temperatures throughout the course of the year. The city is naturally drained by four rivers with many tributaries, including the Ona, Ogbere, Ogunpa and Kudeti rivers. The people are mainly Yoruba, and their main indigenous occupation is farming. Ethnomedicinal survey The main data sources consisted of a series of informal interviews and general conversations with local herb sellers and other groups of people rich in traditional medicine knowledge. The interviews were conducted in the respondents' native language (Yoruba), and the information gathered was sorted; the data collected included the local names of the plants and the parts of the plants used. The plants were identified by their vernacular names and later validated at the Department of Botany, University of Ibadan. Descriptive statistics such as pie charts and percentages were used in the analysis of the data. Respondents' identity All the respondents were female, with 65% within the age range of 41-50 years, 25% within the age range of 51-60 years, and about 10% above 60 years. All of them were married; 90% of the respondents were herb sellers and 10% were traditional medical practitioners. The majority of the respondents were either primary or secondary school leavers, and most of them claimed that they inherited their vocation and ethnomedicinal knowledge from their parents. All the respondents were Yoruba-speaking.
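As a small illustration of the descriptive statistics mentioned above, the following Python sketch tallies plant-part citations and prints their percentage occurrence. The counts are hypothetical values chosen only so that the output matches the percentages reported in the Results below; they are not the authors' raw field data.

```python
# Hypothetical tally reproducing the descriptive statistics in the text;
# the counts are illustrative values chosen so the percentages match the
# Results (43 plant-part citations in total), not the authors' raw data.
from collections import Counter

citations = (["leaf"] * 17 + ["fruit"] * 7 + ["root"] * 7 + ["seed"] * 3 +
             ["stem/bark"] * 3 + ["bulb"] * 2 + ["flower"] * 2 +
             ["rhizome"] * 1 + ["whole plant"] * 1)

counts = Counter(citations)
total = sum(counts.values())
for part, m in counts.most_common():
    print(f"{part:12s} {100 * m / total:5.1f}%")
```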
Ethnomedicinal survey A total of 27 plants were described as being used for the treatment of diabetes; the plant forms include climbers, herbs, shrubs and trees. The plant parts most used from the identified plants include the leaves, fruits and roots [Table 1]. Prominent among the plant species mentioned for the treatment of diabetes mellitus are Vernonia amygdalina and Ocimum gratissimum. It was observed that most recipes were made from a combination of different parts of more than one plant species, mostly fruits and leaves, while some were made from a single plant part. The preferred solvents in most preparations were water, soft drink, local gin and liquid from fermented maize (Table 2). Oral administration was the only mode of administration of the herbal treatments for diabetes in the study areas in Ibadan; similarly, the preferred methods of preparation were decoction, squeezing, boiling (in water), soaking, grinding/pounding, drying and pulverization into powder (Table 3). DISCUSSION Diabetes mellitus is a heterogeneous group of disorders characterized by abnormalities in carbohydrate, protein and lipid metabolism. The effects of uncontrolled diabetes include inability to see clearly, recurrent boils on the skin, leg ulcers that fail to heal, frequent urination, weight loss, inordinate appetite, mental depression, progressive weakness, thirst and dry tongue. 19 In Nigeria, most diabetic patients consult traditional medical practitioners (TMPs) to manage their health condition. 20 From the study, it was observed that leaves formed the most frequently used part for diabetes (40%), followed by fruits (16%), roots (16%), seeds (7%), stem/bark (7%), bulbs (5%), flowers (5%), rhizomes (2%) and the whole plant (2%). Plant leaves are an important ingredient in the traditional treatment of various diseases, occurring as a component in many herbal preparations. 21 The majority of the herbal recipes were observed to be polyherbal (i.e. combinations), while some were prepared from a single plant source. Polyherbal therapy is said to be a current pharmacological principle that has the advantage of producing maximum therapeutic efficacy with minimum side effects. 22 Polyherbal therapies contain synergistic, potentiative and agonistic/antagonistic pharmacological agents that work together in a dynamic way to produce therapeutic efficacy with minimum side effects. 23 CONCLUSION The practice of traditional medicine dates from time immemorial, and the rural population depends mostly on it. 24 In addition to documenting the traditional medicinal practices used for the treatment of diabetes in the study area, this study has provided an ethnomedicinal foundation for the pharmacological properties of notable medicinal plants and their therapeutic effects on diabetes. The study further strengthens the relationship between indigenous knowledge, ethnomedicinal practices, drug discovery and pharmacology. Considering the rich cultural traditions of plant use and the high prevalence of diabetes, more in vivo investigations should be encouraged in order to validate the antidiabetic activity of the identified plants as claimed by the traditional healers.
Figure 1: Percentage occurrence of plant parts used for diabetes treatment.
Table 3: Enumeration of antidiabetic recipes, methods of preparation and mode of administration. Columns: recipe | solvent of choice | method of preparation | mode of administration.
- Carica papaya (male) (roots), Senna podocarpa (roots), Senna alata (roots), Citrus aurantifolia (fruit juice) | lime | decoction | one tablespoon in the morning and evening.
- Perquetina nigrescens (roots), Citrus aurantifolia (fruit juice) | lime | decoction | half a glass cup in the morning and evening.
- Ocimum gratissimum (leaves), Viscum album (whole plant) | water | squeezing | a glass cup of the mixture three times daily.
Other recipes were recovered only in part: a preparation taken in small quantity with hot pap once in the morning; Ocimum gratissimum (leaf), Vernonia amygdalina (leaf), Azadirachta indica (leaf), Picralima nitida (seeds), Allium sativum (rhizomes), Allium ascalonicum (bulbs) with a little potash; and Citrullus lanatus (fruit), Picralima nitida (seeds), Aframomum melegueta (fruit).
Efficacy and safety of combined immunotherapy and antiangiogenic therapy for advanced non-small cell lung cancer: a real-world observation study
Purpose This study was performed to investigate the efficacy and safety of combined immunotherapy and antiangiogenic therapy for advanced non-small cell lung cancer (NSCLC) in the real world. Methods Data on clinicopathological features, efficacy and adverse events (AEs) were collected retrospectively in advanced NSCLC patients who received immunotherapy combined with antiangiogenic therapy. Results A total of 85 advanced NSCLC patients were enrolled. The patients had a median progression-free survival (PFS) of 7.9 months and a median overall survival (OS) of 18.60 months. The objective response rate and disease control rate were 32.9% and 83.5%, respectively. Subgroup analysis revealed that NSCLC patients with stage IV disease (p = 0.042), brain metastasis (p = 0.016) and bone metastasis (p = 0.016) had shorter PFS. NSCLC patients with brain metastasis (p = 0.025), liver metastasis (p = 0.012), bone metastasis (p = 0.014) and EGFR mutations (p = 0.033) had shorter OS. Multivariate analysis revealed that brain metastasis (HR = 1.798, 95% CI: 1.038, 3.112, p = 0.036) and bone metastasis (HR = 1.824, 95% CI: 1.077, 3.090, p = 0.025) were independent predictive factors of PFS, and bone metastasis (HR = 2.00, 95% CI: 1.124, 3.558, p = 0.018) was an independent predictive factor of OS. In addition, patients receiving immunotherapy combined with antiangiogenic therapy as second-line therapy had longer OS than those receiving it as third- or later-line therapy (p = 0.039). Patients with EGFR mutations who received combination therapy had worse OS than those with KRAS mutations (p = 0.026). Furthermore, PD-L1 expression was associated with treatment responses in advanced NSCLC (χ2 = 22.123, p < 0.001). AEs of different grades occurred in 92.9% (79/85) of NSCLC patients, most of which were mild grade 1/2 AEs. No grade 5 fatal AEs occurred. Conclusion Immunotherapy combined with antiangiogenic therapy was an option for advanced NSCLC patients, with good safety and tolerability. Brain metastases and bone metastases were potentially independent negative predictors of PFS. Bone metastases were a potential independent negative predictor of OS. PD-L1 expression was a potential predictor of response to immunotherapy combined with antiangiogenic therapy. Introduction Lung cancer is still the malignant tumour with the highest morbidity and mortality in China and seriously threatens the life and health of Chinese people [1]. Non-small cell lung cancer (NSCLC) accounts for approximately 85% of lung cancers. In recent years, with the continuous progress of molecular biology technology, NSCLC has been increasingly identified as a highly heterogeneous disease. Targeted therapy and immunotherapy for different molecular types have greatly improved the prognosis of patients [2]. Especially for advanced NSCLC patients without targetable driver oncogenes, immune checkpoint inhibitors (ICIs) provide new therapeutic options with longer progression-free survival (PFS) and overall survival (OS) [3]. However, the overall response rate of ICI monotherapy in NSCLC is only about 20% [4]. How to identify the patients most likely to benefit from immunotherapy, and how to improve its efficacy, are hot topics in clinical research. A preclinical study revealed that tumour angiogenesis is closely related to the immune microenvironment [11].
Tumour vascular normalization and immune reprogramming form a reinforcing loop that reconditions the tumour immune microenvironment to induce durable antitumour immunity [12]. Antiangiogenic therapy can normalize part of the tumour vasculature and weaken immunosuppressive factors, thereby promoting the effect of immunotherapy. Additionally, ICIs can normalize the tumour vascular system by activating effector T cells and increasing their infiltrating and killing functions [12]. At present, a large number of clinical studies have explored the efficacy of ICIs combined with antiangiogenic therapy in a variety of tumours and have observed good results [11,13]. Impower150 was the first successful Phase III clinical study of the efficacy of immunotherapy combined with antiangiogenic therapy in NSCLC [14]. The results show that the addition of immunotherapy to antiangiogenic therapy can significantly improve patients' OS (19.5 vs. 14.7 months; hazard ratio [HR] 0.80; 95% confidence interval [CI] 0.67-0.95) [15]. Based on this, antiangiogenic therapy combined with immunotherapy (a programmed death ligand 1 [PD-L1] inhibitor) and chemotherapy has been approved by the FDA as first-line treatment for advanced NSCLC patients. The phase III ORIENT-31 study also proved the clinical efficacy of chemotherapy combined with immunotherapy (a programmed death-1 [PD-1] inhibitor) and antiangiogenic therapy [16]. However, a higher number of drug combinations is associated with a greater economic burden and a relatively higher incidence of adverse events (AEs), despite improving treatment efficacy [15,16]. Especially for elderly patients with poor general status, aggressive chemotherapy is often intolerable [17]. In 2022, ESMO presented the results of the Phase III IPSOS study of first-line atezolizumab vs. single-agent chemotherapy in patients with NSCLC who were not eligible for platinum-containing chemotherapy, showing that, compared with chemotherapy, first-line ICI treatment had an OS benefit (HR = 0.78; 95% CI: 0.6, 0.97; p = 0.028) with stable health-related quality of life and good tolerance [18]. Chemotherapy-free regimens are increasingly popular. At present, a number of clinical studies of chemotherapy-free regimens have been carried out in NSCLC patients, showing promising clinical significance [19]. Therefore, we retrospectively analysed 85 patients with advanced NSCLC who received ICIs combined with antiangiogenic drugs and evaluated the efficacy and safety of this chemotherapy-free combination regimen in the real world, to provide more options and evidence for the treatment of advanced NSCLC patients. Patients Patients were enrolled according to the following inclusion criteria: (1) patients were pathologically diagnosed with advanced or metastatic NSCLC at the First Affiliated Hospital of Zhengzhou University; (2) ICIs combined with antiangiogenic therapy were used during treatment, regardless of treatment line, from March 1, 2019, to September 30, 2021; (3) there were measurable lesions according to the Response Evaluation Criteria in Solid Tumours (RECIST), version 1.1 [20]; and (4) all patients agreed to participate in the study and signed informed consent. Patients were excluded for the following reasons: (1) other malignant tumours that had not been cured within the previous five years; (2) chemotherapy combined with ICIs and antiangiogenic therapy.
Treatment The ICIs included pembrolizumab, camrelizumab, sintilimab, tislelizumab and toripalimab. Patients were treated with pembrolizumab, camrelizumab, sintilimab or tislelizumab at a dose of 200 mg every three weeks. Toripalimab was administered at a dose of 240 mg every three weeks. The antiangiogenic drugs included bevacizumab, anlotinib and apatinib. Bevacizumab was administered at a dose of 15 mg/kg every three weeks. Anlotinib was administered at a dose of 12, 10 or 8 mg, depending on patient tolerance, for two weeks on and one week off. Apatinib was administered at a dose of 250 mg daily. Efficacy and safety The assessment of treatment efficacy was based on RECIST version 1.1 [20]. The tumour responses of target lesions were divided into complete response (CR), partial response (PR), stable disease (SD) and progressive disease (PD). Objective response rate (ORR) = (CR + PR)/total number of enrolled cases; disease control rate (DCR) = (CR + PR + SD)/total number of enrolled cases. PFS was defined as the time from the start of combination therapy to PD or death from any cause. OS was defined as the period from the start of combination therapy until death from any cause or the last follow-up. The deadline for follow-up was August 31, 2022. AEs were evaluated and recorded according to the Common Terminology Criteria for Adverse Events (CTCAE) v5.0. Statistical analysis Survival curves and median PFS and OS were generated using the Kaplan-Meier method. Risk factors for subgroups were calculated using the Cox proportional hazards regression model. Multivariate analyses were based on the Cox proportional hazards regression model. Clinical treatment responses were analysed using χ2 tests. All statistical analyses were performed using SPSS 26.0 statistical software, and p < 0.05 was considered statistically significant. Treatment responses Of the 85 NSCLC patients treated with ICIs combined with antiangiogenic therapy, 28 achieved PR, 43 achieved SD, and 14 had PD. The ORR was 32.9%, and the DCR was 83.5% (Table 5). PD-L1 expression was associated with treatment responses in advanced NSCLC patients (p < 0.001), both in adenocarcinoma (p = 0.007) and in squamous cell carcinoma (p = 0.049). Notably, for each pair of patients with ALK fusion, with BRAF V600E mutation, and with HER-2 exon 20 insertion mutation, one patient obtained SD and the other had PD. Discussion How to improve the efficacy of immunotherapy is a hot topic in clinical research. Preclinical studies have confirmed that ICIs combined with antiangiogenic therapy achieve a synergistic ("1 + 1 > 2") antitumour effect. An increasing number of clinical studies have begun to explore the application prospects of the chemotherapy-free mode in advanced NSCLC [19]. Domestic and international clinical studies revealed that the median PFS (mPFS) of immunotherapy combined with antiangiogenic therapy in the subsequent-line treatment of advanced NSCLC was approximately 6 months, and the ORR and DCR were approximately 30% and 80%, respectively [21][22][23][24]. Indirect comparison with the previous literature suggests that combination therapy outperforms ICI monotherapy: the mPFS of ICI monotherapy in subsequent-line therapy in advanced NSCLC was less than 4 months, and the response rate was only approximately 20% [25][26][27]. In our present study, we retrospectively analysed the efficacy and safety of 85 NSCLC patients who received ICIs combined with antiangiogenic therapy.
A total of 94.1% (80/85) of NSCLC patients received second-line or later treatment; the mPFS was 7.5 months, and the ORR and DCR were 31.25% and 82.5%, respectively. The research data of our centre were thus basically consistent with the real-world data reported in the past, and slightly better. (Table abbreviations: NSCLC, non-small cell lung cancer; CR, complete response; PR, partial response; SD, stable disease; PD, progressive disease; EGFR, epidermal growth factor receptor, including classical and non-classical EGFR mutations; KRAS, Kirsten rat sarcoma viral oncogene homologue; PD-L1, programmed death ligand 1, with expression graded as negative, 0%; low, 1%-49%; high, ≥50%; * significant p values.) A two-centre, retrospective real-world study revealed that 57 previously treated advanced NSCLC patients who received any PD-1 antibody combined with antiangiogenic drugs exhibited a PFS of 4.2 months and a DCR of 63.2% [28]. A retrospective analysis of 67 advanced NSCLC patients who had previously received a PD-1 antibody in combination with anlotinib showed that 19 patients had PR (28.4%), 39 had SD (58.2%) and 9 had PD (13.4%); the mPFS was 6.9 months, and the OS was 14.5 months. That study also found that the benefit of anti-PD-1 plus anlotinib was observed in patients with EGFR mutation positivity, liver metastases, and brain metastases [29]. In our present study, NSCLC patients with brain metastasis and bone metastasis had shorter PFS and OS, and patients with liver metastasis and EGFR mutations had shorter OS. Although previous studies have found that patients with EGFR mutations do not respond well to immunotherapy [25,27,30,31], in this study the combination of ICIs and antiangiogenic therapy still achieved a PFS of 4.4 months and an OS of 12.7 months in this group. Among the 17 patients with EGFR mutations, 2 achieved PR and 10 achieved SD, for a DCR of 70.6%. This suggests that immunotherapy combined with antiangiogenic therapy can be an option for patients with EGFR mutations after drug resistance, as shown by Impower 150 [15] and ORIENT-31 [16]. KRAS mutations have been linked to better immunotherapy responses in lung cancer [32][33][34]. Our study showed that patients with KRAS mutations had longer PFS (9.4 months) and OS (24.4 months). In addition, it was found that patients with high PD-L1 expression were more likely to obtain PR with the combination regimen, a result consistent with the previous conclusion that PD-L1 expression predicts the efficacy of immunotherapy [27,35]. However, although both the PD-L1 high-expression group and the PD-L1-positive group had numerically longer PFS and OS, the differences were not statistically significant. Therefore, whether PD-L1 expression status can be used as a predictor of the efficacy of combination treatment modes needs to be further confirmed by large-sample, prospective clinical studies. Advanced NSCLC patients who received ICIs combined with antiangiogenic therapy in third- or later-line treatment achieved a PFS of 5.5 months and an OS of 14.7 months. These data are comparable with the results of a 30-patient retrospective study by Xu et al., in which the mPFS was 5.0 months and the mOS was 14.3 months. Similarly, that study also found that patients with higher PD-L1 expression had longer PFS, but the difference was not statistically significant [36]. Another group performed a cohort study of the efficacy and safety of ICIs plus anlotinib versus ICIs alone in the treatment of advanced NSCLC in the real world.
The results revealed that the mPFS of patients in the ICI plus anlotinib group was much longer than that of patients in the ICI monotherapy group (6.37 vs. 3.90 months; p < 0.001), and that combining ICIs with anlotinib could improve the outcomes of patients with bone metastasis [37]. These real-world results suggest that ICIs combined with antiangiogenic therapy are a good option for advanced NSCLC patients who have failed first-line therapy. The efficacy of immunotherapy combined with antiangiogenic therapy as the first-line treatment for NSCLC patients has also been explored. At the 2019 World Conference on Lung Cancer, a study from Han et al. reported the efficacy of sintilimab combined with anlotinib as the first-line treatment for stage IV NSCLC patients with negative driver genes: 16/22 patients achieved PR, for an ORR of 72.7% [38]. Based on these data, sintilimab combined with anlotinib showed a clear advantage in the first-line treatment of advanced NSCLC. In our present study, 5 NSCLC patients received ICIs combined with antiangiogenic therapy as first-line treatment; the mPFS was 8.0 months and the mOS was not yet mature, results slightly worse than those of Han et al. The reason may be that the general status of the included population was relatively poor, and ICIs combined with antiangiogenic therapy was used as a compromise regimen; in addition, the number of cases included was so small that the strength of the evidence was limited. Currently, a phase III clinical study (NCT04964479) comparing TQB-2450 (a humanized monoclonal antibody against PD-L1) combined with anlotinib versus pembrolizumab as first-line treatment for advanced NSCLC patients with PD-L1 ≥ 1% is ongoing. It is expected that this study will provide good evidence for anlotinib combined with immunotherapy in the first-line treatment of advanced NSCLC. In summary, immunotherapy combined with antiangiogenic therapy has shown good antitumour effects in both first-line and later-line settings. However, it is difficult to determine in which line it performs best. An indirect comparison with the previous literature indicated that first-line single-agent immunotherapy was superior to second-line immunotherapy [27,35,39]. The KEYNOTE-001 study revealed that immunotherapy yielded a longer mOS in untreated patients than in previously treated patients (22.3 vs. 10.5 months) [40]. In addition, the PFS2 analysis of the KEYNOTE-024 study [39] also showed that the earlier immunotherapy was used, the better the efficacy. However, whether combined immunotherapy must also be given as early as possible remains to be further explored.
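For readers who wish to reproduce this kind of retrospective analysis outside SPSS, the following Python sketch mirrors the statistical pipeline described in the Methods: ORR/DCR from response counts, a Kaplan-Meier estimate of median PFS, a Cox proportional hazards fit, and a chi-squared test of PD-L1 expression against response. The data frame, column names and contingency counts are illustrative assumptions, not the study's records.

```python
# Illustrative reanalysis sketch (lifelines + scipy); all numbers below are
# made-up stand-ins for the study's dataset, and column names are ours.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from scipy.stats import chi2_contingency

# Response counts as defined in the Methods: ORR = (CR+PR)/N, DCR = (CR+PR+SD)/N.
cr, pr, sd, n = 0, 28, 43, 85
print(f"ORR = {(cr + pr) / n:.1%}, DCR = {(cr + pr + sd) / n:.1%}")

# Toy survival data: PFS in months, progression events, two covariates.
df = pd.DataFrame({
    "pfs_months": [7.9, 3.2, 12.5, 5.1, 9.8, 2.4, 15.0, 6.7, 4.4, 11.2],
    "progressed": [1, 1, 0, 1, 1, 1, 0, 1, 1, 0],
    "brain_mets": [0, 1, 0, 1, 0, 1, 0, 0, 1, 0],
    "bone_mets":  [0, 1, 0, 0, 1, 1, 0, 1, 0, 0],
})

kmf = KaplanMeierFitter()
kmf.fit(df["pfs_months"], event_observed=df["progressed"])
print("median PFS (months):", kmf.median_survival_time_)

# A penalized Cox fit keeps the toy-sized dataset numerically stable.
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="pfs_months", event_col="progressed")
cph.print_summary()  # hazard ratios, 95% CIs, p values per covariate

# Chi-squared association between PD-L1 level and response (toy counts):
# rows = PD-L1 negative/low/high, columns = responders/non-responders.
chi2, p, dof, _ = chi2_contingency([[2, 20], [10, 25], [16, 12]])
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.4f}")
```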
In addition, NSCLC patients with high PD-L1 expression were more likely to respond to combination therapy and had longer PFS and OS. Since this was a retrospective study based on a small sample, sampling differences may affect the results. In addition, the specific ICIs and antiangiogenic drugs were not standardized in this study, and there could be differences in efficacy between different drugs. Finally, this study did not exclude patients who had previously used ICIs or antiangiogenic drugs alone, and whether cross-line use has an effect on the efficacy of immunotherapy combined with antiangiogenic therapy remains to be further explored. In the future, more exploration is needed into how to identify the patients most likely to benefit. Additionally, more phase III clinical studies are needed to verify the feasibility of clinical application and provide survival benefits to more patients.
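A hedged sketch of the kind of between-group survival comparison reported above (EGFR versus KRAS mutations), using the lifelines log-rank test; the survival times below are invented placeholders, not the study's data.

```python
# Toy log-rank comparison of OS between mutation-defined groups; the
# survival times and event flags are invented placeholders.
from lifelines.statistics import logrank_test

os_egfr = [4.1, 12.7, 8.3, 6.9, 15.2, 10.4]   # months
ev_egfr = [1, 1, 1, 1, 0, 1]                  # 1 = death observed
os_kras = [24.4, 18.9, 30.1, 22.5, 27.0]
ev_kras = [1, 0, 1, 1, 0]

res = logrank_test(os_egfr, os_kras,
                   event_observed_A=ev_egfr, event_observed_B=ev_kras)
print(f"log-rank p = {res.p_value:.3f}")
```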
Aspects of One-Dimensional Coulomb Gases
In this short review, we discuss recent advances in exact solutions of models based on a one-dimensional (1D) Coulomb gas by means of field-theoretic functional integral methods. The exact solutions can be used to assess the accuracy of various approximations, such as the weak coupling Poisson-Boltzmann theory as well as the strong coupling theory of Coulomb gases. We consider three different 1D models: the Coulomb fluid configuration in the case of the soap film model, consisting of positively and negatively charged particles between adsorbing boundaries; counterions between two charged surfaces; and an ionic liquid lattice capacitor with positively and negatively charged particles on a lattice between one positive and one negative bounding surface. I. INTRODUCTION Field-theoretic functional integral methods can be used to study exact solutions of models based on a one-dimensional (1D) Coulomb gas with charged boundaries. In 1D, exactly solvable Coulomb gas models can then be used as a testbed for assessing the accuracy of various approximations: the weak coupling expansion, Poisson-Boltzmann/mean-field equations, and the strong coupling expansion [1]. We review these approximations in the context of three 1D Coulomb gas systems and remark on whether or not they fail to predict important effects present in the exact solution. Some physical properties of the 1D system can be applicable, at least qualitatively, in dimensions d > 1 and can help us to understand whether the corresponding approximation methods are reliable. In particular, our analysis gives insight into systems such as arrays of charged smectic layers or lipid multilayers, and ionic liquids near charged interfaces, treated as effectively 1D systems. An important aspect of these endeavours is that we can test and develop the analytical and especially the numerical methods, which can then be tentatively applied also for d > 1. II. THEORETICAL METHODS The method of functional integrals applied to Coulomb gas systems has been developed over many years [2][3][4][5]. In any dimension this approach allows both strong and weak coupling to be studied explicitly. Specifically in 1D, the functional integral representation can be applied, using a variety of methods, to obtain exact solutions to a number of models; these are generally characterized by a Coulomb gas of ions of possibly non-zero size, confined between boundaries whose potential or charge is determined either dynamically or as an external field condition. Three varieties of a 1D Coulomb gas model discussed below are presented in Fig. 1. The functional integral representation of the Coulomb gas partition function allows us to formulate two effective solution techniques. The Schrödinger kernel technique is applicable in all dimensions and has been used to analyze a number of models [6]. In 1D it corresponds to solving the Schrödinger equation [2], which is in principle exact. In d > 1 the Schrödinger kernel field-theoretic representation of the partition function is derived, often using a Hubbard-Stratonovich transformation, and is analyzed by perturbative and graphical methods. For d > 1 this approach does require that a preferred coordinate can be designated as the Euclidean time, and so the approach is limited to symmetrically layered systems [6]. The transfer matrix and Fourier methods technique is an alternative to the Schrödinger kernel approach. Though it is more general, it is only practical in 1D.
Its implementation exploits periodicity in the (imaginary) electrostatic potential φ, which also restricts its general applicability. Examples of this technique in 1D are the counterion gas and lattice ionic liquids. The actual formulation of the functional integral method relies on the action for the full QED of a general system, which is then reduced to the electrostatic action proper. The relevant electrostatic Lagrangian is then

$$L[\psi] = \frac{\varepsilon}{2}\int dx\,\big(\partial_x\psi(x)\big)^2 - i\sum_i q_i\,\psi(x_i) - i\int dx\,\rho_e(x)\,\psi(x). \qquad (1)$$

Here $q_i$ is the charge of the i-th ion at position $x_i$ and $\rho_e(x)$ is the external charge distribution. The partition function is obtained by tracing the Boltzmann weight of the above Lagrangian over the electrostatic field $[\psi]$. Tracing furthermore over the ion positions, changing the axis of functional integration $\psi = i\phi$ and introducing the fugacity $\mu_- = \mu_+ \equiv \mu$ by the Gibbs technique, the partition function for monovalent ions (with $q_i = \pm 1$) assumes the form

$$\Xi = \int \mathcal{D}\phi\; e^{-S[\phi]},$$

with the "field action"

$$S[\phi] = \int dx\,\Big[\frac{\beta\varepsilon}{2}\big(\partial_x\phi\big)^2 - \mu\big(e^{i\beta e\phi} + e^{-i\beta e\phi}\big)\Big], \qquad (2)$$

where $\beta = 1/(k_B T)$. The charge density is then given by

$$\rho = \mu\,\big\langle e^{i\beta e\phi} + e^{-i\beta e\phi}\big\rangle, \qquad (3)$$

where $\langle\cdots\rangle$ stands for the φ average. III. BILAYER SOAP FILM IN IONIC SOLUTION Because the surfactant molecules preferentially migrate to the surfaces, charging them up dynamically, the configuration of the bilayer soap film consists of two planar (surfactant) surfaces separated by a distance L, confining a solution of a symmetric electrolyte. We calculate the surface charge, the density profile of the electrolyte near the interfaces, and the disjoining pressure P as a function of the thickness L of the soap film, defined as

$$P = k_B T\,\frac{\partial \ln J}{\partial L} - P_{\text{bulk}},$$

i.e. the difference between the film and bulk pressures. Here J is the grand-canonical partition function per unit area. An important phenomenon to predict is the first-order collapse transition of the film to a Newton black film, expected as the electrostatic coupling in the film is increased. We model this system by a Coulomb gas confined to z ∈ [0, L], schematically presented in Fig. 1 (top), with potentials on the boundaries that account for the hydrophilic nature of the head group of the surfactant molecule. The Debye length is given by $l_D = \sqrt{\varepsilon k_B T/2\rho e^2}$, and the Bjerrum length in 1D by $l_B = 2k_B T\varepsilon/e^2$. Perturbation theory is an expansion in the coupling parameter $g = l_D/l_B$. We use the partition function described earlier, but now include a surface free energy f(φ) to model the surface potentials, which are attractive for the negatively charged hydrophilic surfactant head groups, whose surface density is denoted by ρ_−(φ); a parameter λ controls the potential strength. To simplify the notation we scale the variables: φ → eβφ, x → x/l_B. The charge density operators for ± charges are then given by the Boltzmann weights $\rho_\pm(\phi) = e^{\pm i\phi}$. The 1D partition function then becomes a matrix element over the boundary fields,

$$J = \int d\phi_0\, d\phi_L\; e^{-f(\phi_0) - f(\phi_L)}\, K(\phi_0, \phi_L; L),$$

where $K(\phi_0, \phi_L; L)$ is the Schrödinger kernel for evolution in the "Euclidean time" x. It satisfies the Schrödinger (Feynman-Kac) equation $-\partial_x K = HK$, where H contains a kinetic term in φ and a cosine potential, with $Z(g) = 1/\langle\cos\phi\rangle$. The corresponding eigenvalue equation is the Mathieu equation, and the harmonic part of the cosine term gives the Debye length in units of $l_B$. $Z(g) = 2\mu/\rho$ is the renormalization that relates the fugacity to the observable charge density and is given by Eq. (3). We now consider the solution in various limiting regimes. A. Large L: bulk pressure. Strong coupling (SC), g → ∞: the Mathieu ground state dominates in this regime, and so we can use Schrödinger perturbation theory for the ground-state energy of H.
The result, derived originally in [2], has as its leading term the free-gas pressure at half the total density, $\beta P_{\text{bulk}} \simeq \rho/2 + \dots$, which therefore signals the onset of the dimerization process, i.e., the Bjerrum pair formation of positive and negative mobile charges. Weak coupling (WC), g → 0: Feynman perturbation theory is applicable in this case, and so we use the Feynman diagram expansion. The leading term is now the full free-gas term, $\beta P_{\text{bulk}} \simeq \rho + \dots$, and the next correction is the familiar Debye-Hückel result in its 1D variant. Note that there is no O(g²) term; this is cancelled by the counter term in Z(g). The strong and weak coupling dependencies of the bulk pressure P_bulk on g compare well with the exact solution of the problem. Both approximations are accurate across a wide range of g in their regimes of validity. More details can be found in [4]. B. Finite L: exact methods. For finite L we expand the kernel $K(\phi_0, \phi_L; L)$ over periodic eigenfunctions of the Mathieu equation. We can then use a numerical approach for the eigenfunctions/eigenenergies, which gives an exact solution for all L. This method is described fully in [4] and we do not delve into the details here. It gives the same answers as the Fourier approach that we describe below. The Fourier method for obtaining an exact solution to problems in 1D is more general than the Schrödinger approach, since it works also when the Hamiltonian is not hermitian, which is the case for the counterion gas considered in the next section. It also forms the basis for the transfer matrix method. The theory is periodic under φ → φ + 2π, and we can therefore expand the kernel in Fourier modes,

$$K(\phi_0, \phi; x) = \sum_n b_n(x)\, e^{in\phi},$$

where the coefficients $b_n(x)$ obey an evolution equation in x that couples each $b_n$ to its neighbours $b_{n\pm 1}$, since the cosine potential shifts the Fourier index by ±1. This is the Fourier version of the Schrödinger equation, but it can be derived generally from the convolution property of the Schrödinger kernel. The partition function can then be obtained from the coefficients $b_n(L)$. The exact solution for the disjoining pressure as a function of the separation L, for different values of the surface potential strength parameter λ, clearly predicts a collapse transition to a Newton black film that cannot be accounted for by the mean-field theory, which we address next. C. Classical or mean-field (MF) theory. Standard variational methods applied to the expression for the partition function give the classical MF equation: the Poisson-Boltzmann (PB) equation for $\phi_{cl}(x)$, as the saddle-point equation of the corresponding field theory. In this case the disjoining pressure P is given by the value of the ion density at the midpoint x = L/2 between the bounding surfaces. The MF theory predicts that universally P > 0, contrary to our exact result and also to experiment; it does not predict any collapse transition, which is thus evidently a consequence of non-MF correlation effects and is intrinsically a fluctuation phenomenon. IV. COUNTERIONS BETWEEN CHARGED SURFACES The 1D model here is a Coulomb gas of counterions confined between two oppositely charged surfaces; the system is overall neutral. We compare exact results with strong and weak coupling calculations, which are the same as in a 3D system. More details can be found in [7]. The system is shown in Fig. 1 (middle) and consists of N counterions, each of valency q, between surfaces with charges σ_1 and σ_2, respectively. We define ζ = σ_2/σ_1, with −1 < ζ ≤ 1, and define α = 1/(1 + ζ). The 1D Bjerrum length is $l_B = 2k_B T\varepsilon/e^2$, and the Gouy-Chapman length is $\mu \equiv \mu_1 = l_B e/(q|\sigma_1|)$, where we have chosen σ_1 to be non-zero; correspondingly, $\mu_2 = \mu/|\zeta|$.
The electrostatic coupling constant, g, is then fixed by the number of counterions N, where N → ∞ corresponds to the MF/PB theory and N → 1 to the SC theory. The partition function is again derived in the functional integral representation. A. Exact results. In this system H is not hermitian, because the counterions are, by definition, of one charge only. We therefore analyze the model using the Fourier method, exploiting the periodicity of H under φ → φ + 2π to write the pressure in a form that can be evaluated exactly. The second term in the resulting expression is just the counterion density at the boundary of the system, so this form of the pressure is a clear example of the contact value theorem; it connects the pressure with the value of the particle density at the confining wall of the system. B. Weak coupling. We consider the WC expansion g → 0, which at lowest order is equivalent to the MF/PB theory. In the d > 1 case, the MF theory treats the potential field φ(x) as constant in the directions transverse to the normal of the bounding interfaces, and so the results are independent of the dimensionality. The leading contribution arises from the saddle-point configuration $\phi_0(x) = i\psi_0(x)$ with $\psi_0$ real. The PB equation for $\psi_0(x)$, with boundary conditions fixed by the surface charges, then determines the leading PB contribution to the disjoining pressure P through the density of counterions between the boundaries, $\rho_0(x)$, given by the standard Boltzmann form $\rho_0(x) = C e^{-\psi_0(x)}$, where C is a normalization constant. This furthermore implies that the MF/PB disjoining pressure P is obtained as follows. When the pressure is repulsive (P > 0), we have $P = \mu^2\sigma_1^2\Gamma^2/2$, where Γ satisfies the trigonometric counterpart of the relation below; and when the pressure is attractive (P < 0), which may be the case within the MF/PB theory only for ζ < 0, we have $P = -\mu^2\sigma_1^2\Gamma^2/2$, where Γ is now given as a solution of

$$\coth(\Gamma L) = -\,\frac{\zeta + \mu^2\Gamma^2}{\mu\Gamma\,(1+\zeta)}.$$

C. Strong coupling. The strong coupling limit is formally identical to the one-particle limit [1]. In the present case it is easily evaluated from the partition function for a single counterion in the system; this explicit one-particle form leads directly to the disjoining pressure. The range of validity of this limiting expression is of course set by the number of counterions in the system: as this number decreases towards one, N → 1, the limiting expression for the disjoining pressure becomes exact. D. Comparison. Both the weak and strong coupling approximations are independent of the dimension d, and the comparison with the exact results can therefore test their validity. For symmetric surface charges (ζ = 1) the PB/MF pressure is positive (repulsive) for all intersurface separations, whereas the SC expansion and the exact result for N = 1 predict attraction at large separations; this distinction holds for 0 < ζ ≤ 1. For the asymmetric configuration with ζ < 0, there is little difference between the different approaches: on trivial grounds there is attraction at large separations, but there is repulsion for sufficiently small separations; see Fig. 2, where a comparison is made with Monte Carlo (MC) simulations at different numbers of counterions N [7]. V. IONIC LIQUID LATTICE CAPACITOR In the models above the ions were taken to be point-like. Here we address the question of the changes wrought by their finite size. In this case the system consists of a 1D lattice of M sites with spacing a, with the i-th site, 0 ≤ i < M, occupied by an ion of charge $qS_i$, with $S_i \in \{-1, 0, 1\}$; see Fig. 1 (bottom).
Within this model the finite ion size is ∼ a, which is crucial to the phenomena observed in experiments on confined ionic liquids. The configuration described is that of a 1D ionic liquid capacitor. The external fields are imposed either by fixing the charges of the boundaries at i = −1 and i = M to be ±qQ, respectively, or by imposing a fixed voltage/potential difference, Δv, across the capacitor. More details can be found in [8]. The electrostatic Hamiltonian in this case is expressed through the spin-like variable $S_i = 0, \pm 1$; after a Hubbard-Stratonovich transformation it yields a field action for the lattice potentials $\phi_i$, which includes the boundary charges ±qQ at sites −1 and M. The electrostatic potential is defined as $V = -i\phi/(\beta q)$. In the limit a → 0, with q/a fixed, the MF equations obtained from the saddle point of the above field action reduce to those of Kornyshev [9] and Borukhov et al. [10]. For non-zero a the action is not positive definite for μ ≥ 0.5, so we seem to have a sign problem and certainly cannot use the Schrödinger approach a priori. Nevertheless, in 1D the partition function can be computed exactly by using the transfer matrix approach, together with the Fourier method described earlier. This can be seen as follows: write $y_i = \phi_i$ and define the transfer kernel K(y, y′) through its action on functions,

$$Kf(y) = \int_{-\infty}^{\infty} dy'\, K(y, y')\, f(y').$$

The free energy for the fixed-Q ensemble, $\Omega_Q$, then follows from repeated application of the kernel. The conjugate free energy for the fixed-Δv ensemble, $\Omega_{\Delta v}$, follows from a Legendre transform,

$$e^{-\beta\Omega_{\Delta v}} = \int dQ\; e^{-\Delta v\, Q - \beta\Omega_Q},$$

while the capacitance $C_{\Delta v}$ is obtained from the first derivative of $\langle Q\rangle_{\Delta v}$ with respect to Δv, and can thus be calculated directly from the partition function. A. Results. The transfer matrix and Fourier approach can be formulated so as to evaluate the free energy explicitly; details of this procedure can be found in Ref. [8]. The enthalpy $G_M = \Omega_M + M P_{\text{bulk}}$, the disjoining pressure $P = G_M - G_{M+1}$, and the capacitance $C_{\Delta v}$ can all be calculated as functions of μ, Q and Δv. We show explicitly only the capacitance results, $C_{\Delta v}$, as a function of Δv in Fig. 3, both for large μ and for small μ. For large μ the curve shows the typical "bell" shape, in contrast to the curve for smaller μ, which shows the non-monotonic "camel" shape, so that $C_{\Delta v}$ has a minimum at the point of zero charge, confirming the Fermi MF results of Kornyshev [9]. For smaller γ (increasing T) the periodic non-monotonicity, both for large μ and for small μ, disappears and the solution approaches the Fermi MF result of Kornyshev [9]. It is interesting that the exact solution oscillates around the Kornyshev solution with an ever-increasing amplitude, but the system nevertheless always remains thermodynamically stable, as can be straightforwardly ascertained. VI. LESSONS We have demonstrated that in 1D one can use the Schrödinger approach for continuum models of Coulomb fluids, but that for discrete models a more general approach is needed, one which exploits the transfer matrix and the periodicity of the field to use Fourier methods. We tested the PB/MF and the strong coupling limiting expressions and demonstrated that, while the exact analytic results clearly support the two limiting analyses in their respective regimes, they need correcting in between. We also confirmed that the MF theory does not capture the important effects which are due to correlations: either the attractive intersurface forces in the case of a counterion-only system, or the non-monotonic periodic variation of the capacitance in the confined ionic liquid case.
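The 1D constructions reviewed above are straightforward to prototype numerically. Three minimal Python sketches follow; they are our illustrations, with assumed coefficients and invented parameter values, not reproductions of the original computations. The first diagonalizes a Mathieu-type Hamiltonian in a truncated Fourier basis, the structure underlying the exact finite-L method of section III; the precise coefficients in the paper's scaled units are an assumption, but the matrix structure (a diagonal kinetic term, with cos φ coupling n to n ± 1) is exactly what the Fourier method exploits.

```python
# Sketch of the Fourier-basis diagonalization behind the exact finite-L
# method of section III. H = -(g/2) d^2/dphi^2 - lam*cos(phi) is an assumed
# Mathieu-type form; the paper's scaled coefficients may differ.
import numpy as np

def mathieu_spectrum(g, lam, nmax=50):
    """Eigenvalues of H on 2*pi-periodic functions, Fourier basis e^{i n phi}."""
    n = np.arange(-nmax, nmax + 1)
    H = np.diag(0.5 * g * n.astype(float) ** 2)
    # cos(phi) shifts the Fourier index by +/-1, giving off-diagonal couplings.
    off = -0.5 * lam * np.ones(2 * nmax)
    H += np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)

print("lowest eigenvalues:", mathieu_spectrum(g=1.0, lam=1.0)[:4])
```

The second solves the PB transcendental relation of section IV B for the attractive branch; ζ, µ, L and σ_1 are illustrative values only.

```python
# Solve the quoted PB transcendental relation for the attractive branch
# (zeta < 0) and evaluate P = -mu^2 sigma1^2 Gamma^2 / 2.
import numpy as np
from scipy.optimize import brentq

zeta, mu, L, sigma1 = -0.5, 1.0, 2.0, 1.0

def f(G):
    lhs = 1.0 / np.tanh(G * L)                       # coth(Gamma L)
    rhs = -(zeta + mu**2 * G**2) / (mu * G * (1 + zeta))
    return lhs - rhs

gamma = brentq(f, 1e-6, 10.0)
P = -0.5 * mu**2 * sigma1**2 * gamma**2              # attractive pressure
print(f"Gamma = {gamma:.4f}, P = {P:.4f}")
```

The third mimics the fixed-voltage construction of section V: it Legendre-transforms a model fixed-charge free energy Ω_Q and differentiates ⟨Q⟩ with respect to Δv to obtain a differential capacitance. The quartic Ω_Q is a stand-in for the exact lattice free energy of Ref. [8], and the sign convention for Δv is assumed.

```python
# Toy fixed-voltage ensemble: Legendre-transform a model Omega_Q and
# differentiate <Q> with respect to dv, following the transform quoted above.
import numpy as np

beta = 1.0
Q = np.linspace(-20.0, 20.0, 2001)
Omega_Q = 0.05 * Q**2 + 0.01 * Q**4 / (1.0 + Q**2)   # model free energy

def mean_Q(dv):
    w = np.exp(-dv * Q - beta * Omega_Q)
    return np.trapz(Q * w, Q) / np.trapz(w, Q)

dvs = np.linspace(-1.0, 1.0, 201)
Qbar = np.array([mean_Q(v) for v in dvs])
C = np.gradient(Qbar, dvs)            # differential capacitance d<Q>/d(dv)
print("capacitance at dv = 0:", C[len(dvs) // 2])
```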
Theory of phonon-assisted "forbidden" optical transitions in spin-gapped systems
We consider the absorption of light with emission of one S_tot = 1 magnetic excitation in systems with a spin gap induced by quantum fluctuations. We argue that an electric dipole transition is allowed on the condition that a virtual phonon instantaneously breaks the inversion symmetry. We derive an effective operator for the transition and argue that the proposed theory explains the polarized experiments in CuGeO3 and SrCu2(BO3)2. I. INTRODUCTION Techniques of using the interactions between light and spin waves to study the excitations of magnetic solids were developed shortly after the invention of the laser. Single-magnon scattering of photons was first predicted from the Zeeman coupling of the magnetic field of the photon to the magnetic spins, leading to magnetic dipole transitions. 1 Later it was pointed out 2,3 that the electric field of the electromagnetic radiation could also couple to the spin, by an indirect process in which spin-orbit interactions act on electronic states excited virtually by electric-dipole transitions. Experiments in antiferromagnets 4 showed that this latter mechanism dominated the magnetic-dipole transitions to single-magnon excitations. The Raman spectrum also revealed relatively strong two-magnon scattering. This was argued 4 to be due to an independent mechanism: excited-state exchange interactions. The same mechanism, by which the magnetic exchange interaction is modified by electric-dipole excitation of the magnetic electrons, was advanced 5 to explain far-infrared absorption. A variant is to replace the virtual electronic excitation by a virtual lattice distortion that modifies the magnetic exchange. 6 The intensities of such transitions can be calculated by writing effective operators for absorption or Raman scattering in terms of the spin operators. 4,7 This theory is generally considered to give a good account of inelastic light scattering and optical absorption. For an isotropic system the effective operator conserves total spin, and what is commonly called the "Fleury-Loudon" theory is used to analyse the spectroscopy of spin-conserving transitions. Optical techniques are now well established as probes of magnetic excitations, whether by Raman scattering, i.e. inelastic scattering at optical frequencies, electron spin resonance (ESR), i.e. resonant absorption of electromagnetic radiation with a swept magnetic field, or by transmission measurements of infrared radiation. The techniques have been further enhanced by the increasing flexibility of light sources and detectors in the far-infrared region that is useful to much of magnetism. ESR studies using sources derived from far-infrared lasers, rather than the traditional cavities, are now available up to THz frequencies and may be made in large static or pulsed magnetic fields. 8 Transmission studies in the far-infrared range have the advantage of allowing measurement in zero external magnetic field. While restricted to small momentum transfer, q ≈ 0, compared to neutron inelastic scattering, the optical techniques have the advantage of much higher frequency resolution. The possibility of polarising the electromagnetic radiation means that different transition mechanisms may be distinguished. Optical measurements are particularly useful for precise measurements of the spin-gap properties in strongly correlated systems and spin-liquid systems with magnetic singlet ground states.
Because of the frequencies now available, one can apply an electromagnetic source with sufficient energy to excite the first triplet S_tot = 1 excited state from the singlet S_tot = 0 ground state. Many systems of interest are highly isotropic with respect to spin rotations, and transitions between the singlet S_tot = 0 ground state of the spin liquid and the first triplet S_tot = 1 excited state would be forbidden by symmetry in the isotropic limit. Even the weaker magnetic-dipole coupling should give zero intensity, as the ground state is a spin singlet. One would then expect to see only the excited singlets, i.e. two-magnon states. Nonetheless, the "forbidden" transitions to the single-magnon states have been observed in many spin-liquid systems, ranging from the S = 1/2 quasi-one-dimensional systems CuGeO3 9,10,11,12,13,14 and NaV2O5 15,16 to the 2D system SrCu2(BO3)2 17,18 and to the spin-1 chain compound NENP. 19 Despite detailed experiments, no clear understanding of the mechanism of these transitions has emerged. It is clear that spin-orbit coupling, which breaks the conservation of total spin, must be included, as it then becomes possible a priori to have a transition to a one-magnon state. As mentioned, the photon can couple to the spin degrees of freedom in different ways, via direct magnetic dipole transitions or via indirect electric dipole transitions with spin-phonon or spin-orbit couplings. As one of the purposes of performing high-resolution spectroscopy is to resolve the weak anisotropies, it is important to distinguish between these mechanisms, i.e. to find the one which gives the strongest absorption. As in the original studies, 4 this is done by establishing, and then verifying experimentally, selection rules. For one-magnon absorption, previous estimates favored a purely electric dipole transition for NENP. 20 In the case of CuGeO3, the suggestion that a staggered field would give rise to a magnetic dipole transition 21 has been ruled out by the polarized experiments. 11 Furthermore, the first-order corrections to the Hamiltonian in the spin-orbit coupling lead to vanishing magnetic dipole intensity, owing to a lattice selection rule. 22 In the compound SrCu2(BO3)2 it has been shown experimentally that varying the direction of the electric field of the wave (while keeping the magnetic field of the wave fixed) changes the intensity of the absorption, suggesting that the transition is electric-dipole in nature. 18 One would also like to know which of the two electric dipole mechanisms applies: absorption involving solely the electronic degrees of freedom, or also the lattice degrees of freedom. In the original theory of Elliott and Loudon of light scattering by magnons, the electric dipole coupling indeed leads to the creation of one-magnon excitations. 2,4 Although such two-photon processes are not forbidden in infrared absorption, they are much smaller in intensity, since they involve the weak coupling to light at second order in perturbation theory. Alternatively, in the presence of strong spin-orbit coupling it is possible to have single-photon coupling to spin excitations, 7,23 but as this is of second order in the spin-orbit coupling, we shall assume that the linear order will dominate for these materials, which are close to isotropic. In addition, lattice symmetries such as centers of inversion between the magnetic ions may eliminate such terms, or at least reduce them further if the inversion symmetry is slightly broken, as in SrCu2(BO3)2. 24
In this paper we shall show that an effective operator of Dzyaloshinski-Moriya symmetry, 26,27 acting on the spin degrees of freedom, can be used to explain the polarized experiments in CuGeO3 and SrCu2(BO3)2:

$$H_{\text{eff}}(t) = \sum_{i,a}\sum_{\beta\gamma} A_{\beta\gamma}\, E_\beta(t)\,\big(\mathbf{S}_i\times\mathbf{S}_{i+a}\big)_\gamma. \qquad (1)$$

Here $E_\beta(t)$ is the component β of the applied electromagnetic field at time t. The indices i and a define the lattice of magnetic bonds, and the coefficients $A_{\beta\gamma}$, which will be made explicit in section II, couple the component β of the electric field to the component γ of the vector product of the spin operators. An electric dipole operator (1) can arise from an electronic mechanism, as may be the case in NENP, 20 but the centers of inversion at the middle of the Cu-Cu bonds in CuGeO3 and SrCu2(BO3)2 24 would forbid generation of the operator from purely electronic processes. A lattice distortion may, however, break the inversion symmetry instantaneously and allow terms of the form (1). We therefore consider the phonons explicitly, and in section II we derive in detail the effective transition operator, which includes an anisotropic part of the form (1). The essential physical mechanism is that the electric field excites a virtual $S_{tot} = 0$ phonon state which is coupled to the $S_{tot} = 1$ state by an anisotropic spin-phonon coupling that originates in the spin-orbit coupling. An explanation involving the modulation of static Dzyaloshinski-Moriya interactions has been put forward recently for the case of NaV2O5. 28 In that compound, however, no polarized experiments are available, and moreover it is difficult to distinguish this mechanism from a magnetic dipole transition, which turns out not to be forbidden by a lattice selection rule. 22 The mechanism we develop here is more general in that it does not require the presence of a static Dzyaloshinski-Moriya interaction. It only needs the instantaneous breaking of the inversion center, which is assured by the appropriate phonons. This allows us to consider the operator (1) on the strongest bonds, irrespective of whether the bond lacks an inversion center or not. In section II we give the selection rules and the order of magnitude of such electric dipole transitions. We compare with the experiments in CuGeO3 and SrCu2(BO3)2 in section III. II. EFFECTIVE MAGNETIC OPERATOR AND SELECTION RULES In this section we show that the first-order spin-orbit correction to the spin-phonon coupling indeed leads to an effective magnetic operator for the optical transitions. We note that a phonon-assisted optical transition is the usual explanation for the occurrence of the singlet $S_{tot} = 0$ bound states of two magnons in the spectrum of the high-T_c cuprates. 6 The spin-orbit correction should then lead to transitions to $S_{tot} = 1$ states. We start with a magnetic Hamiltonian for a chain or a layer of Cu atoms, for instance, that can be motivated by the usual super-exchange arguments:

$$H = \sum_{i,a} J(\{u_{id}\})\,\mathbf{S}_i\cdot\mathbf{S}_{i+a} + H_{ph} - \mathbf{P}_{ph}\cdot\mathbf{E}, \qquad (2)$$

where $\mathbf{S}_i$ is a spin operator, $u_{id}$ is the displacement vector of the ion d in the unit cell i, $H_{ph}$ is the phonon Hamiltonian, which takes into account the kinetic part of the ions and the spring constants, $\mathbf{P}_{ph}$ is the electric dipole of the ions, and E is the external electric field. The magnetic couplings $J(\{u_{id}\})$ can be expanded to first order in the ion displacements. Including the first order in spin-orbit coupling, there is an extra term of Dzyaloshinski-Moriya symmetry, so that the spin-phonon coupling reads

$$H_{sp} = \sum_{i,a}\sum_{d,\alpha} u^{\alpha}_{id}\Big[g^{\alpha}_{d}\,\mathbf{S}_i\cdot\mathbf{S}_{i+a} + \sum_{\beta} d^{\alpha\beta}_{d}\,\big(\mathbf{S}_i\times\mathbf{S}_{i+a}\big)_{\beta}\Big], \qquad (3)$$

where $g^{\alpha}_d$ is the partial derivative of the diagonal part of $J(\{u_{id}\})$ with respect to $u_{id}$ (it depends on the bond i, a, but we will not write this explicitly in the following). The origin of $d^{\alpha\beta}_d$ is explained below. This is indeed a general form for the spin-phonon coupling, and there is no restriction to be added on the grounds of symmetry. The static Dzyaloshinski-Moriya interaction is forbidden when there is an inversion center at the middle of the bond. If the set of displacements $u_{id}$ is such as to remove the inversion center (which is the general case), then such an interaction does arise. For example, if we take the two symmetric ninety-degree super-exchange paths Cu-O-Cu, there is a center of inversion, and there is an interference between the two paths that leads to no Dzyaloshinski-Moriya interaction. Suppose now that the two oxygens move upwards. Because the hopping of the electrons is much faster than the typical phonon frequency, the electrons see a frozen distorted lattice on that time scale. The interference therefore no longer occurs, and there is an effective Dzyaloshinski-Moriya interaction linear in the displacements at first order. This is the origin of the second term of (3), which involves a tensor $d^{\alpha\beta}_d$, since displacements in one direction, α, generally produce a Dzyaloshinski-Moriya vector in another direction, β. Strictly speaking, $d^{\alpha\beta}_d$ also depends upon the bond i, a, but we do not write this explicitly. Note that this term is derived in a super-exchange approach by taking into account the spin-orbit coupling at first order in perturbation theory, along the lines of Moriya's original article. 27 We shall refer to it as a dynamical Dzyaloshinski-Moriya interaction in the following.
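Before computing the transition rate, it is easy to verify numerically the symmetry content of this dynamical Dzyaloshinski-Moriya term: on a single bond, the vector product (S_1 × S_2) connects the singlet to the triplet, while the isotropic exchange S_1 · S_2 does not. A self-contained numpy check (our illustration, not part of the original derivation):

```python
# Numerical check (numpy): the Dzyaloshinski-Moriya form d.(S1 x S2)
# connects the singlet to the triplet, while the isotropic exchange
# S1.S2 does not. This is the symmetry fact behind the Delta S_tot = 1
# selection rule discussed in the text.
import numpy as np

sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

S1 = [np.kron(s, I2) for s in (sx, sy, sz)]
S2 = [np.kron(I2, s) for s in (sx, sy, sz)]

cross_z = S1[0] @ S2[1] - S1[1] @ S2[0]        # (S1 x S2)_z
exchange = sum(a @ b for a, b in zip(S1, S2))  # S1 . S2

up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
singlet = (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)
triplet0 = (np.kron(up, dn) + np.kron(dn, up)) / np.sqrt(2)

print("<t0|(S1xS2)_z|s> =", triplet0 @ cross_z @ singlet)   # non-zero (-0.5j)
print("<t0| S1.S2  |s> =", triplet0 @ exchange @ singlet)   # zero
```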
The origin of d^{αβ}_d is explained below. This is indeed a general form for the spin-phonon coupling and there is no restriction to be added on the grounds of symmetry. The static Dzyaloshinski-Moriya interaction is forbidden when there is an inversion center at the middle of the bond. If the set of displacements u_{id} is such as to remove the inversion center (which is the general case), then such an interaction takes place. For example, if we take the two symmetric ninety-degree super-exchange paths Cu-O-Cu, there is a center of inversion and there is an interference between the two paths that leads to no Dzyaloshinski-Moriya interaction. Suppose now that the two oxygens move upwards. Because the hopping of the electrons is much faster than the typical phonon frequency, the electrons see a frozen distorted lattice on that time scale. The interference therefore no longer occurs and there is an effective Dzyaloshinski-Moriya interaction linear in the displacements at first order. This is the origin of the second term of (3), which involves a tensor d^{αβ}_d, since displacements in one direction, α, generally produce a Dzyaloshinski-Moriya vector in another direction, β. Strictly speaking, d^{αβ}_d also depends upon the bond i, a, but we do not write this explicitly. Note that this term is derived in a super-exchange approach by taking into account the spin-orbit coupling at first order in perturbation theory, along the lines of Moriya's original article [27]. We shall refer to it as a dynamical Dzyaloshinski-Moriya interaction in the following. The transition probability is then given at zero temperature by the "golden rule":

    I(\omega) \propto \sum_f |\langle f | H_E | 0 \rangle|^2 \, \delta(\omega - \omega_f)

where ω_f is the energy of the excitation, typically the one-magnon energy. At first order in H_sp in perturbation theory, the matrix element is written in terms of a sum over the excited states:

    \langle f | H_E | 0 \rangle = \sum_n \frac{\langle f | H_{sp} | n \rangle \langle n | -\mathbf{P}_{ph} \cdot \mathbf{E} | 0 \rangle}{E_0 - E_n} + \ldots

The intermediate states that contribute to the sum over n contain one phonon (whereas the initial and final states we are interested in do not contain any phonon). The partial phonon matrix elements are calculated out, but we keep the general form for the magnetic states at this stage. In other words, the phonons are integrated out and we end up with an effective matrix element acting between different magnetic states:

    \langle f | H_E | 0 \rangle \simeq \sum_s \frac{\mathbf{D}_s \cdot \mathbf{E}}{\omega_f - \Omega_s} \, \langle f | \sum_{i,a} \left[ g_s \, \mathbf{S}_i \cdot \mathbf{S}_{i+a} + \mathbf{d}_s \cdot (\mathbf{S}_i \times \mathbf{S}_{i+a}) \right] | 0 \rangle

where D_s = Σ_d q_d λ_{ds,q=0} is the amplitude of the instantaneous electric dipole of the unit cell due to the phonon mode s with energy Ω_s = Ω_{q=0,s} (q_d being the charge of ion d), and the final magnetic state has an energy ω_f. g_s = Σ_{d,α} g^α_d λ^α_{ds} is the amplitude of the variation of the magnetic exchange energy due to the atomic distortions of the phonon s (λ^α_{ds} is the amplitude of the motion of the atom d, in the direction α, due to the phonon s at q = 0). Similarly, d^α_s = Σ_{d,β} d^{βα}_d λ^β_{ds} is the amplitude of the instantaneous Dzyaloshinski-Moriya vector due to the phonon s. The resulting effective couplings,

    \gamma = \sum_s \frac{(\mathbf{D}_s \cdot \mathbf{E}) \, g_s}{\omega_f - \Omega_s}, \qquad \boldsymbol{\delta} = \sum_s \frac{(\mathbf{D}_s \cdot \mathbf{E}) \, \mathbf{d}_s}{\omega_f - \Omega_s},    (9)

depend on the bond considered. They would usually couple the nearest neighbors, but could be introduced for neighbors at larger distances if such super-exchange processes were likely to take place. They can be introduced on the basis of the symmetry, which is usually reduced with respect to the crystal symmetry by the presence of the external electric field. Thus we have written the effective operator announced in eq. (1), with A^{βγ} = ∂δ^γ/∂E^β. The selection rules are:

• (i) D_s · E ≠ 0: the virtual phonon s creates distortions that carry an instantaneous electric dipole D_s. In other words, the phonon s must be infra-red active.
• (ii) g_s ≠ 0: the distortion of the unit cell due to the phonon s modulates the magnetic exchange between the spins. Transitions with ΔS_tot = 0 are then allowed.

• (iii) d_s ≠ 0: the distortion of the unit cell due to the phonon s must instantaneously break the inversion symmetry at the middle of the bond, so as to allow an instantaneous Dzyaloshinski-Moriya interaction whose amplitude is given by d_s. Transitions between states that differ in total spin, ΔS_tot = 1, are then allowed and have an intensity ∼ δ².

Suppose that there is only one phonon mode s which gives a major contribution to the sum. In addition, we know that this active phonon mode will appear in the infrared spectrum at the energy Ω_{q=0,s}, with an intensity given by I_{ph,s} ∝ (D_s · E)². We can therefore rewrite the intensity of the ΔS_tot = 1 line as

    I_e \sim \left( \frac{d_s}{\Omega_s} \right)^2 I_{ph,s}

We denote by E the order of magnitude of the variation of the magnetic exchange energy due to the phonon and, following Moriya [27], we estimate d_s ∼ (Δg/g) E. That gives

    I_e \sim \left( \frac{\Delta g}{g} \right)^2 \left( \frac{E}{\Omega_s} \right)^2 I_{ph,s}

This expression gives the intensity of such a process compared to the intensity of the optically active phonon. It is reduced by two factors: the spin-orbit coupling (in the cuprate materials, Δg/g can be 0.1) and the ratio of the energy modulation of the magnetic exchange due to the phonon to, roughly, the energy of the same phonon. The latter is difficult to estimate: in CuGeO3, the first optical phonons have Ω ∼ 10 meV, and the modulation can be as large as E ∼ 1 meV [29]. That gives I_e ∼ 10⁻⁴ I_ph. Another way to compare is to consider that singlet excited states, such as the S = 0 bound state below the continuum in CuGeO3, appear in the optical spectrum due to the isotropic spin-phonon coupling (the γ term). We denote their intensity by I_singlet. If the singlet bound state appears in the optical spectrum with an intensity I_singlet due to the isotropic spin-phonon coupling, the triplet states should also appear, with an intensity roughly 100 times smaller if Moriya's estimate applies.

Effect of a magnetic field. We consider a basic triplet excitation here. A magnetic field lifts the degeneracy of the triplet into three branches. When H ∥ δ (∥ z), S_z is a good quantum number and the transition should satisfy ΔS_z = 0. Therefore, only the mode S_z = 0 can be observed, and its intensity does not depend on the strength of the field. By contrast, when the magnetic field is perpendicular to δ, the wave-functions are superpositions of wave-functions with different S_z:

    \Psi_{\pm'} = \tfrac{1}{2} \left( |1,+1\rangle \pm \sqrt{2}\, |1,0\rangle + |1,-1\rangle \right), \qquad \Psi_{0'} = \tfrac{1}{\sqrt{2}} \left( |1,+1\rangle - |1,-1\rangle \right)

where the vector notation stands for |S, S_z⟩. The transition is allowed to the states Ψ_{±'}, with quantum numbers S_⊥ = ±1, and the mode whose energy does not depend on the field has no intensity. The magnetic field dependence is therefore very different from what is expected for magnetic dipole transitions [22]. This is basically because the electric field conserves the S_z quantum number. As we have just seen, however, in a transverse magnetic field S_z is no longer conserved and the magnetic field-dependent branches may appear in the optical spectrum. They do indeed appear in CuGeO3 [10]. We now compare the intensities of the magnetic dipole transitions with those of the electric dipole transitions that we have made explicit here. To make such a comparison, we consider the following two models that give intensity to the optical transitions. First, a purely magnetic model with magnetic dipole transitions.
In order to have an intensity, we need to add a static magnetic anisotropy, such as a Dzyaloshinski-Moriya interaction or an anisotropy in the g factor, which are both first order in the spin-orbit coupling, so that in the most favourable case (when no lattice selection rule forbids it) the matrix element is of order ∼ Δg/g at best. In the second model, we consider an isotropic magnetic model, but we add the anisotropic spin-phonon coupling that we have considered above. The intensities of the transitions of the two models scale as

    I_M \sim \left( \frac{\Delta g}{g} \, g\mu_B H \right)^2, \qquad I_E \sim \left( \frac{\Delta g}{g} \, \frac{g_s}{\Omega} \, e\lambda E \right)^2

so that the ratio is

    \frac{I_E}{I_M} \sim \left( \frac{g_s}{\Omega} \, \frac{e\lambda c}{g\mu_B} \right)^2

where E = cH and c is the speed of light. The instantaneous dipole D is given by eλ, where λ = \sqrt{\hbar/(2M\Omega)} is the amplitude of the motion of the ion and e is its charge. With M_Cu = 63 g/mol (M ∼ 10⁻²⁵ kg) and Ω = 10 meV, we find λ ∼ 0.1 Å; gμ_B = 120 μeV/T. We take ω = 5 meV for the energy of the magnetic mode and g_s = 2 meV for the spin-phonon coupling. This estimation has to be taken with a pinch of salt because of the crude orders of magnitude given above, but it shows that there is no particular reason not to consider the electric dipole transition due to the dynamical Dzyaloshinski-Moriya interaction.

We compare the selection rules derived above with the experimental observations in CuGeO3. Experimentally, the absorption has been observed in the configuration E ⊥ c, but an extinction has been reported for E ∥ c [11], even in the presence of a magnetic field [14]. We have a natural interpretation of this fact: when E ∥ c, the only contributions to δ come from the virtual phonons s that have D_s ∥ c; in other words, the virtual phonons involved are those which are optically active in this configuration. The vector δ is given by Σ_β d^{αβ}_d λ^β_{ds,q=0}, where the λ^β_{ds,q=0} are the displacements of the atoms, the same as those that appear at higher energy in the real phonon state s. In the configuration E ∥ c, the atoms in the phonon state s roughly move along the c-axis. In a crystal with many atoms per unit cell, this is not exactly true, and the displacements will acquire other components (a full study of the phonons that have been theoretically predicted in Ref. 30 does not change the picture). Then, according to figure 1, the dynamical Dzyaloshinski-Moriya interaction is forbidden (d_s = 0) because of the mirror plane containing the atoms and the mirror plane perpendicular to the previous one and containing the Cu atoms. Therefore, the intensity vanishes in this special configuration. In other configurations, however, there is no such symmetry argument leading to a cancellation of the dynamical Dzyaloshinski-Moriya interaction, and an intensity is expected, in agreement with the experiment performed in CuGeO3.

[Figure 1: On the left, motion along the y-axis creates a Dzyaloshinski-Moriya interaction whose vector is along z (the mirror plane of the Cu2O2 plaquette and the perpendicular one containing the O(2) atoms); in the middle, the atoms move out of the plane and the Dzyaloshinski-Moriya vector is along y (mirror plane xz passing through the Cu-Cu bond); on the right, these distortions break the inversion center at the middle of the Cu-Cu bond, but the two perpendicular mirror planes xy and xz imply that the Dzyaloshinski-Moriya interaction actually vanishes. For CuGeO3, the x-axis is the c-axis and the xy plane is the plane of the CuO2 chains.]
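The order-of-magnitude estimates above are easy to check numerically. A minimal sketch in Python follows, using the values quoted in the text; the final electric-to-magnetic ratio uses the schematic form of the two matrix elements written above (with the Δg/g factors cancelling), which is an assumption of this sketch rather than an exact expression.

```python
# Numerical check of the order-of-magnitude estimates quoted in the text.
import math

dg_over_g = 0.1          # Delta g / g, spin-orbit factor
E_mod = 1.0e-3           # modulation of the exchange by the phonon (eV, ~1 meV)
Omega = 10.0e-3          # optical phonon energy (eV, ~10 meV)
print("I_e / I_ph ~", (dg_over_g * E_mod / Omega) ** 2)   # ~1e-4, as stated

hbar = 1.0546e-34        # J s
M = 1.0e-25              # kg (M_Cu = 63 g/mol)
Omega_J = Omega * 1.602e-19
lam = math.sqrt(hbar**2 / (2.0 * M * Omega_J))            # zero-point amplitude
print("lambda ~", round(lam * 1e10, 3), "Angstrom")       # ~0.06 A, i.e. ~0.1 A

e, c, muB = 1.602e-19, 3.0e8, 9.274e-24                   # SI constants
g_sp = 2.0e-3 * 1.602e-19                                 # spin-phonon coupling, 2 meV in J
ratio = ((g_sp / Omega_J) * e * lam * c / muB) ** 2       # schematic I_E / I_M
print("I_E / I_M ~", round(ratio))                        # order 10-100
```

With these inputs the electric-dipole channel comes out one to two orders of magnitude above the magnetic-dipole one, consistent with the conclusion drawn in the text.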
We now consider the electric dipole transitions in SrCu2(BO3)2 in greater detail. The obvious advantage of this compound is that, neglecting anisotropies, it is described by the Shastry-Sutherland Hamiltonian [31], which possesses an exactly known ground state, a product of local singlets [32]. Optical transitions have been observed between this ground state and each of the zero-field three split triplet states [12,18] (see Fig. 2) that have been described previously [33]. The probability of a transition between the ground state Ψ_0 and an excited state f is given by

    I_f \propto |\langle \Psi_f | H_E | \Psi_0 \rangle|^2

We have restricted the operator H_E to the nearest-neighbor spins (nn) in order to find the largest effect:

    H_E = \sum_{nn} \left[ \gamma \, \mathbf{S}_i \cdot \mathbf{S}_j + \boldsymbol{\delta}_{ij} \cdot (\mathbf{S}_i \times \mathbf{S}_j) \right]    (18)

The first part of it does not change the total spin but may generate transitions to the first excited states if the system has some anisotropy. We have considered previously the existence of a Dzyaloshinski-Moriya interaction whose vector is perpendicular to the plane [33]. We have shown that such first-order anisotropy does not give intensity within the assumption of magnetic dipole transitions. Here we start by considering the electric dipole transitions generated by the first part of the operator (18), in the presence of the static Dzyaloshinski-Moriya interaction. Using a symmetry argument we show that this part actually vanishes. S_z is a conserved quantity, so that we only need to consider the matrix elements with an S_z = 0 final state. The mirror plane perpendicular to the (ab) plane and passing through a dimer is a symmetry of the crystal. In this symmetry, the ground state and the operator Σ_nn γ S_i·S_j are both even. However, the triplet state S_z = 0 adiabatically connected to the local triplet at J′ = 0 (J′ is the next-nearest-neighbor exchange) is odd. The matrix element therefore vanishes. Additional spin anisotropy of Dzyaloshinski-Moriya symmetry with extra in-plane components is present because of the small buckling of the crystal structure at low temperatures [25]. However, this (together with possible exchange anisotropies) would, in any case, respect the same mirror-plane symmetry. So the first term is not expected to give intensity, because of this special symmetry. In SrCu2(BO3)2, the transitions have been studied using polarized electromagnetic waves and exhibit very peculiar polarisation properties: in the configuration E ∥ (ab), at zero field, only the state at 24.2 cm⁻¹ (i.e. the S_z = 0 state, the middle state) appears in the spectrum, but an external in-plane magnetic field gives intensity to the two other modes (upper and lower modes) [18]. Similarly, when the electric field is parallel to the c-axis, only the upper state at 25.4 cm⁻¹ (i.e. the S_z = ±1 states) appears at zero magnetic field, while an in-plane magnetic field allows observation of the middle state, but not the lower one [18]. We now show that these observations are compatible with the dynamical Dzyaloshinski-Moriya interaction, which leads to the second part of the effective operator (1). To explain these results we need to find the particular pattern of dynamical Dzyaloshinski-Moriya vectors and then the δ_ij. That crucially depends on the direction of the electric field of the wave, according to eq. (9). In the following, we will determine the δ_ij, restricting them to nearest-neighbor interactions.

Configuration E(t) ∥ (ab). Let us consider first the case of a wave-vector of the electromagnetic wave parallel to the c-axis; the electric field then lies in the (ab) plane. According to the first selection rule (i), only the virtual phonons which carry an electric dipole D_s ∥ (ab) may contribute to the sum (9).
We basically assume that the main displacements of the atoms in such a virtual phonon mode are confined to the (ab) plane. We make the assumption that the main components of λ_ds are parallel to the electric field, so that we should be able to find the main components of the Dzyaloshinski-Moriya vectors d_{ij,s} (eq. 9). To estimate them (and then the δ_ij), we fix the atoms d at the distorted positions λ_ds and then apply Moriya's rules, which give the constraints on the Dzyaloshinski-Moriya vectors. In this case, the (ab) plane remains instantaneously an approximate mirror plane for the crystal structure. Consequently, the instantaneous d-vector between the spins, generated by the distortions, should be perpendicular to this plane (parallel to the c-axis). The effective operator is therefore written

    H_E = \delta_z^A \sum_{nn \in A} (\mathbf{S}_i \times \mathbf{S}_j)_z + \delta_z^B \sum_{nn \in B} (\mathbf{S}_i \times \mathbf{S}_j)_z    (19)

where z is here again the c-axis. We have introduced two different δ_z^{A,B} to take into account the existence of two dimers per unit cell; taking them equal would not change the argument, and in the following we use the notation δ_z for both. The operator (19) does not break the symmetry of rotation around the c-axis. A transition to the S_z = ±1 states when the external magnetic field is parallel to the c-axis is therefore still forbidden. Only the S_z = 0 triplet mode (at the middle of the others [33]) is allowed to appear in the spectrum (this is in agreement with the general symmetry argument given above, since the electric field breaks the symmetry by the mirror plane). This is in agreement with the experimental result at zero field [18]. We further predict that a magnetic field parallel to the c-axis does not change the picture and gives no intensity to the other branches. We can give an estimate of the intensity assuming an approximate wave-function for the excited state, which we take from the strong dimerization limit. In this approximation, the excitation with S_z = 0 is a purely local triplet on the dimer A or B. This gives an intensity

    I_0 \simeq \frac{\delta_z^2}{2}

summing the two dimers of the unit cell. We now consider the effect of a transverse magnetic field (H ⊥ c) on the intensities. A transverse magnetic field splits the modes into three branches (figure 2, left). To evaluate the intensity of each branch, we first calculate the excited states in the approximation used above, taking into account the static Dzyaloshinski-Moriya interaction which is responsible for the zero-field splitting. Note that the other in-plane components do not play any role in the triplet spectrum at q = 0 [25], so that only the perpendicular component appears in the following. The eigenvalues are in fact twice degenerate. The eigenvectors are denoted by Ψ^{(±,0)}_{q=0} and Ψ^{(±,0)′}_{q=0}, with energies E^{(±,0)}_q. We then calculate the matrix elements as a function of the transverse magnetic field:

    I^{(\pm,0)}_E(H_\perp) \propto |\langle \Psi^{(\pm,0)}_{q=0} | H_E | \Psi_0 \rangle|^2 + |\langle \Psi^{(\pm,0)'}_{q=0} | H_E | \Psi_0 \rangle|^2

We find expressions for the intensities of the three branches as functions of h = gμ_B H_⊥/2D, the transverse magnetic field in units of the static Dzyaloshinski-Moriya interaction. A transverse field transfers intensity into the lower and upper modes. The curves I⁰_E(H_⊥) and I^±_E(H_⊥) are shown in figure 3 together with the experimental results of Ref. 18. We have used the non-renormalized value of D = 0.09 meV extracted from the energy spectrum [33] (all the calculations performed here are in the limit J′/J → 0, so that we use the value of D we would have extracted from such a calculation and not the renormalized value). Note that if we take I⁰_E(H_⊥) and I⁺_E(H_⊥), for instance, they cross at a given field, ∼1 T, which is in good agreement with the crossing of the fitted intensities in the original experimental article (H_⊥ = 2.3 T) [18]. This is most probably coincidental, since we are using wavefunctions that are not renormalized by the interaction J′.
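The qualitative field dependence just described can be checked by numerically diagonalizing a minimal model. The sketch below (Python) uses a single dimer with the static Dzyaloshinski-Moriya term placed directly on the dimer as a stand-in for the zero-field splitting; this is an assumption of the sketch, not the full two-dimer calculation used above, so it reproduces the qualitative features only: at zero field only the middle, t0-like branch is bright, a transverse field transfers weight to the other branches, and the branch whose energy is field-independent stays dark.

```python
# Toy single-dimer model for E(t) || (ab): transition operator (S1 x S2)_z on
# a singlet ground state. J ~ 3 meV matches the ~24 cm^-1 triplet; D and g muB
# are the values quoted in the text.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2)
S1 = [np.kron(s, I2) for s in (sx, sy, sz)]
S2 = [np.kron(I2, s) for s in (sx, sy, sz)]

J, D, gmuB = 3.0, 0.09, 0.12                 # meV; gmuB in meV/T
heis = sum(a @ b for a, b in zip(S1, S2))
Oz = S1[0] @ S2[1] - S1[1] @ S2[0]           # (S1 x S2)_z, couples |s> <-> |t0>

for H_T in (0.0, 1.0, 4.0):                  # transverse field (tesla)
    H = J * heis + D * Oz + gmuB * H_T * (S1[0] + S2[0])
    E, V = np.linalg.eigh(H)
    gs = V[:, 0]                             # singlet-like ground state
    inten = [abs(V[:, n].conj() @ (Oz @ gs)) ** 2 for n in range(1, 4)]
    print(f"H_perp = {H_T:3.1f} T, E-E0 (meV):", np.round(E[1:] - E[0], 3),
          " I/delta_z^2:", np.round(inten, 3))
```

The quantitative intensity curves and the crossing field require the full two-dimer calculation with the renormalized wavefunctions, as the text emphasizes.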
Configuration E(t) ∥ c. We now consider the case of an electric field perpendicular to the plane, E(t) ∥ c. Let us suppose that the atoms move out of the plane. According to figure 1, the dynamical Dzyaloshinski-Moriya interaction would then be in-plane and perpendicular to the Cu-Cu bond. The dimers are, however, perpendicular to one another; the dynamical Dzyaloshinski-Moriya vectors of adjacent dimers should therefore be perpendicular as well. The effective electric operator is

    H_E = \boldsymbol{\delta} \cdot \sum_{nn \in A} (\mathbf{S}_i \times \mathbf{S}_j) + \boldsymbol{\delta}' \cdot \sum_{nn \in B} (\mathbf{S}_i \times \mathbf{S}_j)

where δ (respectively δ′) is perpendicular to the Cu-Cu bond of the dimers A (resp. B), so parallel to y (resp. x). Note that we take the same |δ| and |δ′|. Strictly speaking, there is no reason why they should be the same, but taking into account the special direction of the field we can reasonably assume that the motions of the atoms belonging to adjacent dimers are similar, at least for the low-energy phonons. Let us apply this operator to the ground state, which is approximately a product of singlet states on the dimers (we thus neglect the effect the static Dzyaloshinski-Moriya interactions have on the ground state, which would give small corrections to the result). Note that Ψ^{+,S_z=+1}_{q=0} and Ψ^{−,S_z=−1}_{q=0} are both eigenstates of the Hamiltonian restricted to triplet states, with the same energy J + 2D. Depending on the sign of D, therefore, only the upper mode or the lower mode should appear in the spectrum. Experimentally, the upper mode has been found in such a polarised configuration [18], so we conclude that D > 0. Only a detailed super-exchange calculation of D would be able to confirm this. The matrix elements give the intensities; in zero external magnetic field, the two final states are degenerate, so that the total intensity of the optical transitions is the sum of the two, i.e. δ²/2. In a magnetic field parallel to the c-axis (z-axis), the upper mode splits into two branches with equal intensity δ²/4. Furthermore, we calculate the intensities as a function of a transverse magnetic field. The excited states Ψ^{(±,0)}_{q=0} and Ψ^{(∓,0)′}_{q=0} are twice degenerate, so we calculate both contributions. We find expressions for the intensity of the upper (+), lower (−) and middle (0) states as functions of h = gμ_B H_⊥/2D. The corresponding curves are given in figure 4. Note that the crossing between I⁺_E and I⁰_E occurs at gμ_B H_⊥ = 4√2 D, therefore at a field two times larger than in the configuration E ∥ (ab). The agreement with the experiment is very good, since such a balance of the intensities has been observed [18]. The lower mode does not actually appear in the spectrum experimentally, and this is compatible with the low intensity we found. If we take the non-renormalized value of D = 0.09 meV, the crossing of the intensities occurs at H_⊥ = 4.6 T, which is in good agreement with the experimental value (∼6 T), as is the overall behavior of the curves.
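Both the zero-field selection rule and the crossing condition quoted above can be checked directly. The sketch below (Python) verifies that (S1×S2)_y connects the dimer singlet only to the S_z = ±1 sector, and evaluates the crossing field gμ_B H_⊥ = 4√2 D with the numbers given earlier; the ≈4.2 T obtained with gμ_B = 0.12 meV/T is close to, though not identical with, the 4.6 T quoted in the text, a small difference that presumably reflects the precise g-factor used.

```python
# Zero-field matrix elements of (S1 x S2)_y on a dimer singlet, plus the
# stated crossing condition for E(t) || c.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2)
S1 = [np.kron(s, I2) for s in (sx, sy, sz)]
S2 = [np.kron(I2, s) for s in (sx, sy, sz)]

up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
singlet = (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)
triplets = {"t+": np.kron(up, up),
            "t0": (np.kron(up, dn) + np.kron(dn, up)) / np.sqrt(2),
            "t-": np.kron(dn, dn)}

Oy = S1[2] @ S2[0] - S1[0] @ S2[2]        # (S1 x S2)_y
for name, t in triplets.items():
    w = abs(t.conj() @ (Oy @ singlet)) ** 2
    print(name, "|<t|Oy|s>|^2 =", round(w, 3))
# -> only t+ and t- carry weight (1/8 each, i.e. delta^2/4 per dimer in total)

D, gmuB = 0.09, 0.12                       # meV, meV/T (values quoted above)
print("crossing field ~", round(4 * np.sqrt(2) * D / gmuB, 1), "T")
```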
IV. CONCLUSIONS

In this paper, we have considered optical transitions with emission of one magnetic excitation, ΔS_tot = 1. We give a mechanism in terms of phonon-assisted transitions in which a virtual phonon is involved. The selection rules of such processes were made explicit: in brief, we need a coupling to an infrared-active phonon that breaks, at least instantaneously, the symmetry of inversion between magnetically coupled ions. The intensity of such a process has been estimated, and we argue that it should be larger than that of a magnetic-dipole transition, at least in systems in which spin-phonon couplings are appreciable. It provides an alternative to purely electronic transitions, which are not allowed when an inversion center is present. We note that we have considered only the consequences of phonon-assisted optical transitions in the context of single-photon experiments, i.e. ESR and absorption. The same mechanism can lead to processes in Raman scattering allowing single-magnon creation, with similar selection rules concerning centers of inversion in the lattice. The effective operators will have similar symmetry but are not identical, involving the polarisations of both incoming and outgoing photons. Experimentally there are extra contributions, linear in both spin operators and spin-orbit couplings, that are not present in the single-photon case. While for the spectroscopy of single magnons in the materials studied Raman scattering should be useful, single-photon experiments may permit more direct comparison with microscopic estimates of intensities. In the final section we have studied the two specific cases of CuGeO3 and SrCu2(BO3)2, for which polarised experiments are available. We have shown that the predictions of the phonon-assisted theory agree well both with the observed extinctions and also, for the case of SrCu2(BO3)2 where detailed results are available, with the dependence of the intensities as a function of the external magnetic field. Further optical data should be analysed in terms of an effective operator of Dzyaloshinski-Moriya symmetry for the matrix elements in the electric dipole approximation. Potentially, such optical experiments can provide a means of probing microscopically the spin-phonon coupling, which may be relevant to other experiments, for example neutron inelastic scattering at finite momentum transfer, and a way of studying four-spin correlation functions involving some sort of local chiralities. We would like to thank T. Rõõm for correspondence and for providing us with his experimental results, and J.-P. Boucher, H. Nojiri, and T. Sakai for stimulating discussions. O.C. acknowledges financial support from the I.L.L. and the Indo-French grant IFCPAR/2404.1.
2019-03-22T20:09:17.861Z
2004-01-14T00:00:00.000
{ "year": 2004, "sha1": "8661ceebae1072e3115eba3a369ef1c37e99b1b9", "oa_license": null, "oa_url": null, "oa_status": "CLOSED", "pdf_src": "Arxiv", "pdf_hash": "8661ceebae1072e3115eba3a369ef1c37e99b1b9", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
256813423
pes2o/s2orc
v3-fos-license
The Prevalence of Bacteria Commonly Related to the Production of Mussels and Oysters in Saldanha Bay

Introduction

Shellfish farming is becoming an important sector for the South African government as it creates much-needed job opportunities for the coastal communities. The sustainability and safety of shellfish growing areas are essential in terms of protection from contaminants and preventing contaminants from reaching the bivalves produced [1]. Bivalve molluscs play important ecological functions in aquatic ecosystems as well as being highly nutritious. They can be found at the bottom of the sea, attached to hard surfaces, or attached to one another. Their filter-feeding nature assists in purifying the surrounding waters and increases the penetration of sunlight [2]. Furthermore, they provide micronutrients to other marine organisms, increasing primary production and nutrient recycling, coastal habitat conservation, and restoration [3,4]. These characteristics and the ability to bioaccumulate materials in their soft tissues make bivalve molluscs suitable aquatic species for biomonitoring of environmental conditions. Bioaccumulation of materials is not selective, as both beneficial and harmful materials are equally accrued [5]. Besides being highly nutritious compared to beef, chicken, and pork, bivalves are highly perishable and require proper handling from farm to fork. Failure to adhere to food safety best practices could lead to an increased risk of illness from pathogens, including bacteria, viruses, and protozoa [6,7].

Aquatic environments are home to various microbiota, some indigenous and some introduced through anthropogenic activities around these environments. The presence of pathogens in aquatic ecosystems is a risk to the shellfish production industry and poses a public health threat. Several foodborne outbreaks have been reported globally due to the consumption of contaminated shellfish [8-10]. Zgouridou et al. [11] indicated in their study that mussels of the genus Mytilus are primarily the bivalve species that pose a public health risk to consumers, as well as oysters (Ostrea edulis) and clams (Venus verrucosa). Bioaccumulation and bioconcentration of pathogens vary according to host species and seasonality. Both mussels and oysters can concentrate pathogens in their body tissues. However, oysters are an important medium for infecting humans with these pathogens as they are eaten raw or partially cooked [12]. Several studies have been conducted worldwide to determine the bacterial prevalence in shellfish-growing waters [13-15]. To date, similar studies have not been conducted in Saldanha Bay except for the microbiological monitoring undertaken by the South African Department of Forestry, Fisheries, and the Environment. This created a need to investigate the bacterial communities, especially the disease-causing ones, that may be present in this Bay. This study investigated pathogens commonly associated with shellfish-related foodborne disease outbreaks, such as Salmonella, Vibrio parahaemolyticus, Vibrio vulnificus and Vibrio cholerae, and the prevalence of Escherichia coli as an indicator species. The results did not conform to prior expectations, as bacteria such as the Enterobacter cloacae complex, Citrobacter freundii, Klebsiella pneumoniae spp. pneumoniae, Aeromonas sobria, Vibrio alginolyticus, and Sphingomonas paucimobilis were confirmed through biochemical characterisation.
The study applied an experimental design, and interpretations were formed using a multimethod, quantitative strategy over three sampling occasions. The researcher collected samples during warm, cold, and rainy periods, as informed by the obtained literature. Data collection and analysis techniques are detailed under materials and methods [16].

Study Area. Saldanha Bay harbour on the West Coast of South Africa (latitude: −33.027699, longitude: 17.917631) houses the biggest port in Southern Africa, operating as an international port for the export of iron ore. The Bay's water depth is approximately 23.7 m. Construction of a 4 km long iron ore jetty has divided the Inner Bay into Small Bay and Big Bay [17], and a 1.7 km long breakwater separates the Inner Bay from the Outer Bay. Small Bay is sheltered from offshore swells and has constrained water circulation, while Big Bay is semi-exposed to wave energy with better circulation compared to Small Bay. The Outer Bay, which is located at the mouth of the Bay, is regarded as the less polluted site [18]. The Bay is exposed to the disposal of treated and untreated sewage from the nearby wastewater treatment plant, which discharges into the Bok river. Several sewage pumps, ballast water, dredging, stormwater discharge, and ship traffic are some of the pollution sources close to Small Bay. Two mussel species are farmed: the indigenous black mussel (Choromytilus meridionalis), which is not a preferred species for farming due to the dark flesh colour of the females, and the exotic Mediterranean mussel (Mytilus galloprovincialis); the Pacific oyster (Crassostrea gigas) is also farmed. Figure 1 shows the five sampling points where mussels and seawater were collected.

Sample Collection. A total of 27 shellfish and seawater samples (mussels (n = 12), seawater (n = 13) and oysters (n = 2)) were collected from various sampling sites. Samples were collected in the morning between 8:00 am and 11:00 am during low tides in order to reach all sites, especially the offshore ones. Oysters were collected from the harbour deck immediately after harvesting by the farmers. Five sampling points were used for the collection of seawater and mussels: three of them located in Small Bay (SP1, SP2, and SP3), one in Big Bay (SP4), and the last one in Outer Bay (SP5) (Figure 1). Samples were collected in March (warm period), July (winter, before heavy rainfall), and August (winter, after heavy rainfall). During sampling, 30 oysters and 30 mussels were hand-picked and stored in sterile whirl-pack bags (Nasco, US). Seawater samples were collected (2 meters below the surface) in 1 liter sterile Schott bottles (Schott, UK), and mussels were collected from a hanging rope. Physicochemical parameters (i.e., water temperature (°C), salinity (psu), and dissolved oxygen (ppm)) were measured during sampling at each sampling point using a Hanna HI9810-6 multimeter. Samples were transported to the laboratory within 2 hours, in a cooler box packed with ice packs maintaining a temperature between 2 and 8 °C, and microbiological analyses were performed immediately.

Sample Preparation.
Upon arrival at the laboratory, the mussel and oyster samples were scrubbed under running tap water to remove shell debris and attached algae, and the shells were opened aseptically with a sterile shucking knife. Approximately 300 g of flesh and intravalvular liquid of mussels and oysters were stored in 500 g sterile beakers and then transferred aseptically into stomacher bags (Circulator 400, Seward, Worthing, UK). Samples were homogenised with 200 ml sterile phosphate water at 230 rpm for 2 minutes.

Most Probable Number (MPN) of Escherichia coli (Mussel, Oyster, and Seawater Samples). Lauryl Tryptose Broth (LTB) (Merck, Germany), Brilliant Green Bile Broth (BGBB) (Merck, Germany), and Tryptone Water (TW) (Merck, Germany) were prepared following the manufacturer's instructions. The analysis was conducted using the method described by Leuta [19]. Concentrated mussel and oyster homogenate extracted from the samples was used as stock to conduct the five-tube MPN technique. Serial dilutions of 10−1 to 10−5 of the mussel and oyster homogenate and seawater samples, respectively, were performed before inoculation of 1 ml of each diluted sample into LTB tubes containing Durham tubes. Durham tubes provide a visual indication of gas production. The inoculated test tubes were incubated for 48 hours at 37 °C (detecting all gas-producing organisms). All tubes showing gas formation after the 48-hour incubation period were regarded as a positive presumptive test, and the presumptive total MPN count was read off De Man's tables [20]. For each positive presumptive LTB tube, a 10 ml Brilliant Green Bile Broth (BGBB) tube and a 10 ml Tryptone Water (TW) tube were prepared. One hundred microliters (μl) of the sample from each positive LTB tube were reinoculated into the BGBB and TW tubes, respectively, according to the guidelines set out by the South African Bureau of Standards [21]. These guidelines also incorporate the standard methods set out by the American Public Health Association for the examination of seawater and shellfish, as well as the standard methods for the examination of water and wastewater [22,23]. These tubes were incubated in a 44.5 °C waterbath for 24 hours (incubation at 44 °C-44.5 °C has the specific advantage of detecting E. coli, as it is the only faecal coliform present in water capable of producing indole at this temperature). On observation of positive gas production in the BGBB tubes (indicating faecal coliforms (FC)) after 24 hours, a few drops of Ehrlich's reagent (LabChem, USA) were added to the corresponding TW tubes. The presence of E. coli was confirmed by a colour change from clear to pink or red in the Tryptone Water tubes.
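The table look-up can also be cross-checked computationally: De Man's values are maximum-likelihood estimates, and the same numbers can be recovered by solving the likelihood equation directly. A minimal sketch (Python with SciPy; the tube counts below are illustrative, not data from this study):

```python
import math
from scipy.optimize import brentq

def mpn_per_100ml(positives, tubes, volumes_ml):
    """Maximum-likelihood MPN from numbers of gas-positive tubes:
    positives[i] positive out of tubes[i] tubes, each inoculated with
    volumes_ml[i] of sample. Requires at least one negative tube overall."""
    def score(lam):  # derivative of the log-likelihood w.r.t. lam (organisms/ml)
        return sum(g * v * math.exp(-lam * v) / (1.0 - math.exp(-lam * v))
                   - (n - g) * v
                   for g, n, v in zip(positives, tubes, volumes_ml))
    return 100.0 * brentq(score, 1e-9, 1e6)  # root of the likelihood equation

# Five-tube series at 1, 0.1 and 0.01 ml inoculum with 5-4-2 positive tubes:
print(round(mpn_per_100ml([5, 4, 2], [5, 5, 5], [1.0, 0.1, 0.01])))
# -> ~2200 organisms/100 ml, matching the published MPN tables for 5-4-2
```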
Detection and Isolation of Salmonella. Salmonella spp. were detected according to the protocol based on ISO 6579-1:2007 [24]. Buffered Peptone Water (BPW) (Merck, Germany), Selenite Cysteine Broth (SCB) (Merck, Germany), and Salmonella-Shigella agar (SS agar) (Oxoid, UK) were prepared according to the manufacturer's instructions. Twenty-five grams (25 g) of mussel and oyster homogenate were aseptically weighed and placed into 225 ml sterile BPW to prepare a pre-enrichment culture. The mixture was then incubated at 37 °C for 16-20 h. After incubation, the sample was gently mixed, and 1 ml of the BPW was added into a sterile McCartney bottle. Ten milliliters (10 ml) of SCB were added to the sample to prepare an enrichment culture and incubated at 37 °C for 24 h. After the 24 h incubation period, the enriched culture was streaked onto SS agar and incubated inverted at 37 °C for 24 h. The plates were examined (typical pinkish-red colonies) for the absence or presence of Salmonella spp. Subsequently, Gram stains were performed on the obtained colonies. The observation of Gram-negative, non-spore-forming colonies confirmed the presence of Salmonella, while biochemical identification was carried out using VITEK 2 Compact Gram-negative (GN) ID cards (bioMérieux, France). A Salmonella typhimurium (NCTC 12023) strain was used as a positive control.

Statistical Analysis. A Pearson correlation was conducted to determine the relationship between seawater samples, shellfish samples, and physicochemical parameters (temperature, salinity, and dissolved oxygen). For all the tests, the criterion for statistical significance was p < 0.05.

Results and Discussion

3.1. Physicochemical Parameters. The physicochemical parameters recorded in Table 1 did not show significant variation: the recorded temperature ranged between 12 °C and 19 °C, salinity between 33.91 psu and 35.45 psu, and dissolved oxygen between 0.71 and 2.96 ppm, indicating prevailing hypoxic conditions, which are often associated with pollution due to anthropogenic activities [26]. Seawater temperature, salinity, pH, dissolved oxygen, turbidity, and organic matter become water quality stressors when present in excessive amounts [27]. These stressors may influence the survival, health, and growth of shellfish, which depend on the water quality of their growing environments. Poor water quality increases the risk of shellfish contamination with disease-causing pathogens [28].

An increase in salinity during warm periods and a decrease during cold periods were observed throughout the study, and this correlated with similar findings reported by Lamine et al. [29]. The lowest rainfall was observed in July and the highest rainfall in August. Colaiuda et al. [30] found in their study that the amount of rainfall and the increased E. coli concentrations in shellfish depend on the specific area where the samples were collected. Chahouri et al. [31] and Padovan et al. [32] found that high precipitation increases levels of faecal coliforms. In this study, no clear indication of the influence of rainfall on E. coli levels was detected.

[Figure 2: (a) Inoculation of undiluted and diluted samples into LTB tubes to obtain the positive presumptive test (adopted and adapted from [19]). (b) Reinoculation of positive LTB tubes into BGBB and TW tubes (adopted and adapted from [19]). (c) Enumeration of faecal coliforms (from BGBB tubes) and E. coli (from TW tubes) in water samples (adopted and adapted from [19]).]
Similar results were observed by Sampson et al. [33], where no association was found between precipitation and bacterial concentrations. Tabanelli et al. [34] included the influence of the flow rate of the river feeding into the coastal area of their study and concluded that meteorological events could bring a substantial amount of contaminated fresh water into coastal waters. This could be the case with the Bok river, which feeds into Saldanha Bay. During heavy rainfall, the flow rate of the Bok river is suspected to increase, which could wash down all the runoff from upstream agricultural areas as well as runoff from roads and residential areas [31].

3.2. Prevalence of Faecal Coliforms and Escherichia coli in Mussels and Oysters. Oyster harvesting did not take place during the March sampling occasion. Sampling sites SP4 (during March and July) and SP5 (during March) could not be reached due to high tides. In addition, no mussels were available during August at the SP4 site (Table 2). The total MPN counts per 100 ml of mussel samples were between 4.9 and 4700 microorganisms/100 ml, and for oysters 18 and 1000 microorganisms/100 ml (in July and August, respectively). An increased total MPN count of 400 microorganisms/100 ml was observed in mussel samples collected at SP1 in July. Of the recorded total MPN count at this site, the FC and E. coli counts were both <0.18 microorganisms/100 ml. In the August sampling run, mussels collected at SP2 recorded a total MPN count of 4700 microorganisms/100 ml, while a total MPN count of 1000 microorganisms/100 ml was recorded in oyster samples at the Harbour Deck site. In comparison, the FC and E. coli concentrations at these respective sites were 0.2 microorganisms/100 ml (mussels) and <0.18 microorganisms/100 ml (oysters). Mussels and oysters can accumulate and retain suspended particles of phytoplankton size and pathogenic microorganisms in their bodies due to their filter-feeding nature [35,36]. This creates a public health concern, especially for oysters, as oysters are consumed raw or partially cooked [37,38]. The spikes observed in mussels and oysters suggest possible contamination due to heavy rainfall or pollution sources, including the sewage pump stations, stormwater drains, and a sewage discharge point located in close proximity to the affected sampling sites [39,40]. Saldanha Bay Municipality recently made remarkable improvements to its sewage treatment plants and diverted the majority of treated effluent to the irrigation of sports grounds and use by interested local businesses. However, the little that is still being discharged, together with effluent from fish factory industries, untreated stormwater discharge, and ballast water, should not be underestimated. According to Clark et al. [41], shipping traffic has increased in the harbour, which brings large volumes of ballast discharge. All of these need to be monitored closely. Several studies in various parts of the world seem to agree that microbial contaminants are the result of treated and untreated sewage being discharged into shellfish growing waters, sewage overflow during rainfall periods, and runoff from agricultural areas [42,43]. Sewage is loaded with nutrients that, in excessive amounts, could stimulate microbial growth, production of harmful algal blooms, and eutrophication, ultimately affecting the viability of shellfish mariculture [44]. Even though the oyster samples were not taken from the farm but at the loading area of the harbour, i.e., the Harbour Deck, the samples came from the same farming area as the mussels.
3.3. Prevalence of Faecal Coliforms and Escherichia coli in Seawater. Sampling site SP5 could not be reached due to high tides in March and July (Table 3). The total MPN count per 100 ml in seawater ranged from <0.18 to 1.3 microorganisms/100 ml, with a high spike recorded at SP2 in August (2400 microorganisms/100 ml). Faecal coliform and E. coli concentrations were the same (<0.18 microorganisms/100 ml) at all sampling sites. The high increase in the total MPN count observed at SP2 in the seawater sample correlates with the spike in mussels collected during the same period. This could be attributed to heavy rainfall, stormwater drain and sewage discharges, and the location and proximity of the sampling site to pollution sources. Sampling site SP2 is located in Small Bay, which is subjected to various sources of pollution, including a sewage discharge outfall. Understanding the causes of faecal contamination in areas where shellfish are grown is essential for assessing the associated health risks and determining the way forward to address the problem [45].

During high tide episodes, pollutants can be transported rapidly from the areas where they are highly concentrated through advection, mixing, dispersion, and dilution of sewage [2]. The sampling sites in Small Bay, sheltered from the sea swells and close to the sewage discharge point, sewage pump stations, and stormwater drains, may not benefit from this natural process and therefore presented higher contamination levels. These natural processes are also evident in the analysis results of the Big Bay and Outer Bay sampling sites, where lower contamination levels were observed. Both sites are semi-exposed to the sea swells, explaining the relative improvement in water quality. In other words, locating shellfish farms far away from sewage discharge points could eliminate the microbial contamination problem. Similarly, Florini et al. [45] reported a decrease in the concentrations of faecal indicator species with an increase in distance from sewage discharge points. The low concentrations were ascribed to possible dilution and die-off effects.

Contamination of water bodies by wastewater is a fundamental problem worldwide. Bacteria, parasites, and viruses from animals and humans reach the oceans through runoff from roads and agricultural areas and through sewage discharges [46]. In addition, heavy rainfall may cause sewage overflows and drain leakages [47]. As mentioned, faecal coliforms and Escherichia coli are indicators of water quality. The presence of these organisms is undesirable in areas used for shellfish farming.

No correlation could be drawn between the total MPN counts in water (microorganisms/100 ml) and shellfish (microorganisms/100 g) samples and the physicochemical parameters, nor between rainfall patterns and MPN counts in water and shellfish (p > 0.05) (Table 4). However, as the total MPN count in water samples increased, the total MPN count in shellfish samples increased (r = 0.997, n = 11, p ≤ 0.001).
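The correlation statistics reported above follow the standard Pearson test. A minimal sketch (Python with SciPy; the paired counts below are invented for illustration and are not the study's data):

```python
import numpy as np
from scipy.stats import pearsonr

# Illustrative paired total MPN counts (not the study's data):
water     = np.array([0.18, 0.2, 1.3, 18.0, 400.0, 2400.0])     # per 100 ml
shellfish = np.array([4.9, 18.0, 40.0, 400.0, 1000.0, 4700.0])  # per 100 g

r, p = pearsonr(water, shellfish)
print(f"r = {r:.3f}, n = {len(water)}, p = {p:.4g}")
# The criterion used in the text is significance at p < 0.05.
```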
3.4. Bacterial Species Isolated from Selected Sample Sites. Salmonella spp., Vibrio cholerae, and Vibrio parahaemolyticus were not detected. Bacterial species identified included the Enterobacter cloacae complex, Citrobacter freundii, Klebsiella pneumoniae spp. pneumoniae, Aeromonas sobria, Vibrio alginolyticus, and Sphingomonas paucimobilis (Table 5). These microorganisms may be grouped into pathogens that are often present in aquatic environments (e.g., Klebsiella pneumoniae spp., Aeromonas sobria, Vibrio alginolyticus, and Sphingomonas paucimobilis). These are pathogens of high priority, as some are antimicrobial resistant and may cause illnesses in humans. In addition, pathogens naturally present in human beings and animals (e.g., Citrobacter freundii and the Enterobacter cloacae complex) are also high-priority pathogens, and their presence should not be taken lightly [48]. The Enterobacter cloacae complex has also proved to be abundant in aquatic environments [49].

Conclusion

The study used conventional culture methods to isolate Salmonella and Vibrio spp. in mussel, oyster, and seawater samples obtained from the Saldanha Bay Harbour. The most probable number (MPN) analysis technique was used for detecting and enumerating faecal coliforms and E. coli in the obtained samples. The identification of species was conducted using the VITEK 2 automated system, which successfully identified species such as the Enterobacter cloacae complex, Citrobacter freundii, Klebsiella pneumoniae spp. pneumoniae, Aeromonas sobria, Vibrio alginolyticus, and Sphingomonas paucimobilis. The correlations examined between MPN counts in seawater, mussels, and oysters, and between physicochemical parameters and rainfall, did not show any significant relationship. However, total MPN count spikes were observed in mussels, oysters, and seawater, which could be ascribed to the rainfall period and winter season, although the spikes did not have a significant impact on the E. coli concentrations, as these remained below the permissible limits. This information may be used as a basis to conduct an in-depth investigation of sources of pollutants. Further studies need to be conducted on the bacterial species identified in this study.

Table 1: Physicochemical parameters of shellfish production areas and rainfall in Saldanha Bay.
Table 2: Faecal coliforms and Escherichia coli in mussel and oyster homogenate.
Table 3: Prevalence of faecal coliforms and Escherichia coli in seawater samples.
Table 5: Bacterial species isolated from mussels and oysters sampling points.
2023-02-13T16:03:47.693Z
2023-02-10T00:00:00.000
{ "year": 2023, "sha1": "d51d86e36d630d9d40992b73d8203522c77be40f", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/are/2023/7856515.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "00fcaa57ad2bf279f49435549b7158b4f8c18ec1", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [] }
16647256
pes2o/s2orc
v3-fos-license
Proteomic Identification of Protein Targets for 15-Deoxy-Δ12,14-Prostaglandin J2 in Neuronal Plasma Membrane

ABSTRACT 15-Deoxy-Δ12,14-prostaglandin J2 (15d-PGJ2) is one of the factors contributing to the neurotoxicity of amyloid β (Aβ), a causative protein of Alzheimer's disease. The type 2 receptor for prostaglandin D2 (DP2) and peroxisome proliferator-activated receptor γ (PPARγ) have been identified as the membrane receptor and the nuclear receptor for 15d-PGJ2, respectively. Previously, we reported that the cytotoxicity of 15d-PGJ2 was independent of DP2 and PPARγ, and suggested that 15d-PGJ2 induced apoptosis through novel specific binding sites for 15d-PGJ2 distinct from DP2 and PPARγ. To relate the cytotoxicity of 15d-PGJ2 to amyloidoses, we performed binding assays with [3H]15d-PGJ2 and specified targets for 15d-PGJ2 associated with cytotoxicity. Across various cell lines, there was a close correlation between the susceptibilities to 15d-PGJ2 and fibrillar Aβ. Specific binding sites of [3H]15d-PGJ2 were detected in rat cortical neurons and human bronchial smooth muscle cells. When the binding assay was performed in subcellular fractions of neurons, specific binding sites of [3H]15d-PGJ2 were detected in the plasma membrane, nuclear and cytosolic fractions, but not in the microsome. A proteomic approach was used to identify protein targets for 15d-PGJ2 in the plasma membrane. By using biotinylated 15d-PGJ2, eleven proteins were identified as biotin-positive spots and classified into three different functional groups: glycolytic enzymes (Enolase 2, pyruvate kinase M1 (PKM1) and glyceraldehyde 3-phosphate dehydrogenase (GAPDH)), molecular chaperones (heat shock protein 8 and T-complex protein 1 subunit α), and cytoskeletal proteins (Actin β, F-actin-capping protein, Tubulin β and Internexin α). GAPDH, PKM1 and Tubulin β are Aβ-interacting proteins. Thus, the present study suggests that 15d-PGJ2 plays an important role in amyloidoses not only in the central nervous system but also in peripheral tissues.

Introduction

Eicosanoids are divided into two groups according to their mechanism of action: the conventional eicosanoids, e.g., prostaglandin D2 (PGD2), and the cyclopentenone-type PGs, e.g., 15-deoxy-Δ12,14-PGJ2 (15d-PGJ2). PGD2 has been considered to be a proinflammatory mediator in inflammatory diseases such as Alzheimer's disease (AD) and asthma. In AD, PGD2 formation increased in the frontal cortex of patients compared with healthy subjects [1]. AD is characterized pathologically by cortical atrophy, neurodegeneration and deposits of amyloid protein in various regions of the brain, such as the cerebral cortex [2]. Amyloid β (Aβ) generated PGD2 from cortical neurons before inflammation [3]. However, the toxicity of PGD2 does not occur via its GTP-binding protein-coupled PGD2 receptors. First, a PGD2 receptor blocker did not inhibit PGD2-induced neuronal cell death [4]. Second, little mRNA of the PGD2 receptor is observed in the rat [5] and human [6] cerebral cortex. Third, few binding sites of [3H]PGD2 were detected in plasma membranes from rat cortices [4]. Fourth, the proportion of specific [3H]PGD2 binding in total binding is much lower (30-40%) than that of [3H]15d-PGJ2 (>80%), although binding sites of PGD2 have been reported in synaptosomes of rat [7] and human brains [6]. Fifth, the LD50 value (8.2 μM) of PGD2 is much higher than its affinity for the PGD2 receptor (dissociation constant = 14 nM) [5]. Finally, PGD2 required a latent time to exert toxicity.
PGD2 was non-enzymatically metabolized to prostaglandin J2 (PGJ2), Δ12-PGJ2 and 15d-PGJ2 [4]. Among the PGD2 metabolites, 15d-PGJ2 exhibited the most potent inflammatory effects [4]. Taken together, PGD2 appears to mediate inflammation via 15d-PGJ2 in the amyloidoses. The surface receptors specific for 15d-PGJ2 have not been identified, and 15d-PGJ2 is believed to be actively transported into cells. It possesses an α,β-unsaturated carbonyl group in the cyclopentane ring that can form covalent adducts with free thiols in proteins by Michael addition. 15d-PGJ2 covalently binds to Cys285 of its nuclear receptor [8], peroxisome proliferator-activated receptor γ (PPARγ) [9,10]. Recently, 15d-PGJ2 has been implicated in antiproliferation independently of PPARγ [11]. Moreover, 15d-PGJ2 inhibits NF-κB-dependent gene expression through covalent modification at Cys179 of IκB kinase [12]. Previously, we found novel binding sites of 15d-PGJ2 on the cell surface [4]. [3H]15d-PGJ2 bound specifically to plasma membranes of cortical neurons. Among the PGD2 metabolites, 15d-PGJ2 exhibited the highest affinity for the specific binding sites. Other eicosanoids and PPAR agonists did not affect the specific binding sites. 15d-PGJ2 regulated cell numbers in primary cultures of rat cortical neurons. The neurotoxicity of 15d-PGJ2 was the most potent among PGD2 and its metabolites, whereas little effect of other eicosanoids and PPAR agonists was detected. In peripheral tissues, 15d-PGJ2 also exhibited toxicity independently of PPARγ. In response to basic fibroblast growth factor, bronchial smooth muscle cells (BSMC) proliferate and remodel the airway in asthma [13]. 15d-PGJ2 inhibits their proliferation in a PPARγ-independent manner [14]. Thus, the identification of cell surface targets for 15d-PGJ2 is required to clarify how 15d-PGJ2 induces cell toxicity and is involved in amyloidoses. In the present study, we identified cell surface targets for 15d-PGJ2 in cortical neurons. In general, the glycolytic enzymes, molecular chaperones and cytoskeletal proteins identified here as membrane targets for 15d-PGJ2 are known to localize in the cytosol, but their roles on the cell surface have not been elucidated sufficiently. Here, we propose a hypothetical role for the membrane targets of 15d-PGJ2 in cell toxicity and amyloidoses.

Tissue Cultures. All procedures were conducted in accordance with NIH guidelines concerning the Care and Use of Laboratory Animals and with the approval of the Animal Care Committee of Himeji Dokkyo University. Rat cortical neurons, human BSMC, human hepatocytes and human dermal fibroblasts were cultured as previously reported [15]. Cerebral cortices of day-19 Sprague-Dawley rat embryos were dissociated in isotonic buffer with 4 mg/ml trypsin and 0.4 mg/ml deoxyribonuclease I. Cells were plated at a density of 2.5 × 10^5 cells/cm² on poly-L-lysine-coated dishes in conditioning medium, Leibovitz's L-15 medium supplemented with 5% FBS and 5% horse serum, at 37 °C in 5% CO2 and 9% O2. On day 1 after plating, cultures were treated with 0.1 mM arabinosylcytosine C. On day 4, cortical cultures were immunostained with anti-MAP2 (specific for neurons), anti-GFAP (for astrocytes), and anti-microglial antigen (OX-42). Cultures prepared by this method consisted of approximately 95% neurons.
Human BSMC were cultured at a density of 3.5 × 10^3 cells/cm² on 48-well plates in Molecular, Developmental, and Cellular Biology medium supplemented with 5% FBS, 50 μg/ml gentamicin and 50 ng/ml amphotericin. Human hepatocytes were cultured at a density of 5 × 10^4 cells/cm² on 48-well plates in CS-C medium (Applied Cell Biology Research Institute) supplemented with 10% FBS. Human dermal fibroblasts were cultured at a density of 5 × 10^4 cells/cm² on 48-well plates in DMEM supplemented with 10% FBS, 50 units/ml penicillin, and 50 μg/ml streptomycin.

Aggregation Assessment of fAβ. A stock solution of fibrillar Aβ(25-35) (fAβ) was prepared by dissolving Aβ at 1 mM in deionized water and incubating it at 37 °C for 2-5 days to aggregate the peptide; the stock was stored at −20 °C until use [16]. The aggregation state of fAβ was assessed in two ways. First, light microscopy was used to identify the presence of precipitated peptides both in stock solutions and after their addition to tissue culture wells; the observations were confirmed by three observers. Second, the aggregation state of fAβ was assessed by migration patterns after sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE). Samples of fAβ stock solutions were added to reducing buffer, heated at 100 °C for 3 min, and electrophoresed on 15% SDS-PAGE at 70 V.

Cell Viability. Two different methods were employed for the assessment of cell viability, as previously reported [15]. First, the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide (MTT) reduction assay, reflecting mitochondrial succinate dehydrogenase activity, was employed. Second, residual cells were counted according to morphologic criteria: neurons with intact neurites and a smooth, round soma were considered viable, whereas those with degenerated neurites and an irregular soma were considered nonviable. BSMC with extended cell bodies and a bright phase-contrast appearance were considered viable, whereas those with shrunken and round cell bodies were considered nonviable.

Cell Fractionation. Cell fractionation was performed as previously reported [17]. Cerebral cortices from rat brains were homogenized in 3 volumes of ice-cold STEA solution (0.25 M sucrose, 5 mM Tris-HCl (pH 7.5), 1 mM EGTA and 50 kallikrein units/ml aprotinin). The homogenate was filtered through three meshes and centrifuged at 700 × g for 10 min. Fractionation of nuclear and plasma membrane: the pellet was resuspended in 120 ml of STEA solution by gentle homogenization, and the resuspension was dispersed in 1080 ml of isosmotic Percoll solution (15.7% Percoll, 0.25 M sucrose, 1 mM EGTA, 50 kallikrein units/ml aprotinin and 10 mM Tris-HCl (pH 7.5)). The mixture was centrifuged at 35,000 × g for 30 min. The resulting pellet was suspended in HEA solution (50 mM Hepes-NaOH (pH 7.4), 1 mM EGTA and 50 kIU/ml aprotinin) as the nuclear fraction. The second band from the surface of the supernatant was collected, washed by dilution with 2-3 volumes of HEA solution and centrifuged at 10,000 × g for 30 min. The pellet was suspended in HEA solution as the plasma membrane fraction and stored in liquid nitrogen until use [18]. Fractionation of cytosol and microsome: the supernatant was centrifuged at 7,000 × g for 10 min, and the resulting supernatant was recentrifuged at 100,000 × g for 1 h. The pellet was used as the microsomal fraction and the supernatant as the cytosolic fraction.

Binding Assay of [3H]15d-PGJ2. Binding assays of [3H]15d-PGJ2 were performed as previously reported [18].
The standard reaction mixture contained 10 nM [3H]15d-PGJ2, 50 mM Tris-HCl buffer (pH 8.0), 100 mM NaCl and plasma membranes (10 μg) in a total volume of 100 μl. Incubation was initiated by addition of the reaction mixture to the plasma membranes and was carried out at 4 °C for 24 h. We determined non-specific binding by performing incubations with [3H]15d-PGJ2 in the presence of 100 μM unlabeled 15d-PGJ2. The specific binding was calculated by subtraction of the non-specific binding from the total binding. Data are expressed as means ± standard error of the mean (S.E.M.) (n = 4).

Identification of 15d-PGJ2-Targeted Proteins. Gel pieces were washed twice in 50 mM ammonium bicarbonate containing 50% acetonitrile for 10 min. They were then dried in a block incubator BI-516S (ASTEC Co., Ltd.; Tokyo, Japan) at 95 °C for 10 min. Each sample was proteolyzed overnight at 37 °C with 10 μl of 1 mM ammonium bicarbonate containing 200 ng trypsin. The peptides in each gel were extracted with 50% acetonitrile containing 0.1% TFA, followed by sonication for 15 min. The supernatant was collected, and peptides were further extracted with 75% acetonitrile containing 0.1% TFA, followed by sonication for 15 min. Peptide extracts were concentrated to ~10 μl using a SpeedVac concentrator. They were then desalted with ZipTips (Millipore Co.) and mixed with an equal volume of 5 mg α-cyano-4-hydroxycinnamic acid (Shimadzu GLC Ltd.; Tokyo, Japan) dissolved in 0.5 ml of 50% acetonitrile containing 0.1% TFA. One-microliter samples were spotted onto a matrix-assisted laser desorption/ionization (MALDI) plate. After air drying, spots were identified by MALDI time-of-flight mass spectrometry (settings: gate, blank 900; pulsed extraction, 2500; scenario, advanced; profile average, all profiles; peak width, 2 channels; smoothing method, Gaussian; smoothing filter width, 2 channels; baseline filter width, 16 channels; peak detection method, threshold apex; threshold offset, 0.500 mV; monoisotopic peak picking; minimum mass, 500; maximum mass, 3500; resolution of the MS analyzer, 1,000 (0-1 kDa), 5,000 (1-2 kDa) and 10,000 (>2 kDa); minimum isotope, 1; maximum intensity variation, 90; overlapping distributions minimum peak percent, 10). Proteins were identified with the MASCOT (Matrix Science, London) searching algorithms using the Swiss-Prot database. Probability-based MOWSE scores were estimated by comparison of search results against an estimated random match population and were reported as −10 × log10(p), where p is the absolute probability. Scores greater than 50 were considered significant, meaning that for scores higher than 50 the probability that the match is a random event is lower than 0.05. The Swiss-Prot sequence versions were: heat shock cognate 71 kDa protein (Hspa8), 1; Internexin α, 2; Tubulin β2b, 1; glial fibrillary acidic protein (GFAP).

Western Blotting. The standard reaction mixture contained 1 μM biotinylated 15d-PGJ2, 50 mM Tris-HCl buffer (pH 8.0), 100 mM NaCl and plasma membranes (400 μg) in a total volume of 4 ml. Incubation was initiated by addition of the reaction mixture to the plasma membranes and was carried out at 4 °C for 24 h. Membrane lysates were incubated with Streptavidin Agarose beads (Invitrogen, Carlsbad, CA) at room temperature for 30 min. The beads were rinsed three times with lysis buffer. The proteins were eluted by boiling the beads in Laemmli sample buffer and analysed by SDS-PAGE followed by immunodetection with antibodies to GAPDH (rabbit polyclonal, Abcam). This procedure was followed by the addition of horseradish peroxidase-conjugated secondary antibody and ECL reagents.
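The specific-binding arithmetic defined above (specific = total − non-specific, with non-specific measured in the presence of excess unlabeled 15d-PGJ2) is summarized in the sketch below (Python; the dpm values are illustrative, not measured data):

```python
import numpy as np

# Illustrative quadruplicate counts (dpm), not measured data:
total = np.array([5200.0, 5150.0, 5300.0, 5250.0])        # [3H]15d-PGJ2 alone
nonspecific = np.array([1100.0, 1050.0, 1150.0, 1120.0])  # + 100 uM cold 15d-PGJ2

specific = total.mean() - nonspecific.mean()
percent_specific = 100.0 * specific / total.mean()
# S.E.M. of the difference of the two means:
sem = np.sqrt(total.var(ddof=1) / total.size
              + nonspecific.var(ddof=1) / nonspecific.size)
print(f"specific binding: {specific:.0f} +/- {sem:.0f} dpm "
      f"({percent_specific:.0f}% of total)")   # ~79% here, cf. >80% in neurons
```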
This procedure was followed by the addition of horseradish peroxidase-conjugated secondary antibody and ECL reagents.

Statistical analysis
Data are given as means ± S.E.M. (n = number of observations). Data were analyzed statistically by Student's unpaired t-test for comparison with the control group, and data from the various inhibitor and blocker groups were analyzed by two-way ANOVA followed by Dunnett's test for comparison with the PG group (15d-PGJ2, Δ12-PGJ2, PGJ2, PGD2 and 15d-PGD2). The half-maximal inhibitory concentration (IC50), the half-maximal lethal dose (LD50) and the half-maximal lethal time (LT50) were calculated with Microsoft Excel Fit.

Susceptibilities of various cell lines to amyloid protein
Sensitivities of various cell lines to amyloid protein were examined in the central nervous system and peripheral tissues. Cortical neurons, BSMC, hepatocytes and dermal fibroblasts were exposed to fAβ or vehicle (deionized water) for 48 h, and their viability was quantified by MTT-reducing activity. In comparison with vehicle, fAβ significantly reduced the viability of cortical neurons and BSMC at 10 μM. On the other hand, fAβ did not significantly affect the viability of hepatocytes and dermal fibroblasts (Figure 1A). Among the tested cell lines, amyloid protein inhibited cell viability in a concentration-dependent manner in neuronal cells and BSMC (Figure 1B).

Sensitivities of various cell lines to 15d-PGJ2
We examined susceptibilities to 15d-PGJ2 in cortical neurons, BSMC, hepatocytes and dermal fibroblasts. These cell lines were exposed to 15d-PGJ2 or vehicle (0.1% ethanol), and their viability was quantified by MTT-reducing activity. In comparison with vehicle, 15d-PGJ2 significantly reduced the viability of cortical neurons and BSMC at 10 μM. On the other hand, 15d-PGJ2 did not significantly affect the viability of hepatocytes and dermal fibroblasts (Figure 2A). Like amyloid protein, 15d-PGJ2 reduced the viability of neuronal cells and BSMC, but not of hepatocytes or dermal fibroblasts, in a concentration-dependent manner (Figure 2B). In control cultures, neurons had extended neurites and smooth, round cell bodies (Figure 3A). On the other hand, in 15d-PGJ2-treated cultures some cell bodies shrank and lost their bright phase-contrast appearance. By 24 h, there were markedly fewer cells, and extensive debris was seen attached to the substratum (Figure 3B). In control cultures, BSMC extended their cell bodies and exhibited a bright phase-contrast appearance (Figure 3C). When BSMC were cultured, we confirmed that the cell density increased (data not shown). This increment was significantly prevented by 10 μM 15d-PGJ2 (Figure 3B and 3D). In 15d-PGJ2-treated cultures, some cell bodies shrank and became round (Figure 3D). Thus, there was a close correlation between susceptibilities to 15d-PGJ2 and to amyloid protein, suggesting an involvement of 15d-PGJ2 in amyloid protein-induced inflammation.

Effects of PGD2 and its metabolites on the viability of cortical neurons and BSMC
The MTT assay is a colorimetric assay for measuring the activity of enzymes that reduce MTT or related dyes to formazan dyes. These reductions take place only when reductase enzymes in mitochondria are active, and the conversion is therefore often used as a measure of viable (living) cells. Previously, we reported a linear relationship between cell density and MTT-reducing activity in cortical neurons [15].
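The IC50, LD50 and LT50 values above were obtained with Microsoft Excel Fit; as a rough, non-authoritative sketch of how such a value can be estimated from concentration-response data, the snippet below fits a four-parameter logistic (Hill) model with SciPy. All data points, initial guesses and parameter names here are hypothetical illustrations, not values from the study.

```python
# Minimal sketch: estimating an IC50 from concentration-response data with
# a four-parameter logistic (Hill) model. All data points below are
# hypothetical placeholders, not values from the paper.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, top, bottom, ic50, hill):
    """Four-parameter logistic: response as a function of concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical viability data (% of vehicle control) vs. concentration (uM)
conc = np.array([0.01, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
resp = np.array([98.0, 95.0, 88.0, 52.0, 20.0, 8.0, 5.0])

# Initial guesses: full/zero response, mid-range IC50, Hill slope of 1
p0 = [100.0, 0.0, 1.0, 1.0]
params, _ = curve_fit(four_pl, conc, resp, p0=p0, maxfev=10000)
top, bottom, ic50, hill = params
print(f"IC50 = {ic50:.2f} uM, Hill slope = {hill:.2f}")
```

The same fit applied to survival-versus-dose or survival-versus-time data would yield the LD50 or LT50, respectively.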
As well as the MTT-reducing activity, the cell density was reduced by 10 μM 15d-PGJ2 in cortical neurons and BSMC (Figure 4A). The MTT-reduction assay is also established for various cell types other than neurons, enabling accurate, straightforward quantification of changes in their cell densities. In most experiments, the neurotoxicity of 15d-PGJ2 was evaluated at 10 μM for 24 h in the presence of serum. Since PGD2 can be non-enzymatically metabolized to PGJ2, Δ12-PGJ2 and 15d-PGJ2 in the present culture medium [4], it is very difficult to compare their neurotoxic potencies. When serum was removed from the culture medium to decelerate the metabolism of PGD2, we succeeded in detecting their neurotoxic hierarchy by treatment with each PG at 10 μM for 8 h. We observed that serum deprivation did not induce neuronal cell death within 8 h. The growth-inhibitory effect of PGD2 and its metabolites at 10 μM followed the order 15d-PGJ2 > Δ12-PGJ2 > PGJ2 ≫ PGD2 (Figure 4B). On the other hand, 15-deoxy-Δ12,14-PGD2 (15d-PGD2) did not affect the MTT-reducing activity of neuronal cells. In BSMC, 15d-PGJ2 significantly decreased MTT-reducing activity. Although Δ12-PGJ2 showed a tendency to decrease MTT-reducing activity, no significant inhibitory effect was detected for 15d-PGD2, Δ12-PGJ2, PGJ2 or PGD2.

Specific binding sites of 15d-PGJ2 in the plasma membranes of cortical neurons and BSMC
Cortical neurons were fractionated into nuclear, plasma-membrane, cytosolic and microsomal fractions. The binding assay of [³H]15d-PGJ2 was performed at room temperature for 1 h. The ratios of specific binding of [³H]15d-PGJ2 to total binding were 78%, 66%, 45% and 4% in the plasma-membrane, nuclear, cytosolic and microsomal fractions, respectively (Figure 5A). Previously, we reported the binding assay of [³H]15d-PGJ2 in the plasma membrane under optimal conditions at 4°C for 24 h [4]. The ratio of specific binding of [³H]15d-PGJ2 to total binding was more than 80% in cortical neurons. The inhibitory effect of 15d-PGJ2-related compounds at 100 μM followed the order 15d-PGJ2 > Δ12-PGJ2 > PGJ2 ≫ PGD2 (Figure 5B). 15d-PGJ2 displaced the specific binding of [³H]15d-PGJ2 in a concentration-dependent manner (Figure 5B). In BSMC, 15d-PGJ2 also inhibited the specific binding of [³H]15d-PGJ2 in a concentration-dependent manner (Figure 5B). The IC50 value of 15d-PGJ2 for the specific binding of [³H]15d-PGJ2 in BSMC was 31 μM, approximately 20-fold higher than that in neuronal cells (1.6 μM). The binding sites of 15d-PGJ2 in cortical neurons could also be recognized by Δ12-PGJ2 and PGJ2, whereas those in BSMC were recognized specifically by 15d-PGJ2 (Figure 5B). In the two cell types, the effects of 15d-PGJ2 and its precursors on MTT-reducing activity paralleled the affinities of these ligands for the specific membrane binding sites of 15d-PGJ2.

Isolation of Targets for 15d-PGJ2
To identify target proteins of 15d-PGJ2, membrane proteins were labeled with biotinylated 15d-PGJ2 under serum-free conditions to reduce non-specific binding. Under these conditions, biotinylated 15d-PGJ2 induced neuronal cell death in a concentration-dependent manner, as did 15d-PGJ2. Their LD50 values were both approximately 1 μM (Figure 6A). Biotinylated 15d-PGJ2 suppressed the extension of neurites and shrank cell bodies in a fashion similar to 15d-PGJ2 (Figure 6B).
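As a small worked illustration of the binding arithmetic used in these experiments (specific binding = total binding − nonspecific binding, with displacement data expressed as a percentage of specific binding before reading off an IC50), consider the sketch below; all counts are hypothetical placeholders.

```python
# Minimal sketch of radioligand-binding bookkeeping: specific binding is
# total binding minus nonspecific binding (measured with excess unlabeled
# ligand), and displacement data are expressed as % of specific binding.
# All numbers below are hypothetical placeholders.
import numpy as np

total_cpm = np.array([5200, 4800, 3900, 2600, 1500, 900])  # with competitor
nonspecific_cpm = 600          # e.g. measured with excess unlabeled 15d-PGJ2
no_competitor_cpm = 5400       # total binding, no competitor

specific_max = no_competitor_cpm - nonspecific_cpm
specific = total_cpm - nonspecific_cpm
percent_specific = 100.0 * specific / specific_max
print(np.round(percent_specific, 1))
# The competitor concentration at which percent_specific crosses 50% is the
# IC50; in the study this was ~1.6 uM for neurons and ~31 uM for BSMC,
# a roughly 20-fold difference.
```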
Next, neuronal plasma membranes were incubated with 1 μM biotinylated 15d-PGJ2 in the absence or presence of 15d-PGJ2 at the indicated concentrations. Then, membrane proteins modified with biotinylated 15d-PGJ2 were separated by two-dimensional gel electrophoresis. The patterns given by western blot analysis probed with anti-biotin antibody–HRP and by SYPRO Ruby fluorescence staining are shown in Figure 7. Several biotinylated 15d-PGJ2–protein conjugates were detected as biotin-positive spots (Figure 7A and 7B). 15d-PGJ2 inhibited the modification of proteins with biotinylated 15d-PGJ2 in a concentration-dependent manner (Figure 7C and 7D). At 100 μM, 15d-PGJ2 almost completely eliminated the biotin-positive spots (Figure 7D). After superimposition of both patterns, the SYPRO Ruby-stained proteins that coincided with the biotin-positive spots were excised from the two-dimensional gels (Figure 7E), subjected to trypsin digestion, and then successfully analyzed by MALDI-TOF MS fingerprint analysis (Figure 8A).

Identification of Targets for 15d-PGJ2
Spot 8, corresponding to a 50 kDa 15d-PGJ2–protein conjugate, was one of the targets of modification by biotinylated 15d-PGJ2, as seen in Figure 8. Using MASCOT, the probability-based MOWSE score was 267 for GFAP (p<0.05) (Figure 8B), with 28 peptide matches (error ±0.02%) (Figure 9), representing 56% sequence coverage (Figure 8C). [Table 2 caption: Spots excised from the gel shown in Figure 7E were identified by tryptic digestion and MALDI-TOF MS. Shown are the spot number, the name of the identified protein, the accession number in the Swiss-Prot database, the theoretical molecular mass and isoelectric point, the probability-based MOWSE score, the number of peptides matched according to the Mascot database, and the percentage of the protein sequence covered by the identified peptides. doi:10.1371/journal.pone.0017552.t002] Table 2 lists the identities of 22 protein spots, which could be identified in three independent experiments. The multiple gel spots for a single identification could be ascribed to posttranslational modifications such as phosphorylation. For example, spot 6 could contain 3 phosphorylation sites (Thr129, Thr130 and Tyr283), with a probability-based MOWSE score of 59, 16 peptide matches and 32% sequence coverage. Spot 7 could contain 1 phosphorylation site (Tyr283), with a probability-based MOWSE score of 188, 31 peptide matches and 51% sequence coverage. On the other hand, no phosphorylation site was detected for spot 8. The identified proteins fall into several different functional classes, including glycolytic enzymes (Enolase 1, Enolase 2, GAPDH and PKM1), molecular chaperones (Hspa8 and TCP1α) and cytoskeletal proteins (Tubulin β2b, Actin β, Internexin α, GFAP and CapZα2). Next, we attempted to detect the 15d-PGJ2–target adducts in the plasma membranes exposed to biotinylated 15d-PGJ2 by streptavidin–agarose pull-down assays. Western blotting revealed that 15d-PGJ2 interacted with Actin β, Enolase 2, GAPDH, Internexin α, PKM1, TCP1α and Tubulin β2b (Figure 10). Since plasma membranes were prepared from adult cerebral cortices including neurons and astrocytes, non-neuronal Enolase 1 and GFAP appeared to be derived from astrocytes.
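For context on the identification statistics quoted above, the sketch below illustrates the probability-based scoring (−10·log10 p) and a sequence-coverage computation; the peptide spans and protein length are hypothetical.

```python
# Minimal sketch of the identification statistics used above: a Mascot-style
# probability-based score is -10*log10(p), and sequence coverage is the
# fraction of residues covered by matched peptides. Peptide spans below are
# hypothetical placeholders.
import math

def mowse_score(p):
    """Probability-based score: -10*log10(p)."""
    return -10.0 * math.log10(p)

def p_from_score(score):
    return 10.0 ** (-score / 10.0)

print(f"score for p=0.05: {mowse_score(0.05):.1f}")  # ~13; the practical
# significance threshold (50 here) is higher because it accounts for the
# number of candidate sequences in the searched database.

def sequence_coverage(protein_length, matched_spans):
    """Coverage = % of residues inside any matched peptide span."""
    covered = set()
    for start, end in matched_spans:  # 1-based, inclusive spans
        covered.update(range(start, end + 1))
    return 100.0 * len(covered) / protein_length

spans = [(5, 20), (18, 34), (60, 85)]  # hypothetical tryptic peptides
print(f"coverage: {sequence_coverage(100, spans):.0f}%")
```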
Regions homologous to the binding site of 15d-PGJ2 in targeted proteins
Several lines of evidence indicate the covalent binding sites of 15d-PGJ2 in previously reported target proteins. To ascertain whether cysteine residues in the present target proteins correspond to those covalent binding sites, homologous regions were searched (Table 3). As query sequences, we used the amino acid sequences of the previously reported target proteins in which the covalent binding sites of 15d-PGJ2 have been identified: Cys374 of Actin β (P60711) [21], Cys269 of c-Jun (NP_068607) [22], Cys184 of H-Ras (NP_001091711) [23], Cys179 of IκB kinase β (Q9QY78) [8], Cys285 of PPARγ (NP_619725) [8], and Cys35 and Cys69 of thioredoxin (NP_446252) [24]. Hspa8 contained Cys603, corresponding to Cys179 of IκB kinase β. The amino acid sequence of Hspa8 from Lys597 to Leu610 was homologous to that of IκB kinase β from Lys171 to Leu186. Based on the comparison between the two sequences, the initial score, the optimal score and the identity were 15, 29 and 31%, respectively.

Discussion
Cortical neurons and BSMC sensitive to amyloid protein were susceptible to 15d-PGJ2. [³H]15d-PGJ2 bound specifically to both cell types, suggesting that 15d-PGJ2 plays an important role in amyloidoses not only in the central nervous system but also in peripheral tissues. The specific binding sites of [³H]15d-PGJ2 were detected in the nuclear, cytosolic and plasma-membrane subcellular fractions of neurons, but not in the microsomal fraction. 15d-PGJ2 binds to the nuclear receptor PPARγ [9] and the cytosolic protein Ras [23]. In peripheral tissues including nerves, the chemoattractant receptor-homologous molecule expressed on Th2 cells has been identified as a type 2 receptor for PGD2 (DP2) and has been reported to also be a membrane receptor for 15d-PGJ2 [20]. In contrast to its mRNA, little DP2 protein has yet been detected in the central nervous system. Furthermore, we ruled out the possibility that the specific binding site of 15d-PGJ2 in the plasma membrane of cortical neurons was DP2. First, few binding sites of [³H]PGD2 are detected in plasma membranes from rat cortices [4]. Although binding sites of [³H]Δ12-PGJ2 and [³H]PGJ2 are also detected in plasma membranes, they are displaced most potently by 15d-PGJ2 among PGD2 metabolites [4]. Second, 15d-PGD2, a DP2-selective agonist, does not affect the cell number of neuronal cells or BSMC (Figure 3B and Table 1). Third, the LD50 value of PGD2 (>10 μM) is much higher than its affinity for the PGD2 receptor (dissociation constant = 8.8 nM) [20]. In the present study, we identified membrane proteins targeted by 15d-PGJ2, including glycolytic enzymes, molecular chaperones and cytoskeletal proteins (Table 2 and Figure 10). GAPDH, Enolase 1, Enolase 2 and PKM1 were previously believed to perform exclusively 'house-keeping' glycolysis. GAPDH is not only found in the cytoplasm, but is also closely associated with the plasma membrane [25]. GAPDH catalyses the conversion of glyceraldehyde 3-phosphate to D-glycerate 1,3-bisphosphate. A reduction in glycolysis precedes cognitive dysfunction and is therefore believed to be an important early event in AD development [26]. Apart from its glycolytic role, overexpression of this particular membrane-associated GAPDH has a direct role in neuronal apoptosis [27] (Figure 11). GAPDH is located in amyloid plaques [28], interacts with the C-terminal region of the amyloid precursor protein (APP) [29], and co-precipitates with fAβ [30].
Furthermore, GAPDH associates tightly with Enolase 2 and Hspa8, making up the trans-plasma-membrane oxidoreductases (PMOs), the extracellular redox sensor that signals external oxidative stress to the cell [31]. Enolase 1 and Enolase 2 belong to a superfamily of abundantly expressed carbon-oxygen lyases known for catalyzing the conversion of 2-phosphoglycerate to phosphoenolpyruvate. Ubiquitous Enolase 1 and neuron-specific Enolase 2 exist as monomers and also as dimers on the neuronal membrane surface [32]. Recent studies have demonstrated that enolases possess regulatory functions distinct from glycolysis in the brain [33]. Enolase 1 is one of the most consistently up-regulated and oxidatively modified proteins in the brains of subjects with early-onset AD [34]. Enolase 1 and Enolase 2 are autoantigen targets in post-streptococcal autoimmune disease of the central nervous system (Figure 11). The anti-enolase antibodies induce neuronal apoptosis [35]. Enolase 2 is part of the neuronal PMOs, and the anti-Enolase 2 antibody can inhibit PMO activity on the plasma membrane [31]. Pyruvate kinase transfers a phosphate from phosphoenolpyruvate to ADP. Pyruvate kinase has also been defined as an autoantigen, and its antibodies induce neuronal apoptosis [35] (Figure 10). A significant increase in pyruvate kinase activity is found in the frontal and temporal cortex of AD brains [36]. Pyruvate kinase is elevated in cortical neurons undergoing Aβ-mediated apoptosis [37]. Pyruvate kinase is co-precipitated with fAβ [30]. Biotinylated 15d-PGJ2 binds to PKM1 in mesangial cells [38], supporting our results. Hspa8 is the DnaK-type molecular chaperone heat shock protein 72-ps1 in the PMO complex [31]. It is located in the cytoplasm [39], but nuclear localization and accumulation near or at the plasma membrane in stressed cells and in synaptosomal membranes have been observed [40]. Hspa8 binds to the cytoplasmic domain near the post-transmembrane region of APP (Figure 11). TCP1α is a selective molecular chaperone in tubulin biogenesis, by which nascent tubulin subunits are bound to TCP1α and released in assembly-competent forms. Cytoskeletal proteins are deficient and aggregated in AD. When TCP1α is related to its natural and specific substrate tubulin β, the ratio is significantly decreased in the temporal, frontal and parietal cortex and in the thalamus of AD patients [41]. [Figure 11 caption: Hypothetical roles of targets for 15d-PGJ2 in amyloidoses. Membrane target proteins for 15d-PGJ2 were glycolytic enzymes (Enolase 2, PKM1 and GAPDH), molecular chaperones (Hspa8 and TCP1α), and cytoskeletal proteins (Actin β, CapZα2, Tubulin β and Internexin α). These proteins were factors associated with the two hallmarks of AD, the amyloid plaque and the neurofibrillary tangle. Beyond classical roles as glycolytic enzymes and molecular chaperones, GAPDH, Enolase 2 and Hspa8 appear to form the PMO complex and contribute to the generation of reactive oxygen species by 15d-PGJ2. doi:10.1371/journal.pone.0017552.g011] [Table 3 title: Regions homologous to the binding site of 15d-PGJ2 in targeted proteins.] Relatively decreased molecular chaperoning of tubulin β by TCP1α is suggested to lead to misfolded tubulin aggregating and accumulating in plaques and tangles, hallmarks of AD (Figure 11). Tubulin has been identified as a membrane component of synaptosomes and various plasma membranes. Both tubulin α and β have been shown to associate with the amyloid deposits of familial amyloidosis [42] and to bind to the Aβ sequence of APP [43].
Moreover, tubulin β is retained by a monomeric Aβ column [44] and is co-precipitated with fAβ [30] (Figure 11). The tau protein interacts with tubulin to stabilize microtubules and promote tubulin assembly into microtubules. PGJ2 induces caspase-mediated cleavage of tau, generating Δtau, an aggregation-prone form known to seed tau aggregation prior to neurofibrillary tangle formation [45]. Hyperphosphorylation of the tau protein (tau inclusions) can result in the self-assembly of tangles of paired helical filaments and straight filaments, which are involved in the pathogenesis of AD [46]. Biotinylated 15d-PGJ2 binds to tubulin β in mesangial cells [38], supporting our results. AD-linked human Aβ synergistically enhances the ability of wild-type tau to promote alterations in the actin cytoskeleton (Figure 11) and neurodegeneration [47]. The ability of globular actin to rapidly assemble and disassemble into filaments is critical to many cell behaviors. F-actin-capping protein subunit α-2 (CapZα2) regulates growth of the actin filament by capping the barbed end of growing actin filaments (Figure 11). Members of the actin-depolymerizing factor (ADF)/cofilin family are important regulators of actin dynamics. The ability of ADF and cofilin to increase actin filament dynamics is inhibited by their phosphorylation on Ser3 by LIM kinase 1 and other kinases [48]. Aβ-induced dystrophy requires LIM kinase 1-mediated phosphorylation of ADF/cofilin and the remodeling of the actin cytoskeleton [49]. Biotinylated 15d-PGJ2 covalently binds to Actin β in various cells other than neurons [38], supporting our results in neurons. Internexin α is classified as a type IV neuronal intermediate filament. Internexin α also co-assembles with the neurofilament (NF) triplet proteins [50]. The protein is expressed by most, if not all, neurons as they commence differentiation, and its expression precedes that of the NF triplet proteins [51]. Although an interaction of Internexin α with amyloid proteins has not yet been reported, Internexin α-positive (but not NF triplet-positive) ring-like reactive neurites are present in end-stage AD cases, indicating the relatively late involvement of neurons that selectively contain Internexin α (Figure 11). Another intermediate filament protein, GFAP, is expressed exclusively in astrocytes. Aβ increased the total number of activated astrocytes and elevated the expression of GFAP through Aβ-induced spontaneous calcium transients [52]. 15d-PGJ2 suppresses the inflammatory response by inhibiting NF-κB signaling at multiple steps, as well as by inhibiting the PI3K/Akt pathway independently of PPARγ, in primary astrocytes [53]. In conclusion, the membrane target proteins for 15d-PGJ2 were factors associated with the two hallmarks of AD, the amyloid plaque and the neurofibrillary tangle. Beyond their classical roles as glycolytic enzymes and molecular chaperones, GAPDH, Enolase 2 and Hspa8 can form the antioxidant PMO complex that responds to extracellular oxidative stress. 15d-PGJ2 might regulate the activity of PMOs during inflammation and degeneration. Apart from glycolysis, pyruvate kinase and enolase might be involved in 15d-PGJ2-induced apoptosis as autoantigens. Thus, the present study sheds light on the ecto-enzymes targeted by 15d-PGJ2 as a prelude to identifying the death receptor stimulated by 15d-PGJ2 or the antioxidant complex regulated by 15d-PGJ2.
2014-10-01T00:00:00.000Z
2011-03-18T00:00:00.000
{ "year": 2011, "sha1": "f852954b2d8ac8514318a3002561d3005defae66", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0017552&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f852954b2d8ac8514318a3002561d3005defae66", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
54460215
pes2o/s2orc
v3-fos-license
Imaging retinal melanin: a review of current technologies

The retinal pigment epithelium (RPE) is essential to the health of the retina and the proper functioning of the photoreceptors. The RPE is rich in melanosomes, which contain the pigment melanin. Changes in RPE pigmentation are seen with normal aging and in diseases such as albinism and age-related macular degeneration. However, most techniques used to this day to detect and quantify ocular melanin are performed ex vivo and are destructive to the tissue. There is a need for in vivo imaging of melanin both at the clinical and pre-clinical level to study how pigmentation changes can inform disease progression. In this manuscript, we review in vivo imaging techniques such as fundus photography, fundus reflectometry, near-infrared autofluorescence imaging, photoacoustic imaging, and functional optical coherence tomography that specifically detect melanin in the retina. These methods use different contrast mechanisms to detect melanin and provide images with different resolutions and fields of view, making them complementary to each other.

Background
Melanin is naturally present in the eye within the choroid, iris, and retinal pigment epithelium (RPE), a single layer of epithelial cells located posterior to the photoreceptors in the retina. The RPE plays an important role in the overall health of the retina, transporting nutrients from the blood vessels in the choriocapillaris to the photoreceptors and disposing of retinal waste and metabolic end products [1]. An interruption in these functions can lead to degeneration of the retina, loss of the photoreceptors and eventually blindness. The melanin in the RPE is thought to play a protective role, absorbing excess light from the photoreceptors and protecting the retina from light-generated reactive oxygen species [2][3][4]. However, melanin in the RPE does not regenerate, and the damage accumulated over time from light exposure could affect the overall health of the RPE [2,5]. In the past, most methods available to researchers to study melanin in the RPE were destructive to the tissue and labor-intensive, which has led to a limited understanding of the role of melanin in the intact living eye. To further study the RPE, new imaging techniques have been developed to specifically detect and quantify melanin at the clinical and pre-clinical levels in patients and animal models. Eye imaging has multiple roles, both to improve patient care and to perform basic research. Clinical imaging is used in patients to screen and diagnose eye conditions, plan and monitor ocular surgeries and evaluate treatment response [6,7]. In animal models, non-invasive imaging methods enable observation of how different ocular structures interact with each other in a living system. Disease progression can be studied over time in the same animal, which can lead to the identification of new disease markers. Alternatively, new drugs can be dynamically evaluated, which could accelerate clinical translation. Fundus photography, scanning laser ophthalmoscopy (SLO) and optical coherence tomography (OCT) are all non-invasive imaging techniques that are part of the toolset for clinicians and researchers to image the eye. These techniques could be adapted to image melanin in the living eye and improve our knowledge of the RPE. Changes in retinal pigmentation normally happen with aging [8] and are present in many ocular diseases.
Albinism, for example, is characterized by various degrees of ocular hypopigmentation and is associated with low visual acuity and other visual abnormalities [2]. Retinitis pigmentosa, another example, is a group of genetic disorders that cause progressive visual loss and involve both photoreceptor degeneration and RPE cell loss [9]. Finally, age-related macular degeneration (AMD) is the leading cause of vision loss in adults above 65 years old in the US and involves dysfunction of the RPE and changes in pigmentation [10]. At early stages of the disease, AMD is usually characterized by changes in pigmentation and the presence of drusen. At later stages, "dry" AMD is characterized by regions of atrophy of the RPE and photoreceptors, while in "wet" AMD neovascular lesions invade the retina from the choroid and lead to vascular leakage, scarring and central vision loss [11]. In dry AMD, hyperpigmentation in the RPE (potentially from dysfunction of the RPE cells) followed by hypopigmentation (from the loss of RPE cells) could appear before dysfunction in the photoreceptors or choriocapillaris and could be predictive of the progression of the disease [11]. In wet AMD, it is possible that loss of the choriocapillaris causes the RPE cells to become hypoxic and to produce angiogenic substances, resulting in the formation of neovascular lesions [11]. To this day, there is no cure for AMD and vision loss cannot be reversed, although anti-VEGF treatment can slow down or stop disease progression [12][13][14]. Clinical imaging in the eye is already used to facilitate diagnosis, evaluate treatment response and reduce the need for repeated treatment in AMD [15,16]. However, changes in pigmentation are still difficult to quantify since many non-invasive measurements are highly dependent on the optical properties of the eye and on the imaging parameters used. As a result, there are currently no standard in vivo techniques to quantify melanin levels in the eye. The aim of this manuscript is to explore the different ways melanin can be imaged in the living eye. It is believed that light damage accumulated over time reduces melanin's ability to protect the retina. Imaging and quantifying melanin in the eye could provide information about the overall health of the RPE and of neighboring structures. As a result, melanin imaging could play a role in creating and evaluating new treatments in animal models or in diagnosing ocular diseases before irreversible vision loss. The following key technologies enable non-invasive detection of melanin in the eye at the clinical and pre-clinical level and are reviewed in this manuscript: fundus photography, fundus reflectometry, near-infrared autofluorescence imaging (NIR-AF), photoacoustic imaging (PA), optical coherence tomography (OCT), polarization-sensitive OCT (PS-OCT) and photothermal OCT (PT-OCT). A brief summary of existing ex vivo methods to quantify melanin in samples is also presented to provide context.

Quantifying melanin ex vivo
Multiple methods have been developed to quantify melanin in cells or in ex vivo tissue samples. In early studies of the RPE, changes in pigmentation were observed qualitatively [17,18] or quantitatively [19] by counting melanosomes on high-resolution micrographs. To accelerate the process, melanin is now quantified using chemical degradation of the sample followed by high-performance liquid chromatography (HPLC) [20].
Electron spin resonance (ESR) spectroscopy has also been used to quantify melanin and characterize the different types of melanin pigments [5,21,22]. ESR spectroscopy measures the magnetic field strengths at which electrons in a sample can change their spin magnetic moment (from parallel to anti-parallel) by absorbing the energy from a microwave source of fixed frequency. The resulting spectrum of energy absorption as a function of magnetic field strength is specific to a given chemical compound and can be used to differentiate pigments. Melanin can also be quantified in terms of light absorption. Absorbance of solubilized melanin at a specific wavelength, measured with a spectrophotometer, is another technique used to quantify melanin in ex vivo samples [5,[23][24][25]. Light transmission measurements can also provide a measure of melanin concentration in tissue slices [26]. Ex vivo methods provide a highly specific and quantitative measurement of melanin and are used to study melanin production, distribution and degradation as a function of age and disease. However, these methods cannot be used in live animal models to monitor diseases over time or to test new treatments, and they cannot be translated to the clinic for use in patients. As such, in vivo techniques that can detect melanin have been a focus of many researchers.

Fundus photography and fundus reflectometry
Fundus photography is a commonly used clinical imaging modality that produces a two-dimensional, en face color image of the retina where the optic nerve head, macula and major blood vessels can be seen. Most modern table-top fundus systems have a field of view of ~45° and do not require pupil dilation [27]. Fundus images can be recorded on 35 mm film or with a digital camera [7]. The basic components of a fundus system are a white light source to illuminate the retina, a central obscuration in the illumination path (annular aperture), an objective lens to form an image using the light reflected from the retina, a zoom lens to correct for the patient's refractive error, and a camera to detect the image [28]. This results in an annular illumination pattern at the pupil, a circular illumination pattern at the retina and a circular image detected at the camera. The annular illumination pattern at the pupil reduces the back reflection from the cornea and allows for better detection of the light reflected from the retina. The illumination and collection paths can be combined with a beam splitter, or with a mirror with a central hole that deflects the illumination path while transmitting the collected light [28]. Researchers and clinicians can visually assess changes in pigmentation based on the color of the retina as seen on fundus images. For example, multiple manual grading systems are used to evaluate fundus images in patients with AMD, and the presence of hypopigmentation or hyperpigmentation is evaluated as part of the overall assessment [29]. Additionally, adaptive optics has been used to correct light aberrations in the eye, effectively improving the lateral resolution of fundus photography and providing images of pigment migration over time in "dry" AMD [30]. However, this method of evaluating fundus images cannot differentiate between melanin contained in the RPE and in the choroid, nor is it quantitative. To collect quantitative information from the fundus image, fundus reflectometry was developed.
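As an aside, the spectrophotometric quantification of solubilized melanin described above rests on the Beer–Lambert law; a minimal sketch, with a hypothetical extinction coefficient, is shown below.

```python
# Minimal sketch of Beer-Lambert quantification of solubilized melanin:
# absorbance A = epsilon * path_length * concentration, so concentration
# can be recovered from a measured absorbance. The extinction coefficient
# below is a hypothetical placeholder, not a literature value.
def concentration_from_absorbance(absorbance, epsilon, path_cm=1.0):
    """Concentration (mg/ml) from absorbance via Beer-Lambert."""
    return absorbance / (epsilon * path_cm)

epsilon_500nm = 2.5   # hypothetical extinction coefficient, ml mg^-1 cm^-1
a_measured = 0.42     # measured absorbance at 500 nm (hypothetical)
print(f"{concentration_from_absorbance(a_measured, epsilon_500nm):.3f} mg/ml")
```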
Fundus reflectometry can be performed with a retinal densitometer, an instrument composed of a light source, filters to change the wavelength of the light entering the eye, and a detector such as a photomultiplier capable of quantifying the light exiting the eye [31]. When performing fundus reflectometry with this technique, a high-intensity white light is first sent to the eye to bleach the retina. A lower-intensity light of a specific wavelength (e.g. 500 nm) is then sent to measure the presence of a pigment such as melanin [31,32]. The light reflected from the retina is then quantified as it reaches the detector over time. In other instruments, a white light source is used to illuminate the retina and a spectrometer is used at the detector to measure the reflected light at multiple wavelengths [33]. Different theoretical models describing how incoming light would be reflected or absorbed by the different tissue layers of the retina can then be fitted to the recorded light, and properties such as the optical density of melanin can be calculated [34]. Fundus reflectometry studies have found different optical density values for choroidal melanin in healthy eyes based on different models [35,36]. Recently, Hammer et al. used the adding-doubling approach, a technique used to simulate light distribution in a multi-layered tissue based on the reflection and transmission properties of a thin homogeneous tissue layer, to obtain relative concentrations of melanin in the RPE and choroid [33]. Bone et al. used a model based on the absorption of four components (macular pigment, cone and rod photopigments, and melanin) at four different wavelengths to obtain 2D images of the fundus (see Fig. 1) showing the relative optical density of melanin [37]. Kanis et al. compared the optical density of melanin in the right and left eyes of patients and found a strong interocular correlation in healthy eyes [38]. This could open the door to diagnostic tests that evaluate large differences in melanin optical density between the eyes of a patient [38]. In another study by the same group, fundus reflectometry was used to image melanin in patients with age-related maculopathy (ARM), but it did not detect differences in melanin optical density between healthy patients and patients with ARM, or between patients with different stages of ARM [32]. Fundus reflectometry thus provides quantitative information about melanin distribution. This is an improvement over fundus photography, where pigmentation changes can only be interpreted qualitatively. However, fundus reflectometry requires complex models to determine how the light entering the eye was scattered and absorbed by the different tissue layers of the eye. This can lead to widely varying results, including non-physical values of melanin optical density when layer thicknesses are not estimated correctly [33]. Additionally, while some models can produce 2D images of melanin distribution [37], most fundus reflectometry techniques do not produce an image, which renders data interpretation more difficult and does not account for heterogeneous distributions of melanin. As a result, fundus reflectometry has not yet become a standard imaging technique in the clinic and has not been used extensively to study different diseases of the eye involving melanin. In conclusion, fundus reflectometry can obtain quantitative measurements of melanin optical density, but the complex models required for quantification make this technology difficult to implement in practice.
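Although the full reflectometry models are layered and complex, the core quantity they estimate, optical density, follows directly from measured reflectance. A minimal double-pass sketch (illustrative values only, ignoring scattering and multiple layers):

```python
# Minimal sketch of the quantity estimated by fundus reflectometry: the
# optical density (OD) of an absorbing layer from measured reflectance.
# In a simple double-pass picture, light crosses the layer twice, so
# OD = -0.5 * log10(R / R_ref). Real models (e.g. adding-doubling) also
# account for scattering and multiple layers; all values are hypothetical.
import math

def double_pass_od(reflectance, reference_reflectance):
    return -0.5 * math.log10(reflectance / reference_reflectance)

r_measured = 0.020   # reflectance over a pigmented region (hypothetical)
r_reference = 0.080  # reflectance with no absorbing pigment (hypothetical)
print(f"melanin OD ~ {double_pass_od(r_measured, r_reference):.2f}")
```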
Near-infrared autofluorescence imaging (NIR-AF)
An alternative to fundus photography is scanning laser ophthalmoscopy (SLO) [39], which has enabled near-infrared autofluorescence imaging of the eye (NIR-AF). Like fundus photography, SLO produces two-dimensional en face images of the retina. However, a pinhole can be used to selectively collect light from a specific layer of the retina (~300 μm axial resolution [40]), which is not possible using a fundus camera [41]. Instead of a white light source, SLO uses a laser source focused to a point and raster-scanned across the retina to build an image. This enables a small portion of the eye's pupil to be used for illumination, while the rest of the pupil is used for light collection [41]. In comparison, fundus photography requires most of the pupil to be used for illumination (annular illumination pattern), with only the center of the pupil used for collection. As a result, SLO can be performed with illumination powers much lower than those required for fundus photography [39], and SLO is sensitive to lower levels of emitted light than fundus photography, enabling autofluorescence imaging of the eye [42]. Two endogenous fluorophores are most commonly imaged with SLO: lipofuscin and melanin [43,44]. In most commercial and clinical SLO systems, the choice of excitation and emission wavelengths for fluorescence imaging is often dictated by the wavelengths used to image two exogenous fluorophores commonly used in the clinic to perform angiography: fluorescein and indocyanine green. Conveniently, these excitation and emission wavelengths are also appropriate for lipofuscin (excitation: 488 nm, emission: >500 nm, similar to fluorescein) and melanin imaging (excitation: 787 nm, emission: >800 nm, similar to indocyanine green) [40,45]. SLO thus enables qualitative imaging of melanin and its distribution throughout the RPE. The near-infrared autofluorescence signal of melanin in the retina was first reported, to our knowledge, by Piccolino et al. [46] in 1996, in a study that recorded near-infrared fluorescence before indocyanine green injection using fundus photography. At the time it was unclear what the source of the fluorescence signal was, and the authors hypothesized that it could be a combination of melanin, lipofuscin, and porphyrins. Later, Huang et al. confirmed that melanin in the skin and synthetic melanin produce fluorescence emission following near-infrared excitation [47]. Weinberger et al. confirmed the results from Piccolino et al. in the eye using an SLO system and further supported the hypothesis that the NIR fluorescence signal is caused by autofluorescence of melanin and not simply by light reflected from the fundus (i.e. pseudofluorescence) [48]. Further evidence was provided by Keilhauer and Delori, who imaged normal subjects and patients with AMD or other retinal diseases with NIR-AF and determined that melanin in the RPE and choroid was a likely candidate for the source of the near-infrared autofluorescence signal [45]. Finally, Gibbs et al. demonstrated that the autofluorescence signal was specific to the melanosomes of the RPE and choroid by isolating them ex vivo [49].
NIR-AF was performed to detect melanin in patients and to study diseases such as AMD [45,48,[50][51][52] (see Fig. 2), idiopathic choroidal neovascularization [53], chloroquine retinopathy [54], various inherited retinal diseases [55], ABCA4-associated retinal degenerations [56][57][58], retinitis pigmentosa [9,59,60], Usher syndromes [49,61], Best vitelliform macular dystrophy [62], diabetic macular edema [63], central serous chorioretinopathy [64,65], and torpedo maculopathy [66]. NIR-AF has multiple advantages as a melanin imaging technique: it offers a large imaging field of view, does not require exogenous contrast agents, is safe and comfortable for the patient, can be performed using commercially available equipment, and produces images that are easy to interpret by researchers and clinicians. However, NIR-AF does not have the axial resolution to produce three-dimensional images of the melanin distribution, and it is likely that melanin from the RPE and choroid both contribute to the NIR-AF signal. Additionally, the interpretation of NIR-AF is mostly qualitative since the fluorescence intensity is highly dependent on imaging conditions. The NIR-AF signal can thus be quantified within one eye [45,63], but it has been difficult to directly correlate the NIR-AF signal to an absolute measure of melanin concentration that would be valid across multiple eyes. However, quantitative autofluorescence has been performed in the eye to quantify lipofuscin in short-wavelength autofluorescence (SW-AF) images with the use of an internal fluorescent reference [67][68][69], which is encouraging for future quantitative autofluorescence measurements of melanin in the eye. In conclusion, NIR-AF is easily performed using commercially available instruments and has been used to study multiple human diseases. However, RPE melanin cannot be separated from choroidal melanin, and further research is needed to obtain quantitative NIR-AF results. Fluorescence lifetime imaging ophthalmoscopy (FLIO) [70] is a technique similar to NIR-AF that measures not only the autofluorescence signal from fluorophores in the retina, but also the time it takes for fluorescence to be emitted following excitation (i.e. the fluorescence lifetime). The fluorescence lifetime of a fluorophore such as melanin is highly dependent on the microenvironment but not on fluorophore concentration, making FLIO particularly complementary to NIR-AF. The fluorescence lifetime of melanin has been recorded in hair samples [71]. However, the fluorescence lifetime signal obtained from the retina includes contributions not only from melanin but also from multiple other fluorophores such as lipofuscin and macular pigments [70,72,73], and further studies are needed to isolate the lifetime signal of retinal melanin from other fluorophores in vivo.

Photoacoustic imaging (PA)
Photoacoustic imaging (PA) is an ultrasound-based modality that can detect optical absorbers such as blood and melanin in the eye [74]. PA uses a pulsed laser and an ultrasound transducer to detect absorbers in tissue. The laser light is absorbed by the contrast agent (e.g. melanin), which creates heat, rapid tissue expansion and an ultrasonic wave via the photoacoustic effect [75]. This wave is detected by an ultrasound transducer coupled to the eye. Two types of information about the sample can then be obtained from the ultrasonic wave. First, a one-dimensional signal of absorption as a function of depth into the eye can be computed.
The pulsed laser is then scanned across the sample to create two- or three-dimensional images of the absorbers within the sample. Second, the amplitude of the signal can be correlated to the absorption coefficient of the sample, and can thus serve as a measurement of the concentration of absorber (e.g. melanin) within the sample. As a first demonstration, Silverman et al. acquired PA images of melanin in the iris in excised porcine eyes [76]. In the first in vivo demonstration, Jiao et al. integrated PA into an OCT system to collect photoacoustic images of the blood and melanin in the healthy rat retina with a 23 μm axial resolution [77]. This system used a needle transducer in contact with the eyelid to detect the ultrasound signal. Multiple follow-up studies have been produced by the same group. Zhang et al. added short-wavelength autofluorescence imaging to the PA system to detect lipofuscin in addition to melanin, first in retinal tissue [78], then in vivo in pigmented and albino rats [79]. Song et al. built upon this work and developed a multimodal system that includes PA, SLO, OCT and fluorescein angiography to image the eye [80]. The resulting system was able to simultaneously image tissue structure, retinal and choroidal blood vessels, and melanin from the RPE and choroid in vivo in the retina of albino and pigmented rats [80]. This system was also adapted to image melanin in the mouse eye by Song et al. [81]. Previous PA systems by this group had used visible light (532 nm) to excite and detect ocular melanin; however, near-infrared light is less damaging to the eye than visible light. Liu et al. thus demonstrated in vivo melanin imaging in rats using a near-infrared laser (1064 nm) for PA excitation [82]. Liu et al. also combined a PA system with a fundus camera, which could visualize the position of the PA laser on the retina and accelerate the alignment procedure when imaging melanin in rats [83]. Liu et al. were the first to perform in vivo optical coherence photoacoustic microscopy (PA and OCT combined using the same 800 nm wideband light source) in the rat eye, which led to perfectly co-registered images of the tissue structure and melanin distribution (see Fig. 3) [84]. Images acquired up to this point had been qualitative and suffered from low axial resolution. PA has the potential to provide a quantitative reading of melanin concentration in the eye, similar to previous work imaging cutaneous melanin [85]. Shu et al. performed a Monte Carlo simulation to understand light absorption in the retina and evaluate the potential of PA imaging for quantitative imaging of melanin in the eye [86]. This model used blood absorption as a reference point for calibration. However, to specifically quantify RPE melanin and separate it from choroidal melanin, a higher axial resolution was necessary. Shu et al. used a micro-ring resonator detector to increase the axial resolution of their PA system (<10 μm) and obtained images where the RPE and choroid can be distinguished in ex vivo porcine and human samples [87]. Quantitative melanin measurements of the choroid and RPE were then performed in ex vivo samples using a calibration curve obtained in phantoms. PA imaging can provide volumetric images of ocular melanin, which is not possible using fundus reflectometry or NIR-AF fundus imaging. The increased axial resolution also allows for more localized signal collection, and possibly for independent measurements of RPE and choroidal melanin.
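The correlation between PA amplitude and absorption noted above follows from the photoacoustic generation relation p0 = Γ·μa·F; a minimal sketch with hypothetical values is shown below.

```python
# Minimal sketch of the photoacoustic generation relation behind the claim
# that PA amplitude tracks absorber concentration: the initial pressure is
# p0 = Gamma * mu_a * F (Grueneisen parameter x absorption coefficient x
# local fluence), so at fixed Gamma and F the signal is linear in mu_a.
# All numbers below are hypothetical placeholders.
def initial_pressure(gamma, mu_a_per_cm, fluence_mj_per_cm2):
    """Initial PA pressure, in units of absorbed energy density."""
    return gamma * mu_a_per_cm * fluence_mj_per_cm2

p0_rpe = initial_pressure(gamma=0.2, mu_a_per_cm=100.0, fluence_mj_per_cm2=0.1)
p0_bkg = initial_pressure(gamma=0.2, mu_a_per_cm=5.0, fluence_mj_per_cm2=0.1)
print(f"RPE-to-background amplitude ratio: {p0_rpe / p0_bkg:.0f}x")
```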
PA imaging also relies on simpler light absorption and propagation models than fundus reflectometry, which may lead to more accurate measurements of melanin concentration. However, PA imaging has been demonstrated in only a few animal eye models and has yet to be demonstrated in the human eye. Additionally, no eye disease models have been explored using PA, so it is unclear how the information provided by PA imaging will be used by eye researchers and clinicians in the future. In conclusion, PA imaging provides a quantitative measurement of melanin absorption and has the potential to separate signal from the RPE and the choroid. However, the technique has yet to be performed in the human eye.

Optical coherence tomography (OCT)
OCT provides three-dimensional, high-resolution images of the different tissue structures of the eye over a large field of view. First commercialized in 1996, OCT is now a standard imaging technique both for pre-clinical and clinical eye imaging [88][89][90]. OCT uses low-coherence interferometry to measure the echo time delay and intensity of backscattered light as it penetrates tissue. Light is sent into a Michelson interferometer composed of a beam splitter, a sample arm (ending at the sample, in this case the retina) and a reference arm (ending with a reflective surface). A Fourier transform of the resulting interferogram is used to obtain the OCT signal as a function of depth. The processed OCT signal is thus a complex signal where both the magnitude and the phase vary as a function of depth. A single OCT scan (A-scan) is a one-dimensional measure of sample reflectivity as a function of depth. Two- and three-dimensional images can be acquired by raster-scanning the OCT beam over the sample. Typical OCT lateral resolution falls between 1.5 μm and 9 μm, depending on the objective used and the imaging source wavelength. The axial resolution is determined by the imaging source wavelength and bandwidth, where, up to a point, shorter wavelengths and larger bandwidths lead to better resolution. Ophthalmic OCT systems are often centered around 850–860 nm with a 50 to 100 nm bandwidth, resulting in axial resolutions between 3 μm and 6 μm [91]. With this contrast mechanism and high axial resolution, different tissue layers such as the nerve fiber layer, photoreceptors, and RPE can be distinguished on OCT images [92]. Changes in melanin content are visualized as a change in RPE reflectivity on OCT images. Wilk et al. analyzed these changes in OCT signal by comparing images obtained in wild-type and albino zebrafish, and by imaging patients with albinism [93]. Zhang et al. also observed a change in intensity of the OCT signal in the RPE with dark adaptation in frogs [94]. However, the main source of contrast on OCT images is tissue backscattering, which provides limited functional information and low specificity when imaging melanin. Techniques such as polarization-sensitive and photothermal OCT have been developed to add functional contrast to OCT and can be used to specifically detect melanin. Polarization-sensitive OCT (PS-OCT) provides information about the birefringence of a sample and has been used to image the cornea and retina [95,96]. To perform PS-OCT, the incoming OCT light must be circularly polarized. After passing through the sample, the outgoing light maintains an arbitrary elliptical polarization state determined by the composition of the sample [97].
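The axial-resolution figures quoted above follow from the coherence length of the source, Δz = (2 ln 2/π)·λ0²/Δλ; a quick numerical check reproduces the 3–6 μm range for an 850 nm source.

```python
# Quick check of the OCT axial-resolution relation quoted above:
# delta_z = (2*ln(2)/pi) * lambda0^2 / delta_lambda (in air).
import math

def oct_axial_resolution_um(center_wavelength_nm, bandwidth_nm):
    lam0_um = center_wavelength_nm / 1000.0
    dlam_um = bandwidth_nm / 1000.0
    return (2.0 * math.log(2) / math.pi) * lam0_um ** 2 / dlam_um

for bw in (50, 100):
    print(f"850 nm source, {bw} nm bandwidth: "
          f"{oct_axial_resolution_um(850, bw):.1f} um in air")
# -> ~6.4 um and ~3.2 um, consistent with the 3-6 um range cited above
# (resolution in tissue is finer by the refractive index, ~1.35-1.4).
```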
From there, individual detectors are used to measure the vertical and horizontal components of the polarized light. Different algorithms are used to extract the polarizing properties of the sample, which can then be mapped onto a depth-resolved OCT intensity image. Pircher et al. first noted that light reflected from the RPE/Bruch's membrane complex has a highly variable polarization when measured with PS-OCT in vivo in a volunteer [98]. Follow-up studies by different groups later confirmed that the polarization-scrambling layer was most likely the RPE. This conclusion was reached by comparing PS-OCT images obtained in healthy patients and images obtained in patients with RPE detachment, RPE tear, RPE atrophy, drusen or choroidal neovascular membrane [99][100][101]. Baumann et al. used melanin phantoms to determine the source of the PS-OCT signal within the RPE and observed that the degree of polarization uniformity (DOPU) is correlated with melanin concentration [102], a result later confirmed in rats [103]. However, this relationship was strongly dependent on the scattering properties of the sample, i.e. the size and shape of the melanin granules [102]. PS-OCT was also performed in pigmented rats and mice [104], albino rats [103][104][105], and patients with ocular albinism [102,106], which confirmed the specificity of the PS-OCT signal to melanin. PS-OCT has been used to segment the RPE from 2D or 3D OCT data sets in healthy eyes [107] and in patients affected by AMD [108][109][110][111], RPE detachment [111] and pseudovitelliform dystrophies [108], and to compute retinal [109,110] (see Fig. 4) or choroidal thickness [112]. Miura et al. showed that PS-OCT is complementary to other melanin imaging techniques by combining PS-OCT with polarization-sensitive SLO and NIR-AF to study RPE cell migration in patients with AMD [113]. PS-OCT has also been performed in combination with other functional OCT modalities, such as OCT angiography, to acquire information not only about the RPE but also about the structure and vasculature of eyes affected by AMD [111,114,115]. New algorithms [116] and instruments [117] have also been developed for PS-OCT to improve the detection of melanin and to improve the axial resolution down to <1 μm. Photothermal OCT (PT-OCT) is another type of functional OCT technique [118,119]. PT-OCT detects optical absorbers in tissue, with a resolution and imaging depth similar to OCT. PT-OCT takes advantage of the photothermal effect, where photons absorbed by the contrast agent (e.g. melanin) are re-emitted as heat. To perform PT-OCT, an amplitude-modulated laser is combined with a phase-sensitive OCT system, with the wavelength of this additional laser corresponding to the absorption peak of the contrast agent. The increase in temperature following photon absorption causes a thermoelastic expansion surrounding the absorber and a change in the refractive index of the tissue. Both phenomena cause a change in optical path length, which is detected as a change in the OCT phase signal. The PT-OCT signal intensity is proportional to the absorption coefficient of the tissue, which allows for quantitative measurements of the contrast agent concentration [119]. PT-OCT was first used to detect melanin by Makita et al., who imaged cutaneous melanin [120]. PT-OCT was first performed in the eye by Lapierre-Landry et al., where signal from melanin was detected in the RPE of pigmented mice but was absent in albino mice [121].
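To make the PT-OCT contrast mechanism concrete, the toy simulation below imposes a small phase oscillation at the modulation frequency and recovers its amplitude from the spectrum; this is a simplified illustration, not the processing pipeline of any cited study, and all parameters are hypothetical.

```python
# Toy sketch of PT-OCT signal extraction: an amplitude-modulated heating
# laser imposes a small oscillation on the OCT phase at the modulation
# frequency f_mod; the PT-OCT signal is the spectral magnitude at f_mod,
# which scales with the absorption coefficient. Simplified illustration
# with hypothetical parameters.
import numpy as np

fs = 51_200.0                 # phase sampling (A-line) rate, Hz
f_mod = 500.0                 # photothermal modulation frequency, Hz
n = 2048                      # samples; f_mod falls exactly on an FFT bin
t = np.arange(n) / fs

true_amp_rad = 0.05           # photothermal phase amplitude (prop. to mu_a)
rng = np.random.default_rng(0)
phase = true_amp_rad * np.sin(2 * np.pi * f_mod * t) \
        + 0.02 * rng.standard_normal(n)

spectrum = np.abs(np.fft.rfft(phase)) * 2.0 / n  # scaled to sine amplitude
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
k = int(np.argmin(np.abs(freqs - f_mod)))
print(f"recovered PT phase amplitude ~ {spectrum[k]:.3f} rad (true 0.050)")
```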
A follow-up PT-OCT study was performed in tyrosinase-mosaic zebrafish, a genetic line in which the zebrafish have pigmented and non-pigmented regions within the RPE of each eye. This study confirmed that the PT-OCT signal is specific to melanin in the zebrafish eye [122]. PT-OCT also detected melanosome migration within the RPE by comparing dark-adapted and light-adapted wild-type zebrafish (see Fig. 5) [122]. Both PS-OCT and PT-OCT are considered functional OCT techniques. They produce high-resolution images like OCT, and both can acquire volumetric images of the retina that are perfectly co-registered to the OCT intensity images. Both PS-OCT and PT-OCT instruments can be combined with other modalities such as OCT angiography to perform multimodal imaging. As PS-OCT and PT-OCT use different contrast mechanisms to detect melanin (polarization scrambling and absorption, respectively), they can provide complementary information about melanin distribution within the retina. PS-OCT has the advantage of using low illumination power, and it has been performed in both animal models and patients with a range of eye conditions. It has the potential to be a quantitative imaging modality for melanin, although it is unclear how the signal depends on the shape and size of the melanin granules and how small changes in pigmentation would be detected. PT-OCT has a more straightforward relationship with the absorption coefficient of a sample, with a linear increase in PT-OCT signal as a function of absorption. The PT-OCT signal is thus highly sensitive to small changes in pigmentation within the RPE. However, PT-OCT has yet to be performed in the human eye, and laser powers within safe levels (below ANSI standards) have only been demonstrated ex vivo [123]. In conclusion, both PS-OCT and PT-OCT have a high axial resolution and can separate the RPE from the choroid, but while PS-OCT has been used to study multiple diseases in both animal models and patients, PT-OCT has only recently been demonstrated in the eye in animal models.

Conclusion
Melanin is present in the iris, choroid, and RPE, and may act as a protector of the photoreceptors, promoting the overall health of the retina. Changes in pigmentation are observed in diseases such as albinism, retinitis pigmentosa and AMD, and studying these pigmentation changes could offer insights into disease mechanisms, disease progression and treatment options. Here we reviewed non-invasive techniques to detect and quantify retinal melanin in the living eye. These methods have advantages over traditionally used ex vivo methods, since they can be used for longitudinal studies in animal models, where cost, time, labor and inter-animal variability are reduced by imaging the same animal over many time points. Many non-invasive imaging methods can also be used in patients for diagnosis and treatment, which is not possible with ex vivo methods. In this review, we covered multiple techniques that have been used to detect melanin using a variety of contrast mechanisms. Changes in pigmentation can be seen using fundus photography, but observations are only qualitative, and the signal produced by melanin contained in the RPE cannot be separated from the signal produced in the choroid. Fundus reflectometry can quantify melanin in the RPE, but the complex models required for quantification make this technology difficult to implement in practice.
NIR-AF can be accomplished using commercially available SLO instruments and produces images that are simple for a clinician to interpret. However, it is difficult to quantify melanin across multiple eyes using NIR-AF, and RPE melanin cannot be separated from choroidal melanin with the existing axial sectioning capabilities of commercial SLOs. PA imaging uses an ultrasound transducer to produce three-dimensional images of the eye and a pulsed laser to detect optical absorbers such as melanin. The PA signal intensity is directly correlated with melanin absorption, and recent advances have made it possible to separate the signal from the RPE and the choroid. However, the axial resolution is still limited, and the technique has not been performed in the human eye. Finally, OCT is a three-dimensional imaging technique that is commonly used in the clinic. Since melanin does not produce a specific change in the OCT signal, functional OCT techniques such as PS-OCT and PT-OCT have been developed to detect melanin using its polarization-scrambling properties and its absorption properties, respectively. While PS-OCT has been used in multiple animal models and in patients, PT-OCT is an emerging technology that has only recently been demonstrated in the eye. These methods are complementary to each other and together provide researchers and clinicians with a range of fields of view, in 2D or 3D, obtained at different resolutions, using properties such as absorption, fluorescence or light polarization as contrast mechanisms. We expect that in the future, in vivo experiments will lead to a better understanding of the role of melanin in the retina, which could lead to new diagnostic methods and new treatment options.
Constructing the Discourse of Marginalised Ethnic Community in Rai's Fire Cares not its Birthday Anniversary

The decade of the 2010s is crucial in literary creation, particularly poetry writing, in Nepali literature, because the trend of writing shifted to the representation of the marginalised. Mainly, indigenous poets are concerned with the issues of marginalised indigenous people. Bhupal Rai's collection of poems Fire Cares not Its Birthday Anniversary falls within the same trend, deconstructing the cultural discourses of the state power and reconstructing the discourse of the indigenous people. In this context, this study aims to find out which issues of cultural discourse in the poems the poet resists, and how he reconstructs a new body of knowledge, i.e. a counter discourse of the marginalised. In the same way, it attempts to unfold how he resists the existing body of cultural discourses and reconstructs the discourses from the perspective of marginalised people. Similarly, this study aims to analyse the logical reasons for redefining and reconstructing the ruling groups' existing body of knowledge. The interpretive method has been used to analyse the texts. For this, Foucault's concept of power/discourse has been applied as a theoretical tool. This research article gives insights into the interwoven power relations in social practices and the construction of knowledge.

This kind of writing particularly appeared in poetry in the 2010s. The new trend of writing shifted to the representation of marginalised groups, with their political and cultural consciousness and revolt against the state power and its monolithic and mono-cultural discourses. Mainly, the poets from indigenous nationalities appeared with their racial and cultural consciousness, challenging and contesting the dominant cultural discourse of the elite ruling class in Nepali literature. Abhi Subedi evaluates the socio-political and historical context of this trend: 'After the political changes in 2006, a new trend of writing appeared in literature. Feelings with a strange mixture of anger and celebration of the marginalised class, women and Dalits appeared in literature, especially in poetry' (27). Though the indigenous poets have followed the same pattern and medium as mainstream Nepali poetry, the themes and poetic aesthetics of their writings are quite different. Abhi Subedi further opines that indigenous consciousness, political resistance and aesthetic consciousness can be found in indigenous poetry (26). The poetic creations of indigenous poets have concentrated on the issues of politically and culturally dominated indigenous communities. In this changing context of writing poetry, Nepali poets from indigenous communities such as Shrawan Mukarung, Rajan Mukarung, Upendra Subba, Bhupal Rai, Pragati Rai, Bimala Tumkhewa, Swapnil Smriti, Chandrabir Tumbapo and Heman Yatri appeared with the voices of the common people at the margin. Among them, Bhupal Rai is one of the poets with abundant creativity. He has given central space to the marginalised language, culture and history of the indigenous community, aiming to construct a discourse from the margin. At the same time, he resists the cultural discourse of the ruling class. His collection of poems entitled Fire Cares not Its Birthday Anniversary belongs to the same poetic creation, resisting the cultural discourses of the state and reconstructing the cultural and historical discourses of the common people at the margin.
So, this study advances the same argument, dealing with two poems, including 'Ksha', from the same collection.

Problem, Objectives and Methodology

The selected poems under this study are the reflection of conscious, marginalised indigenous people who can make their own world view and construct a distinctive body of knowledge. In the poems, the speaker, the poet himself, is conscious and defiant enough to question the existing cultural discourses of the state. This defiant resistance reconstructs the discourse of the marginalised. The counter discourse redefines and produces new meanings and knowledge. The poems raise questions about the domination of the state over the indigenous groups in terms of culture, language and nationality. The indigenous community has been suffering from poverty and illiteracy. Though there are many general problems in the poems, this study primarily concentrates on the following specific research questions: On what issues does the poet resist and reconstruct a new body of knowledge? How does the poet resist the cultural discourses of the state and reconstruct the discourse of the marginalised? Why does the poet redefine and reconstruct the existing body of knowledge constructed by the ruling power? The specific objectives of this study are to explore the issues prevailing in the poems on which the poet resists and reconstructs a new body of knowledge from the perspective of the marginalised. In the same way, this article unfolds the way of resisting the existing body of cultural discourses constructed by the state. Similarly, this study aims at interpreting the rationale of redefining and reconstructing the existing body of knowledge defined by the state power. For this, the interpretive method has been used to analyse the selected poems, achieve the objectives and derive the conclusion. The selected poems have been interpreted from the perspective of cultural studies, which examines culture, power, nation and so on (Giri, in Globalisation 252). Particularly, Michel Foucault's concept of power/discourse has been applied as a theoretical tool to unfold the power relations in the selected poems. Foucault's power/discourse helps to examine how the poet resists the existing body of knowledge and constructs a new discourse of marginalised people. Power and discourse have a reciprocal relation: power produces discourse, and no power is produced without discourse. To support the ideas of Foucault, the relevant arguments of other cultural critics related to the concept of power/discourse have been used to analyse the selected poems. The concept of power/discourse examines cultural products by unfolding the interwoven power relations within which they are constructed (Untying the Text 53). Discourse is a power that enables persons to resist domination and suppression. It is the power through whose medium changeable and temporary social truths are constructed (Uprety 40). Foucault further argues that discourse is not the manifestation of a thinking, knowing, speaking subject, 'but, on the contrary, a totality, in which the dispersion of the subject and his discontinuity with himself may be determined. It is a space of exteriority in which a network of distinct sites is deployed' (The Archaeology of Knowledge 55). Foucault's argument claims that knowledge is subjective and constructed in a specific historical context through power; the body of knowledge changes as power relations change in a specific historical moment. Similarly, discourse provides a 'language for talking about - a way of representing the knowledge about - a particular topic at a particular historical moment'; discourse produces and defines a body of knowledge.
Changing Paradigm of Marginalised Consciousness: A Review of Literature

In the crucial moment of socio-political transformation in Nepali society, Bhupal Rai's collection of poems Fire Cares not its Birthday Anniversary was published in 2015, with the image of fire, for the reconstruction of a marginalised discourse that resists and challenges the cultural discourse of the ruling class. With its current issues of marginalised people, utmost consciousness and discursive power, the anthology succeeded in drawing the attention of numerous intellectuals and critics upon its publication. Many critics have presented their critical evaluations of this collection, and some reviews are presented herewith. The poems collected in the anthology Fire Cares not Its Birthday Anniversary comprise defiance from the marginalised perspective. Connecting this factor of power and defiance, Abhi Subedi evaluates the anthology: 'Powerful and defiant poems have been included in this collection' (48). Here, Subedi's evaluation suggests that the poems collected in the anthology are very powerful. Subedi indicates how the poet has raised an aggressive voice for justice for those excluded indigenous subjects who have been suffering under the cultural hegemony and political domination of the state. The revolt of the poet is basically against the injustice done to the common people who have always been kept at the margin. In this line, Abhi Subedi further posits that there are people who advocate for justice and there is politics in it (48). Subedi has pointed out the anger and revolt of the poet expressed in the poems, and has evaluated and analysed the attitudes of the poet towards the existing cultural discourses of the state power. In the same way, Tarakant Pandey has analysed the aspects of political consciousness embedded in the poems. Pandey posits that an utmost cultural consciousness is expressed in the poems, strongly questioning the imposition of cultural and political domination by the state through language, culture and historical discourses. Furthermore, the poems in the anthology represent the effects of globalisation and its capitalist values in Nepali society. Connecting this context, Amar Giri rightly points out that the poet has presented today's capitalism and its characteristics (115). Giri has explored the effects of globalised capitalism and how it treats everything as a commodity; even humanity and emotions are equated with monetary value. Giri has connected this to the context of Nepali society, in which poverty, discrimination, exploitation and conflicts have become commodities to be exchanged. Giri's argument indicates that all these social problems have become business matters in the capitalist world. The aforementioned reviews of intellectuals and critics have simply unfolded the thematic aspects of the collection of poems as a whole. They have raised the issues of the consciousness of the marginalised, revolt against the state power and the effects of globalised capitalism in Nepali society. But they have not examined in depth which issues the poet specifically resists in the cultural discourses and historical documentation of the state power, and in what way he constructs a counter discourse of the marginalised. Nor have they interpreted how and why the poet resists the cultural discourses of the state and constructs the counter discourse of the marginalised.
So, this article intends to fill this gap.

Constructing the Counter Discourse of Marginalised Ethnic Groups

The collection of poems Fire Cares not its Birthday Anniversary consists of thirty short poems altogether, which are equally powerful in expressing not only the voices of marginalised indigenous people but also the realities of the present globalised capitalistic world and its effects. These poems express anger and revolt against the dominant cultural values of the state in favour of indigenous people, and there is a strong claim for cultural identity and existence that resists the cultural discourse of the state. The poems are the discourses of marginalised ethnic groups in which the poet deconstructs the cultural discourse of the power state. For this he has established a new perspective, or world view, that redefines the existing cultural discourse of the ruling class. This new world view, or form of knowledge, reconstructs and redefines the knowledge constructed by the state, challenging the discriminatory, monolithic discourse of the ruling class (During 15). This counter discourse from the margin changes and challenges the existing body of knowledge. The poet has expressed his anger and revolt against the cultural discourses of the state. At the same time, he raises questions about the existing body of knowledge with the aim of reconstructing the counter discourse of the marginalised. The poet has also expressed his utmost desire for social justice, dignity and self-respect for the common people at the margin, who have been under the cultural and political domination of the state power for centuries. Primarily, the poet reconstructs the cultural and historical discourse of the marginalised.

Resisting and Reconstructing the Cultural Discourse in 'Ksha'

The poem 'Ksha' depicts the state's cultural domination of the indigenous people. The speaker of the poem, perhaps the poet himself, presents a bitter and difficult moment from his childhood days when he was learning the Nepali language. It was difficult for him since the whole system of knowledge, from the Nepali letters or alphabet to the political structures, was based on the cultural values of the ruling groups. The construction of the Nepali letters is not independent, but a cultural production of the rulers. Language is not just a way of speaking or writing, but a whole 'mental set' and ideology. The speaker finds it hard to go ahead reciting the Nepali alphabet because the letter is not simply an alphabet; it is ideologically contingent on the cultural values of the rulers, and it reminds him of the fierce face of the rulers. So, the poet finds himself in difficulty, since the alphabet is a cultural discourse constructed by the mindset and cultural values of the state power. The letter 'Ksha' is not merely an alphabet; rather, it resembles the picture of the ruler. The speaker recalls the fierce ruler with his sword:

The matted mustache would appear
In front of my eyes dramatically
And did the face of Maila Mukhiya
From the another village
That was resembled any Royal descents
And blocking path of my house
With his sword on the way (Rai 11)

The speaker connects the letters and the alphabet to the image of the ruler here. In this sense, the primary stage of learning language is discursive: the ruling class constructs the mechanisms of language as an instrument of control over the indigenous marginalised people. Culturally constructed language is the ideologically contingent discourse of the state; it constructs a world view of the rulers that produces and defines knowledge, or truth, about the history of the ruling groups.
In the same way, the speaker finds it very difficult to understand the cultural mechanisms and structures of the state power. Consequently, he loses the creative age of his childhood because he finds himself unable to follow the structures and parameters of the language of the rulers. Moreover, he is unable to find a way to lead his life smoothly under the domination of the monolithic language of the state. So, he expresses his difficulty:

I could not sing the song
In the rhythm
At my first learning
Probably I lost my tender fellow
And did the first music of life

The speaker expresses the feelings of loss and frustration that the cultural and lingual domination and the discriminatory mechanisms of the state cause to the common indigenous people. The cultural discourse of the ruling class marginalises the language and culture of the indigenous people. In this condition, it is very difficult for them to move ahead and achieve success in life. Now, the poet deconstructs the existing cultural discourse and body of knowledge constructed by the state. He strongly resists the monolithic discourse of language that excludes the racial identity and existence of the indigenous people. In this marginalised situation, the speaker raises questions resisting the existing discourse of language, questions meant to change and challenge the cultural discourse of the state power. The cultural discourse of the state has never been questioned, neither in the past nor at present. So, the poet now raises questions to construct a counter discourse:

Neither at that time
Nor at present
Does anyone dare to ask Raute
Why not?

It is a resistance, a counter discourse against the established discourse of state power, a form of resistance to power. The poet reconstructs the discourse of the state through deconstructive ways, and this deconstruction of the discourse of state power opens a new avenue for seeing the objects in another way. The poem, a counter discourse from the margin, reconstructs and redefines the cultural values of the people at the margin. The poet deconstructs the lingual discourse of the state by raising questions about the monolithic discursive forms and body of knowledge that discriminate against and exclude racial identity. Discourse 'refers to a stretch of text or spoken utterances that cohere into a meaningful exposition . . . Discourse constructs, defines and produces the objects of knowledge in an intelligible way while excluding other forms of reasoning as unintelligible' (Barker, SAGE Dictionary 54). Reconstructing the discourse of the marginalised, the poet produces a new body of knowledge that redefines the existing reality with a new concept. The reconstructed discourse from the margin brings the marginalised groups to the centre. Now, they come to the position of the Subject, a set of regulated discursive meanings from which discourse makes sense; to speak is to take up the subject position of the marginalised people, who can define themselves and claim their identity from their own new world view. The poet further questions the state power as to why the language of the indigenous has been excluded and kept at the margin. He constructs a counter discourse by breaking the image of the ruling class in the central position with its identity of bravery and patronage of the nation. In this context, the speaker raises a voice from the margin concerning lingual discrimination:

What happened to the racial alphabets?
In case they were produced
By the innocent lips
Or could their inseparable nation be broken
That w . . .

The poet raises the issue of the inclusion and recognition of the indigenous language, which has always remained at the margin as if it were untouchable. The revolt against the domination of the state redefines the way 'a topic can be meaningfully talked about and reasoned about' (Foucault), reclaiming the marginalised culture and language of the indigenous people. The poet attempts to unfold the culture and language of the indigenous nationalities, which have long been suppressed. Millions of indigenous people have lost their lives and creativity due to exploitation and domination; they have lost their cultural and lingual identity due to the domination and imposition of the language of the rulers. As the people have remained under the domination of the state power and its cultural discourses, the speaker wants to recover the language and culture of his community, which now survives only in his memory, not in the existing discourse of the state. So, he claims:

Now I am excavating the very graves
I recall the lost music now
In the tunnel of the anarchical language
Of the government (Rai 12).

It is not only the loss of language and culture, but the loss of the history and identity of millions of indigenous people. Now, the poet resists the monolithic and racial discourse of the language of the state and reconstructs a discourse in order to unfold the lost history and identity. It is the body of knowledge constructed by the state that he reconstructs through the counter discourse against the ruling class. The poet has also reconstructed the history of the Tamang people, which has been kept at the margin in the history of the elite rulers. In the historical discourse of the state power, the hardships, struggles and physical labour of the poor Tamang people have been excluded. There is no documentation of the pains and sufferings that the Tamang people went through as they were compelled to sacrifice their lives in the service of the rulers, mainly the Ranas. In the same way, the poet has constructed the discourse of the Tamang people who still live in poverty in Bhimphedi, a village of the Tamang community. On the one hand, Bhimphedi is still backward and its people have become victims of marginalisation. On the other hand, the dignity and self-respect of the Tamang people are still in crisis, ignored by the state. In this context, the poet reconstructs the historical discourse of the elite rulers and constructs the discourse of the common Tamang people, representing the poor and backward condition of their life. The poet shows how the history of the state power mentions only the glorious deeds of the Ranas, who for the first time brought a motor car into the country, as if it were a glorious history. The poet writes:

Only the things are mentioned
That is Shree 3 Ranaji
That was brought the first time
In the nation of Rana (Rai 78)

The historical document of the state is monolithic: it mentions only the Rana rulers who first brought the motor car into the country. It is the discourse of the state that ignores and excludes the common people who sacrificed their lives in the service of the rulers. It does not mention the physical sufferings, pains and miserable deaths of the innocent Tamang people in the service of the Ranas, carrying the vehicle from Bhimphedi to Kathmandu.
The discursive formation of the poet constructs the history of the marginalised common people by depicting the image of the carrying of the motor car:

A black and white picture a century ago
In which are the thin people gathering
Like those of the ants in caravan
And dragging the motor of Shree 3
In a Doli
Like a single file procession
Who carry the corpse (Rai 77)

The poet has unfolded the historical deeds and hard labour of the innocent Tamang people. Unfolding the excluded history of the common people constructs the discourse of the marginalised group. In the changed socio-political context (a democratic republic has been established), the poet reconstructs the historical discourse of the state and produces knowledge that redefines its existing discourse. The poet brings the historical moments of the marginalised to the centre through the construction of discourse from the marginal perspective. The poet further unfolds the historical document of the common people, which has been excluded from 'the forms of representation, conventions and habits of language use producing specific fields of culturally' located meanings (Brooker 78). The common people laboured in the service of the rulers, but this is mentioned nowhere in the history. This representation constructs a new discourse and meaning from the perspective of the marginalised people. The poet unfolds the ignored but painful history of the common people:

Nowhere is mentioned ---
How many times they were beaten
By the whips
Onto their naked backbone
How many of them were sacrificed
Red tongue?
And how many days it took
To bring motor in the valley? (Rai 78)

The poet here rewrites the historical moments, recounting the sufferings, pains, physical tortures and deaths that the common people, the subjects of the then rulers (the Ranas), went through while carrying the motor car. But all these historical conditions are not mentioned in the historical documents of the state. As the poet stands within power relations, he constructs discourse under the particular and determinate historical conditions under which statements are combined and regulated (Cultural Studies 101). The poet constructs this power discourse to bring the marginalised to the central position and to claim their existence and identity. In the same way, the poet represents the present condition of Bhimphedi and the backward lives of the Tamang people in its surroundings. Though everything has changed and become new, Bhimphedi remains in the same backward condition as it was centuries ago. Moreover, the dignity and self-respect of the Tamang community around Bhimphedi are still in a deteriorated condition and have been ignored by the state, even amid drastic changes in socio-political structures. Everything has been changed, but the lives of the Tamang and the condition of Bhimphedi have remained the same as they were a century ago. The poet reflects this poor condition:

Almost everything has been new
But there
Two things remained unchanged
One the old Bhimphedi still been
Shadowed by the Phulchoki
Two Shir of the Tamang nation
Who are still stoop for centuries (Rai 79)

It is the representation of the regionally marginalised Bhimphedi village, which has not been developed. It is still under the shadow of the centralised policy of the state. In the same way, the deserved dignity and recognition of the indigenous Tamang people have been ignored by the state power. Consequently, the dignity and self-respect of the Tamang community have not been raised. On the one hand, Bhimphedi is still far from development due to the policy of the state.
On the other hand, the state has still not provided equal social justice to the indigenous Tamang people. As they lack equal social justice, they lack a dignified life; they remain under the cultural and political domination of the state. The poet constructs the discourse of the indigenous Tamang people through the representation of this dominated and ignored community. The representation of backward Bhimphedi and of the culturally dominated Tamang people raises the issue of dignity and self-respect for the indigenous Tamang community. Obviously, it strongly claims dignity, self-respect and cultural identity for the marginalised indigenous people, particularly the Tamang community.

Conclusion

The poet has deconstructed the existing cultural discourse of the state power, in which the language, culture and historical realities of the marginalised indigenous people have been excluded. The discourse of the state is monolithic and mono-cultural, and the poet has resisted and revolted against it. For this, the poet has strongly questioned the existing body of knowledge and reconstructed it with a claim to the lingual and cultural identity of the indigenous people. As he constructs the discourse of the marginalised, it provides a new world view that defines and produces the objects of knowledge in another way, reclaiming the excluded lingual identity. In the same way, the historical document of the ruling class has been deconstructed, as it included only the glory of the Ranas. The poet constructs the discourse of the common people and their significant deeds in a particular historical condition in the service of the rulers. The new body of knowledge defines and represents the sufferings, pains, physical tortures and deaths that the innocent people faced in the service of the Ranas. The discourse of the common people claims dignity and self-respect for the long-dominated Tamang people. The poet has also brought Bhimphedi, a historical place, to the centre, as it remains backward, ignored by the state power. This article cannot cover all aspects of the selected poems due to the constraints of time and the scale of this study. It has therefore left aside other theoretical perspectives, such as deconstructive and Marxist approaches, that could be used to interpret the poems.
Like aggregation from unlike attraction: stripes in symmetric mixtures of cross-attracting hard spheres

Self-assembly of colloidal particles into striped phases is at once a process of clear technological interest (consider the possibility of realising photonic crystals with a dielectric structure modulated along a specific direction) and a challenging task, since striped patterns emerge under a variety of conditions, suggesting that the connection between the onset of stripes and the shape of the intermolecular potential has yet to be fully unravelled. Here, we devise an elementary mechanism for the formation of stripes in a basic model consisting of a symmetric binary mixture of hard spheres that interact via a square-well cross attraction. Such a model mimics a colloid in which the interspecies affinity is of longer range and significantly stronger than the intraspecies interaction. For attraction ranges sufficiently short compared with the particle size, the mixture behaves like a compositionally disordered simple fluid. Instead, for wider square wells, we document by numerical simulations the existence of striped patterns in the solid phase, where layers of particles of one species are interspersed with layers of the other species; increasing the attraction range stabilises the stripes further, in that they also appear in the bulk liquid and become thicker in the crystal. Our results lead to the counterintuitive conclusion that a flat and sufficiently long-ranged unlike attraction promotes the aggregation of like particles into stripes. This finding opens a novel way for the synthesis of colloidal particles with interactions tailored to the development of stripe-modulated structures.

Introduction

Self-assembly into striped structures is observed in a variety of systems, many of which belong to the realm of soft matter, including, among others, colloidal dispersions,[4-6] active materials,7 liquid crystals,8 amphiphilic mixtures,9 core-corona systems,10 and magnetic particles.11 In the celebrated ANNNI model,12 which is the simplest (spin) model where stripes are present, spatially modulated phases are induced by superimposing on a short-range ferromagnetic interaction a directional antiferromagnetic interaction of longer range.[14-21] On the other hand, it has been shown[22-25] that 'lanes' or lamellae may even appear in one-component fluids interacting via purely repulsive potentials, typically modeled by a combination of hard-sphere (HS) and square-shoulder interactions. At variance with the ANNNI model, the other examples of stripes imply a genuine spontaneous symmetry breaking: the number density acquires a modulation along a definite direction which is not apparent in the system Hamiltonian.[28-31] In principle, mixtures of colloidal particles with different optical characteristics can find application in photonic devices, sensors, and even microchips.32,33 In particular, a binary mixture self-assembling into a striped crystal can be assimilated to a photonic crystal with dielectric properties modulated along a specific direction. In a completely different context, striped patterns have been observed in two-component assemblies of living cells, both in vitro34,35 and in silico.36,37 For example, in ref. 36
tegumentary cells of zebrafish are modeled as hard disks of two different types, interacting through Hookean forces; upon tuning the sign and strength of the couplings, a wealth of two-dimensional patterns is obtained, including stripes.

When modeling binary mixtures, one has to fix the like interactions (between particles of the same species) and the unlike, or cross, interaction (between particles of different species). In ref. 27 and 29 the like interactions are HS plus square-shoulder potentials, whereas the cross interaction is a square-well (SW) potential. In ref. 31 the like interaction is a HS potential for one species and HS plus a SALR (i.e., short-range attractive and long-range repulsive) potential for the other species, while the cross interaction is modeled by a SW potential. Finally, in ref. 30 the like interactions are of SALR type, while the cross interaction has an attractive tail. In such studies, the formation of stripes is usually ascribed to the like interactions.[39-41] However, ref. 31 suggests that the cross interaction too plays a role in the stabilisation of stripes, as the latter are only present when the SW range is long enough.

From the above considerations, it follows that in order to better understand the formation of stripes in mixtures it is crucial to disentangle the effects of the like and unlike contributions, and to thoroughly investigate the impact of the cross interaction. To this aim, we examine a binary mixture of colloidal-like particles where the like interactions are simply HS, while the cross interaction features, in addition to a hard core, a SW of variable width. A SW interaction makes it possible to precisely define and tune the range of the interspecies attraction. To exclude the possible role of size and/or composition asymmetry in the formation of stripes, as in ref. 29, we consider an equimolar mixture of HS particles with equal diameters. This model is meant to represent a mixture of two colloidal particles of similar sizes, having some degree of interspecies affinity. When modeling the cross interaction by a SW potential, the range of the attraction considered is usually very small,[42-46] due to the short-range nature of colloidal interactions.47 On the other hand, there exist conditions where the mutual forces can be long-ranged even in colloidal solutions, for instance when screened Coulomb interactions act. Therefore, in investigating the role of the SW range in the behaviour of the model mixture, we allow for an attraction range comparable to the sphere diameter, or even larger.

We perform our investigation by means of numerical simulations; specifically, we use Monte Carlo (MC) simulations in the canonical ensemble to investigate structural properties, while employing the Gibbs Ensemble MC (GEMC) method to study liquid-vapour coexistence. Crystalline order is probed by means of orientational order parameters, whereas the relevant solid structures are identified by zero-temperature total-energy calculations.[50-52] We find that for small values of the attraction range the mixture behaves like a simple fluid, with the two species mixed together at every density and temperature. On the other hand, as the SW range increases, compositional order eventually sets in: both the liquid and the solid acquire a patterned structure, where stripes (i.e., bands or layers) of type-1 spheres systematically alternate with stripes of type-2 spheres. Therefore, aggregation of particles of the same species may occur as the sole result of an attraction between particles of different species.
The remainder of the paper goes as follows: in Section 2 we provide details on the mixture and on the simulation and theoretical methods adopted. In Section 3 we present and discuss our results. Conclusions and perspectives follow in Section 4.

Models and methods

Our system is an equimolar mixture of identical hard spheres (1 and 2) with diameter σ, mutually interacting through a SW cross potential whose attractive well has width γσ:

$$u_{12}(r) = \begin{cases} +\infty, & r < \sigma \\ -\varepsilon, & \sigma \le r < (1+\gamma)\sigma \\ 0, & r \ge (1+\gamma)\sigma \end{cases}$$

where r is the interparticle distance. Throughout the paper, σ and ε are taken as units of length and energy, respectively. Therefore, the overall number density ρ is expressed in units of σ⁻³ and the temperature T in units of ε/kB, where kB is the Boltzmann constant. We carry out Monte Carlo simulations in the canonical ensemble using samples of N = 2048 particles, enclosed in a cubic simulation box with periodic boundary conditions. We typically perform up to 5 × 10⁶ MC cycles at equilibrium, one cycle corresponding to N elementary MC moves. To speed up relaxation to equilibrium, we also implement swap moves, in which the positions of two randomly chosen unlike particles are interchanged. This is necessary in order to achieve genuine equilibrium at moderate to high densities. The acceptance of all moves is ruled by detailed balance.

Liquid-vapour equilibria are obtained by GEMC simulations,53 using N = 1728 particles that are initially distributed evenly between two boxes with density ρ in the range 0.20-0.30. We typically carry out 10⁶ GEMC cycles, one cycle corresponding to N displacement moves plus one volume exchange plus a few hundred particle exchanges plus a few tens of swap moves (these numbers are just the mean relative proportions of the different kinds of trial moves, since at every step of the run the choice between the moves is made at random). Critical temperatures and densities are estimated by fitting the GEMC liquid-vapour coexistence points by means of the scaling law for the density difference and the law of rectilinear diameters.53

To obtain fast estimates of the structural and thermodynamic properties of the fluid mixture, we solve the Ornstein-Zernike (OZ) equation combined with the hypernetted chain (HNC) closure.48 For a binary mixture, the HNC approximation for the direct correlation functions reads

$$c_{ij}(r) = \exp\left[-\beta u_{ij}(r) + y_{ij}(r)\right] - y_{ij}(r) - 1,$$

where i and j take the values 1 and 2, β = 1/T, u_ij(r) is the interaction potential, and y_ij(r) = g_ij(r) − c_ij(r) − 1, g_ij(r) being the radial distribution functions. Density and concentrations enter the OZ-HNC theory through the diagonal matrix ρ_ij = ρ w_i δ_ij, where w1 and w2 are the concentrations of species 1 and 2, respectively (in the present analysis, w1 = w2 = 0.5). The OZ-HNC set of equations is solved using an iterative Picard algorithm on a grid of 8192 points with a mesh of 0.005σ. In the solution scheme, the HNC closure and the OZ equation are applied in real and reciprocal space, respectively, with the switch between the two implemented by Fast Fourier Transforms. Calculations are done in terms of the indirect correlation function y_ij(r), so that the k-to-r Fourier inversion is performed on a continuous function. A standard mixing of two consecutive iterations has been adopted to ensure the convergence of the algorithm. We assume that convergence is reached when the difference between old and new values of y_ij(r) is less than 10⁻⁴ between two successive iterations.
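As an illustration of the ingredients just described, the following minimal sketch implements the cross potential above and a Metropolis swap move in reduced units (σ = ε = kB = 1). The O(N²) energy loop, the simple-cubic demo configuration and all names are our own simplifications for brevity, not the production code behind the paper; a real run would use neighbour lists and the full move set.

```python
import numpy as np

rng = np.random.default_rng(0)

def u12(r, gamma):
    """HS + square-well cross potential (reduced units): hard core for r < 1,
    well of depth -1 for 1 <= r < 1 + gamma, zero beyond."""
    if r < 1.0:
        return np.inf
    return -1.0 if r < 1.0 + gamma else 0.0

def cross_energy(pos, species, box, gamma):
    """Total unlike-pair energy with the minimum-image convention. Like pairs
    (pure hard spheres) contribute zero in an overlap-free configuration."""
    E = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            if species[i] != species[j]:
                d = pos[i] - pos[j]
                d -= box * np.round(d / box)          # minimum image
                E += u12(np.linalg.norm(d), gamma)
    return E

# Overlap-free demo state: simple-cubic lattice, random equimolar species.
n_side, a = 4, 1.2
grid = np.arange(n_side) * a
pos = np.array(np.meshgrid(grid, grid, grid)).reshape(3, -1).T
box = n_side * a                   # box/2 = 2.4 > 1 + gamma for the gamma used below
species = rng.permutation(np.repeat([1, 2], len(pos) // 2))

def swap_move(species, T, gamma):
    """Metropolis swap of two randomly chosen unlike particles (detailed balance)."""
    i = rng.choice(np.flatnonzero(species == 1))
    j = rng.choice(np.flatnonzero(species == 2))
    E_old = cross_energy(pos, species, box, gamma)
    species[i], species[j] = species[j], species[i]
    dE = cross_energy(pos, species, box, gamma) - E_old
    if dE > 0.0 and rng.random() >= np.exp(-dE / T):
        species[i], species[j] = species[j], species[i]   # reject: undo the swap

for _ in range(50):
    swap_move(species, T=1.0, gamma=1.0)
print("cross energy per particle:", cross_energy(pos, species, box, 1.0) / len(pos))
```

At low T, the swaps drive the composition toward arrangements that maximise the number of unlike pairs inside the well, which is the energetic mechanism behind the stripes discussed below.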
Within the HNC theory we have determined the pseudo-spinodal line:54,55 for each fixed ρ, we reduce T gradually until the HNC iteration fails to converge. This occurs at a temperature T_PS where the isothermal compressibility is usually very large and rapidly increasing on cooling. The locus T_PS(ρ), i.e., the pseudo-spinodal line, can be taken as an approximation to the true spinodal line, which is where the isothermal compressibility diverges. In turn, the maximum of T_PS(ρ) provides a reasonable estimate of the critical temperature T_cr.

The structure of striped patterns has been characterised through a cluster analysis, performed using the Hoshen-Kopelman algorithm.56 In our study, two like particles are considered to be bonded together, and thus to belong to the same cluster, if their mutual separation is smaller than d_bond = 1.25σ. The cluster size distribution (CSD) is defined as

$$N(s) = \frac{n(s)}{\sum_{s'} n(s')},$$

where n(s) is the average number of clusters of size s in a given configuration; the distribution is normalised in such a way that $\sum_s N(s) = 1$.

Finally, to assess the structure of the mixture in its evolution from solid to liquid, we monitor some orientational order parameters and the pair entropy per particle, s2. For a binary fluid mixture, the latter is defined as57

$$\frac{s_2}{k_B} = -2\pi\rho \sum_{i,j} w_i w_j \int_0^{\infty} \left[ g_{ij}(r) \ln g_{ij}(r) - g_{ij}(r) + 1 \right] r^2 \, \mathrm{d}r .$$

While, strictly speaking, in a crystalline solid the definition of s2 would be different,58 the same formula valid for a fluid system is also employed for the solid, which generally results in a large negative s2 value.
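For concreteness, here is a minimal sketch of the bond criterion and of the CSD. It replaces the Hoshen-Kopelman algorithm with a plain union-find (the resulting clusters are identical) and adopts the normalisation N(s) = n(s)/Σ n(s'), consistent with the condition stated above. All function names and the toy configuration are ours.

```python
# Cluster-size distribution from a bond criterion: like particles closer than
# d_bond = 1.25 sigma are bonded; clusters are the connected components.
import numpy as np
from collections import Counter

def clusters(pos, box, d_bond=1.25):
    n = len(pos)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[i] - pos[j]
            d -= box * np.round(d / box)    # minimum image
            if np.linalg.norm(d) < d_bond:
                parent[find(i)] = find(j)   # merge the two clusters
    return Counter(find(i) for i in range(n))   # root -> cluster size

def csd(pos, box, d_bond=1.25):
    """Normalised cluster-size distribution N(s), with sum_s N(s) = 1."""
    sizes = Counter(clusters(pos, box, d_bond).values())   # s -> n(s)
    total = sum(sizes.values())
    return {s: n / total for s, n in sorted(sizes.items())}

# Toy demo: two well-separated 'stripes' of three like particles each.
pos = np.array([[x, 0.0, 0.0] for x in (0, 1, 2, 10, 11, 12)], dtype=float)
print(csd(pos, box=100.0))   # -> {3: 1.0}: two clusters, both of size 3
```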
Results and discussion

Emergence of stripes at low-to-moderate density

We preliminarily assess the performance of HNC in reproducing the fluid structure of the mixture as a function of T. For this purpose, in Fig. 1 we compare HNC and MC results for g_ij(r) (A) and S_ij(k) (B) for γ = 1, ρ = 0.2, and two different temperatures. For T = 2 the distribution functions show little structure (A), as can be expected for a homogeneous fluid at high temperature. Moving to T = 1.5, the contact value of both g11(r) and g12(r) increases, as does the height of the second peak of g12(r), due to the attraction between unlike species. The behaviour of the structure factors at low wavevectors is more interesting (B): looking at the picture, we realise that T = 1.5 is not far from liquid-vapour coexistence. As is clear, the HNC predictions for the fluid structure match quite well with the simulation data.

We anticipate that stripes occur spontaneously in the mixture, even at low density, provided that the temperature is sufficiently low. To illustrate this point, we first compute the liquid-vapour coexistence envelopes for a few values of γ using the GEMC method, while employing the HNC pseudo-spinodal line as a guide to drive the GEMC simulations to the relevant region of thermodynamic parameters. We emphasise that the HNC estimate of the liquid-vapour spinodal is usually expected to be good,[59-61] at least insofar as one-component fluids are concerned. GEMC liquid-vapour envelopes and HNC pseudo-spinodal lines are plotted in Fig. 2 for γ = 1, 1.5, and 2. In all cases, the shape of the coexistence curve is flat on top (the more so the smaller γ, consistently with the expected disappearance of the liquid for small enough γ) and asymmetric around the critical point. Numerical values of the critical temperatures and densities are reported in Table 1. As can be expected, the critical temperature increases with γ, since a longer-ranged cross attraction implies that critical density fluctuations develop at higher temperatures. Noticeably, in all the cases examined, the GEMC coexistence line lies above the HNC pseudo-spinodal line and the maxima of the two curves are nearly equal. Critical densities are less accurate, but this problem should be ascribed to the flatness of the coexistence curves, which makes the determination of this property more uncertain. For comparison, the HNC critical parameters are also reported in Table 1.
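The critical parameters in Table 1 follow, as anticipated in the methods, from the scaling law Δρ = B(1 − T/Tc)^β and the law of rectilinear diameters. A convenient trick is that Δρ^(1/β) is linear in T, so Tc is simply the zero of a straight-line fit. The sketch below illustrates the procedure on made-up coexistence data, with β fixed at the 3D Ising value; none of the numbers refer to the actual simulations.

```python
import numpy as np

T     = np.array([3.2, 3.4, 3.6, 3.8])        # subcritical temperatures (hypothetical)
rho_v = np.array([0.08, 0.10, 0.13, 0.17])    # vapour branch (hypothetical)
rho_l = np.array([0.62, 0.58, 0.53, 0.46])    # liquid branch (hypothetical)
beta  = 0.325                                 # 3D Ising order-parameter exponent

# (rho_l - rho_v)^(1/beta) = B^(1/beta) * (1 - T/Tc) is linear in T,
# so Tc is the zero crossing of a linear fit.
slope, intercept = np.polyfit(T, (rho_l - rho_v) ** (1.0 / beta), 1)
Tc = -intercept / slope

# The rectilinear diameter (rho_l + rho_v)/2 is linear in T as well;
# evaluating its linear fit at Tc gives the critical density.
diam = 0.5 * (rho_l + rho_v)
rho_c = np.polyval(np.polyfit(T, diam, 1), Tc)

print(f"Tc ~ {Tc:.2f}, rho_c ~ {rho_c:.2f}")
```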
Once the liquid-vapour equilibrium is worked out, we are in a position to investigate the behaviour of the mixture within the coexistence region by means of canonical MC simulations. In Fig. 3 we show a sequence of typical equilibrium configurations drawn for different values of γ and T at increasing densities.[63,64] Indeed, for ρ = 0.10 (A) the liquid forms a spherical droplet in equilibrium with vapour. For ρ = 0.20 (B) the shape of the droplet is cylindrical, whereas for ρ = 0.40 (C) the droplet shows a slab-like geometry. Finally, for ρ = 0.50 (D) a cylindrical hole appears in an otherwise liquid system. At each change of shape, the average energy per particle jumps downward, a behaviour reflecting the increase in the ratio of bulk to surface particles. As is well known, these geometric transitions are finite-size effects induced by periodic boundary conditions;65,66 they are observed in equilibrium in every system undergoing a phase separation.

As is clear from Fig. 3, for γ = 1 and T = 1 the two species are randomly mixed. The same is found for γ = 2 at T = 4, a slightly subcritical temperature. On the other hand, moving to γ = 2 and T = 2 the scenario changes drastically: as can be appreciated from the bottom panels of Fig. 3, the distribution of the two species within the droplet is no longer random, and patterned structures emerge spontaneously over the full range of densities. A comparison with the structures reported in panels A-D reveals that the sequence of shapes in panels E-H is identical, but type-1 and type-2 spheres are now distributed in alternating parallel stripes, a pattern that pictorially reminds us of the pigmentation of the tail of the Gila monster, the largest native lizard in the US.67

The evidence of a modulated composition for γ = 2 leads us to the conclusion that the aggregation of like particles into separate layers is promoted by a sufficiently long-ranged unlike attraction. The formation of stripes is driven by energy: planar stripes maximise the number of attracting unlike spheres, making the striped configuration more stable than its compositionally disordered counterpart. Strange as it may seem, stripes can be energetically preferred even when γ is close to zero, at least at very low temperature and very high density. Indeed, for γ = 1 stripes are not present in the liquid-vapour region, but they are stable in the solid phase, as we shall show in the next section. The argument goes as follows: take, for simplicity, the two-dimensional case. If γ is small, the SW attraction only reaches the first neighbours. In a triangular crystal with single-layer stripes, the central particle has four unlike neighbours (out of a total of six), while in the substitutionally disordered crystal the average number of unlike neighbours is three. Therefore, the energy (and also the enthalpy) of the striped crystal is more negative.
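The counting argument above is easy to verify numerically. The sketch below builds a periodic triangular lattice (stored in skewed coordinates, so that each site has six neighbours), colours it either in single-row stripes or at random, and reports the mean number of unlike nearest neighbours per site; the expected values are exactly 4 and approximately 3, respectively. The lattice size and representation are our own choices.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 24                       # L x L triangular lattice with periodic boundaries

def unlike_per_site(color):
    """Mean number of unlike nearest neighbours per site (6 neighbours each)."""
    # Neighbour offsets (row, col) for a triangular lattice in skewed coordinates.
    nbrs = [(0, 1), (0, -1), (1, 0), (-1, 0), (1, -1), (-1, 1)]
    count = 0
    for r in range(L):
        for c in range(L):
            for dr, dc in nbrs:
                count += color[r, c] != color[(r + dr) % L, (c + dc) % L]
    return count / L**2

striped = np.fromfunction(lambda r, c: r % 2, (L, L), dtype=int)   # alternating rows
disordered = rng.integers(0, 2, size=(L, L))                        # random equimolar

print("striped    :", unlike_per_site(striped))      # -> 4.0 exactly
print("disordered :", unlike_per_site(disordered))   # -> close to 3.0
```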
To further characterise the stripes observed in our system, we inquire into the distribution of particles inside the droplets, to ascertain whether it reveals some kind of spatial order. To this aim, we compute the orientational (Steinhardt) order parameters q4 and q6, which efficiently discriminate between a crystal and a dense liquid:68,69 while q4 and q6 vanish altogether in a bulk liquid, they are non-zero in crystalline environments.

In panels A1 and A2 of Fig. 4 we show the statistical distribution of q4 and q6 for γ = 2, various densities, and two temperatures. Data points refer to 1000 uncorrelated configurations taken from the last part of the simulation run. For T = 4 (A1), there is no significant density dependence and q4 and q6 are close to zero. Conversely, for T = 2 (A2) the values of q4 and q6 are clearly non-zero, suggesting a local crystalline structure, and therefore that a solid-vapour separation is taking place in the mixture. In this regard, we conclude that the triple temperature for γ = 2 lies between 2 and 4.

The local coordination of spheres within a droplet can be examined through the probability distribution of the number of bonds, shown in panels B1-B4 of Fig. 4. [Fig. 4 caption: (A1 and A2) Orientational order parameter q6 plotted as a function of q4 for γ = 2 and three densities (in the legend of panel A2). Each circle corresponds to a different configuration of the droplet. (A1) T = 4; small non-zero values in the liquid are a finite-size effect. (A2) T = 2; the slightly larger values of q6 and q4 at higher density reflect the parallel increase in the bulk-to-surface ratio. (B1-B4) Probability distribution of the number of bonds for γ = 2, resolved into like and unlike contributions (values of T and ρ in the legends). As expected, the 1-1 and 2-2 distributions are identical.] A peak appears in all the distributions, which is suggestive of crystalline order. In particular, the 1-1 and 2-2 distributions show a peak at N_b = 8, whereas the 1-2 peak falls at N_b = 4; hence, on average each sphere has 12 nearest neighbours, as expected for a close-packed configuration, and the majority of neighbours are of the same type as the central sphere, which in turn signals a tendency towards the local segregation of the two species. At variance with T = 4, at T = 2 the density plays no significant role.
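For reference, this is a minimal sketch of the per-particle Steinhardt parameter q_l used above, with neighbours taken inside a fixed cutoff (production analyses may define neighbours differently, e.g. via Voronoi constructions). We rely on scipy.special.sph_harm, which newer SciPy versions rename sph_harm_y with a different argument order; everything else is our own simplification.

```python
import numpy as np
from scipy.special import sph_harm   # deprecated alias of sph_harm_y in recent SciPy

def steinhardt_q(pos, box, l=6, r_cut=1.4):
    n = len(pos)
    q = np.zeros(n)
    for i in range(n):
        d = pos - pos[i]
        d -= box * np.round(d / box)                   # minimum image
        r = np.linalg.norm(d, axis=1)
        nbr = (r > 1e-9) & (r < r_cut)
        if not np.any(nbr):
            continue
        theta = np.arccos(np.clip(d[nbr, 2] / r[nbr], -1.0, 1.0))   # polar angle
        phi = np.arctan2(d[nbr, 1], d[nbr, 0])                      # azimuth
        # q_lm(i): average of Y_lm over the neighbours of particle i
        # (scipy convention: azimuthal angle first, polar angle second).
        qlm = np.array([sph_harm(m, l, phi, theta).mean() for m in range(-l, l + 1)])
        q[i] = np.sqrt(4.0 * np.pi / (2 * l + 1) * np.sum(np.abs(qlm) ** 2))
    return q

# Perfect fcc crystal vs. a random gas: q6 is large only for the crystal.
a = np.sqrt(2.0)                                       # fcc cube edge, NN distance 1
base = np.array([[0, 0, 0], [.5, .5, 0], [.5, 0, .5], [0, .5, .5]])
cells = np.array([[i, j, k] for i in range(3) for j in range(3) for k in range(3)])
fcc = a * (cells[:, None, :] + base[None, :, :]).reshape(-1, 3)
print("fcc    q6 ~", steinhardt_q(fcc, box=3 * a).mean())     # ~0.57 for ideal fcc
rng = np.random.default_rng(2)
gas = rng.uniform(0, 3 * a, size=(108, 3))
print("random q6 ~", steinhardt_q(gas, box=3 * a).mean())     # markedly smaller
```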
Having ascertained that for γ = 2 stripes form in a crystalline environment, an increase in the range of the SW attraction enhances the stability of the stripes in two respects: (1) they become robust to the configurational disorder of a liquid environment, at least provided that ρ is not too small, and (2) they get thicker in a solid-like droplet. The first statement is illustrated in Fig. 5, where a sequence of typical equilibrium configurations is shown for γ = 3 and four increasing densities at T = 5, i.e., well below the HNC pseudo-spinodal prediction for T_cr (≈12). In the figure, the sequence of shapes in A-C is akin to that of panels E-G of Fig. 3, except for the major difference that the droplet in equilibrium with vapour is now liquid, as we infer from the values of the orientational order parameters. As seen in the figure, stripes are evident in the slab-like droplet (C), whereas their presence is less clear in the cylindrical droplet (B). Admittedly, in the spherical droplet (A) the surface-to-volume ratio is still too high for the mixture to sustain stripes in the liquid phase. Stripes also form at ρ = 0.9 (see panel D), a state point at which the mixture is a bulk liquid, as witnessed, for example, by the small values of the orientational order parameters and of the isothermal compressibility.

As for the thickness of the solid stripes, we report in Fig. 6 two typical equilibrium configurations, for γ = 2 and T = 2 (A) and for γ = 3 and T = 4 (B), at fixed ρ = 0.5; for clarity, only the stripes of one type of spheres are shown. In both conditions, the mixture exhibits solid-vapour separation with a slab-like droplet. Seven stripes are counted for the state with γ = 2, whereas only five thicker stripes are seen for γ = 3. This finding will be further corroborated in the next section. For completeness, we mention that the case γ = 1.5 and T = 0.5 is the only one in which we have found undulating, rather than planar, stripes (see panel C). We can rule out an entropic effect: even though curved, the stripes keep perfectly parallel to each other. Probably, the stripes undulate just to fit the slab-like droplet at this particular density.

To summarise, the very existence and structure of the stripes depend on the balance between energetic and entropic effects: in this regard, the former are marginally relevant for γ = 1, where indeed the mixture is compositionally disordered (at least at moderate densities and not too low temperatures); as γ increases, stripes are first formed in the solid phase and then also in the liquid phase (Fig. 5C), even in bulk (Fig. 5D), and become progressively thicker. The existence of well-defined stripes in the liquid indicates that the configurational disorder (entropy) is insufficient to suppress the compositional order promoted by energy. Entropic effects are nevertheless important, to the extent that the surface separating two adjacent stripes in the liquid is corrugated rather than flat.

We close this section by looking at the existence of stripes from the complementary perspective of a cluster-size distribution (CSD) analysis, where by 'cluster' we mean a connected assembly of like spheres. To avoid the possibility that, owing to periodic boundary conditions, the observed stripes are actually part of a unique cluster, we adapt the cluster analysis accordingly. With this agreement, a striped phase (either solid or liquid) will ideally be characterised by a CSD with as many peaks as the apparent stripes, unless the latter are parallel to a box face, in which case the CSD has only one peak. On the contrary, a compositionally disordered phase will be described by a CSD with a prominent peak at a size of order N/2 (if the 'particle colour' percolates throughout the box) or with a broader peak at small sizes (if no such percolation occurs). Results for fixed ρ = 0.50 and different γ and T are reported in Fig. 7: for symmetry reasons, only the CSD of type-1 spheres is considered. For γ = 1 and T = 1 (liquid with a cylindrical hole, as in Fig. 3D) there is only one peak, at a size comparable to the total number of type-1 spheres, meaning that they form a single aggregate, which is consistent with the absence of stripes. For γ = 1.5 and T = 0.5, and for γ = 2 and T = 2 (solid slab in vapour), the existence of multiple CSD peaks indicates that the hard spheres are now arranged in many disconnected clusters (the stripes). For γ = 3 and T = 4 we still find a solid slab in vapour, but the behaviour of the CSD now points toward a smaller number of aggregates of larger size, a clear signal that the thickness of the stripes grows with γ.

Stability of stripes in the solid phase

In this section we elucidate why the stripes spontaneously emerging in our simulations actually reflect a tendency peculiar to the solid phase. To uncover the relevant crystalline phases of the mixture, we estimate the chemical potentials of a number of striped crystals and compare them with the chemical potential of a substitutionally disordered mixture (sdm), so as to confirm the preference of the system for stripe order. In order to settle the question in simple terms, we perform total-energy calculations at zero temperature, a condition for which the chemical potential is equal to the total enthalpy per particle (see, e.g., ref. 70), only considering the densest packings (fcc and hcp) and a few possible orientations of the stripes (alternating layers of type-1 and type-2 spheres). In fcc crystals, the layers are oriented perpendicularly to [001], [011], or [111] (these structures are denoted fcc001, fcc011, and fcc111, respectively). In the hcp case, the layers consist of (0001) lattice planes. For each lattice and high-symmetry direction (normal to the layers) we minimise the enthalpy of the mixture over the full range of densities, for each given pressure P.
Using as many particles as needed to make the calculation exact (up to ≈30 000 for the larger γ values), we find that the minimum enthalpy occurs at the highest density, ρ_max = √2, for every P, as the number of pairs of unlike spheres is always maximised at closest packing; hence, we take P = 0 in the following. The zero-temperature energy of a striped crystal is then plotted as a function of the number n_p of planes per stripe/layer. As far as the sdm is concerned, the total potential energy at T = 0 is averaged over ten different random equimolar compositions: it turns out that the enthalpy of the sdm as a function of pressure is again minimum for P = 0 (but the optimal density is no longer √2).

Results for striped crystals are reported in Fig. 8 for a few integer and half-integer values of γ; to remove any ambiguities related to the definition of u(r) at r = 1 + γ, we have chosen ρ = 1.41, slightly below ρ_max. We see that the optimal number n̄_p of planes per layer is an increasing function of γ, corroborating the findings of the previous section. However, due to the intrinsic discreteness of the problem, the increase of n̄_p with γ should actually be regarded as an average trend: for instance, when moving along the hcp branch, n̄_p decreases in the step from γ = 1.5 to γ = 2 (see the triangles in panels C and D). It is instructive to compare the T = 0 results for γ = 2 and 3 with the structures reported in Fig. 6. Therein, the stripes are fcc layers oriented perpendicularly to [111]. We see from Fig. 8D and E that the minimum energy for fcc111 is close to the deepest minimum, which instead occurs for hcp; therefore, it cannot be excluded that entropy considerations (and a lower density) will favour fcc111 at high enough T. Gratifyingly, the energy minima corresponding to fcc111 for γ = 2 and 3 are found for a number of planes equal to 2 and 3, respectively, i.e., exactly the same numbers observed in Fig. 6.

In Fig. 9 we compare the enthalpies, for P = 0 and ρ = 1.41, of the most stable striped solids with those of the sdm (of fcc or hcp structure). Observe that this comparison is sufficient for our purposes, since for P > 0 the enthalpy gap can only be larger. We see from the picture that the best striped solid is systematically more stable than the substitutionally disordered mixture, even for γ as small as 0.5, and its relative stability moreover grows with γ. This may seem in contrast with the outcome of our simulations for γ = 0.5, where for T = 0.5 and ρ = 1.1 we have found that stripes are rapidly wiped out by swap moves, whereas stripes made of single planes are found in simulation to be stable in hcp for γ = 1. The way out of this apparent contradiction is to recognise that the analysis made at T = 0 is only partially indicative of the behaviour of the mixture at non-zero temperature and not too high density. After all, the only kind of compositional disorder tested at T = 0 is the maximal disorder possible. The disorder observed for γ = 0.5 is indeed different: for example, in the hcp solid for T = 0.5 and ρ = 1.1, we see a predominance of like neighbours over unlike ones. In conclusion, stripes are absent for sufficiently small γ, and the minimum γ needed to observe stable stripes in the solid probably lies between 0.5 and 1.
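The zero-temperature comparison lends itself to a compact numerical check. The sketch below builds an fcc crystal slightly below closest packing (ρ = 1.41, as above, so that no coordination shell sits exactly at the well edge r = 1 + γ), colours it with [001] stripes of n_p planes or at random, and reports −(unlike pairs inside the well)/N, which at T = 0 is the energy per particle in units of ε. The lattice size and the grids of γ and n_p values are our own choices, so the numbers are only indicative of the trends discussed above.

```python
import numpy as np

rng = np.random.default_rng(3)
a = np.sqrt(2.0)                                    # fcc cube edge for NN distance 1
nc = 6                                              # cells per side; N = 4 * nc**3
base = np.array([[0, 0, 0], [.5, .5, 0], [.5, 0, .5], [0, .5, .5]])
cells = np.array([[i, j, k] for i in range(nc) for j in range(nc) for k in range(nc)])
pos = a * (cells[:, None, :] + base[None, :, :]).reshape(-1, 3)
box, N = nc * a, len(pos)

# Work slightly below closest packing (rho = 1.41 < sqrt(2)), as in the paper,
# so that no shell falls exactly at r = 1 + gamma.
scale = (np.sqrt(2.0) / 1.41) ** (1.0 / 3.0)
pos, box = pos * scale, box * scale

d = pos[:, None, :] - pos[None, :, :]
d -= box * np.round(d / box)                        # minimum image
r = np.linalg.norm(d, axis=2)

layer = np.rint(pos[:, 2] / (scale * a / 2)).astype(int)   # (001) plane index

def energy_per_particle(color, gamma):
    """-(number of unlike pairs with r < 1 + gamma) / N, in units of epsilon."""
    unlike = color[:, None] != color[None, :]
    inside = (r > 1e-9) & (r < 1.0 + gamma)
    return -0.5 * np.sum(unlike & inside) / N       # each pair counted twice

for gamma in (1.0, 2.0, 3.0):
    for n_p in (1, 2, 3):
        striped = (layer // n_p) % 2                # [001] stripes, n_p planes each
        print(f"gamma={gamma}  n_p={n_p}  striped E/N = "
              f"{energy_per_particle(striped, gamma):+.3f}")
    rnd = np.mean([energy_per_particle(rng.integers(0, 2, N), gamma) for _ in range(5)])
    print(f"gamma={gamma}  random colouring E/N = {rnd:+.3f}")
```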
Once we have identified the most relevant solid phases of the mixture, we go on to determine the phase boundaries of the solid phase: indeed, the characterisation of stripes would not be complete without identifying the thermodynamic conditions under which the compositional order of the mixture is stable, or is otherwise washed away upon, e.g., isothermal expansion. Our procedure to identify the phase boundaries of the solid is based on the numerical analysis of a few structural indicators, once a reasonable assumption about the structure of the stable solid has been made. This allows us to obtain the approximate freezing and melting lines of the mixture. We illustrate our scheme for the representative case γ = 2: for a few temperatures in the range from 2 to 5, we expand the mixture gradually, starting from ρ = 1.1 and reducing the density in steps of 0.01 (for each density, averages are made over 2 × 10⁵ cycles). Clearly, we do not know which striped crystal is stable at the various temperatures and densities: to keep it simple, at all temperatures we assume at ρ = 1.1 the same solid structure that is most stable at T = 0, namely, for γ = 2 (see panel D of Fig. 8), a striped fcc crystal with layers oriented perpendicularly to [001] and consisting of two planes each; we take for granted that for T ≥ 2 this solid is entropically favoured over hcp with n_p = 1. We emphasise that the compositional order set in the initial configuration is preserved during the simulation, i.e., it resists thermal disorder and swap moves, at least insofar as the system is a bulk solid. Stripes in the solid phase should eventually fade away as the temperature becomes sufficiently high. We have, however, not ascertained whether the disappearance of stripes corresponds to a true solid-solid phase transition or rather to a progressive rearrangement of particles inside the solid. Finally, it is also worth noting that swap moves are very efficient in probing the alleged stability of a striped solid: for example, for γ = 1, ρ = 1.1, and T = 0.5, if we start the simulation from a striped hcp crystal with n_p = 2, after about 10⁴ cycles the mixture spontaneously rearranges its composition to that typical of hcp with n_p = 1.

In order to assess the structure of the mixture during its evolution from solid to liquid and from solid to vapour, we monitor the values of q4 and q6, as well as the pair entropy per particle, s2.71 [Fig. 9 caption: Comparison, at zero temperature and pressure, between the enthalpies of the optimal striped crystals with fcc or hcp structure and the substitutionally disordered crystals of the same structure (sdm-fcc and sdm-hcp). Inset: same quantities as in the main figure, but the energy per particle is now referred to that of the optimal striped fcc crystal.] The latter is a sensitive probe of the overall translational order, much as the Steinhardt parameters are for orientational order. The onset of coexistence will be signalled by a net decrease of the orientational order parameters and a substantial decrease of the absolute pair entropy. We plot our data in panels A and B of Fig. 10.
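A minimal sketch of the pair-entropy estimator defined in the methods is given below; it evaluates the s2 integral by quadrature from tabulated g_ij(r). The analytic g(r) used here is a crude liquid-like stand-in, present only so that the sketch runs; in a real analysis the g_ij(r) would come from the MC histograms or from the HNC solution.

```python
import numpy as np
from scipy.integrate import trapezoid

def pair_entropy(r, g, rho, w):
    """s2 per particle (k_B = 1): -2 pi rho sum_ij w_i w_j int [g ln g - g + 1] r^2 dr."""
    s2 = 0.0
    for (i, j), gij in g.items():
        # g ln g -> 0 as g -> 0, so the integrand tends to 1 inside the hard core.
        integrand = np.where(gij > 0.0,
                             gij * np.log(np.where(gij > 0.0, gij, 1.0)) - gij + 1.0,
                             1.0)
        s2 += w[i] * w[j] * trapezoid(integrand * r**2, r)
    return -2.0 * np.pi * rho * s2

r = np.linspace(0.0, 8.0, 1601)
# Toy liquid-like g(r): zero inside the core, damped oscillations beyond contact.
toy = (1.0 + np.exp(-(r - 1.0)) * np.cos(7.0 * (r - 1.0))) * (r >= 1.0)
g = {(1, 1): toy, (1, 2): toy, (2, 1): toy, (2, 2): toy}
print("s2 =", pair_entropy(r, g, rho=0.8, w={1: 0.5, 2: 0.5}))
```

More negative values of s2 signal stronger translational order, which is why the estimator drops sharply upon entering the solid, as exploited below.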
For T ≥ 3, as r is progressively reduced, q_4 and q_6 remain roughly constant down to r = r_1(T); then, they start decreasing until reaching a density r = r_2(T) where they vanish abruptly (A). The same densities r_1 and r_2 mark significant changes also in the behaviour of s_2 (B), while inspection of the system configuration confirms that between r_1 and r_2 solid and liquid coexist. We assume that the loci r_1(T) and r_2(T) reasonably approximate the true melting and freezing lines, r_m(T) and r_f(T). The steady decrease of q_4 and q_6 between r_1(T) and r_2(T) is the result of the gradual increase of the liquid fraction in the mixture. A prominent exception is T = 3, where a hump is seen between r = 0.92 and 0.95: in this interval, the liquid droplet has the shape of a cylinder, while being spherical for higher densities. Below r = 0.92 the liquid is confined in a slab sandwiched by the solid. When the slab is sufficiently thin and the temperature is not too high, the liquid is striped like the solid, meaning that the compositional order of the solid propagates across the liquid. But when the slab becomes thicker, liquid stripes disappear altogether; indeed, no stripes are seen in the bulk liquid for r = 0.8 and T = 4. We should make g larger to see stripes also in the liquid phase (see Fig. 5D). As established in the previous section, for T = 2 the mixture falls below the triple temperature. In this condition, the expanding solid enters, for r ≈ 1.02, the region of coexistence with a vapour of almost vanishing density. In this region, the values of q_4 and q_6 remain only a little smaller than in the bulk solid, since most of the particles are bulk-solid particles, irrespective of the shape of the solid-vapour interface. However, a series of humps occurs in all order parameters as a function of r: each hump corresponds to a well-defined shape and size of the vapour inclusion, which is heralded, for densities near the surrounding minima, by the formation of a thin liquid film at the interface between solid and vapour. As r is reduced, a spherical vapour bubble first appears, followed by cylindrical and bicylindrical 64 bubbles. For still lower r, the solid droplet finally acquires a slab-like shape. An analogous fine structure is present in s_2, see Fig. 10B, where s_2 valleys correspond to maxima of q_4 and q_6, and vice versa. Combining all results together, we draw in Fig. 10C the phase diagram of the equimolar mixture for g = 2. Notice, in particular, that the triple temperature is only slightly less than 3.
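The threshold analysis sketched above can be expressed in a few lines. The following is a minimal illustration only: the synthetic q_6 data and the cutoff values are chosen for the example and are not taken from the paper.

import numpy as np

def coexistence_bounds(rho, q6, drop=0.05, floor=0.05):
    """Estimate r_1 (melting) and r_2 (freezing) from a q6-vs-density scan.
    rho is assumed sorted in decreasing order (gradual expansion).
    r_1: first density where q6 falls 'drop' below its solid plateau;
    r_2: first density where q6 has essentially vanished (< floor)."""
    plateau = q6[:5].mean()               # average over the densest states
    below = np.where(q6 < plateau - drop)[0]
    gone = np.where(q6 < floor)[0]
    r1 = rho[below[0]] if below.size else None
    r2 = rho[gone[0]] if gone.size else None
    return r1, r2

# Synthetic example mimicking the shape of Fig. 10A: plateau, decay, zero
rho = np.arange(1.10, 0.79, -0.01)
q6 = np.clip((rho - 0.90) / (1.02 - 0.90), 0.0, 1.0) * 0.45
print(coexistence_bounds(rho, q6))       # -> approximately (1.00, 0.91)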
Conclusions and perspectives
The origin of striped phases in colloids is currently the object of debate, since the connection between their onset and the intermolecular potential has not yet been elucidated in detail. Herein, we have addressed this issue by demonstrating that a system as simple as a binary mixture of identical hard spheres can have a striped structure, provided that a flat and sufficiently long-ranged cross attraction is set between the two species. In addition, stripes are the more pronounced the longer the attraction range, leading to the conclusion that like aggregation is promoted by unlike attraction. More precisely, we have used numerical and theoretical methods to study the phase behaviour of a symmetric binary mixture of hard spheres, interacting via a square-well (SW) attraction of variable range g between unlike spheres only. Despite the extreme simplicity of the model, we identify a variety of remarkable behaviours. For g = 1 or larger the solid phase exhibits a patterned structure, where planar stripes of particles of one species alternate with stripes of the other species. Starting from g ≈ 3, stripes also emerge in the bulk liquid. Occasionally, stripes are also present in the liquid coexisting with vapour or solid, even when they are absent in bulk.
Stripe order, i.e., a modulation of composition along a definite direction, comes totally unexpected in a mixture of hard spheres with isotropic cross attraction. It can be explained by energy considerations: planar stripes ensure the largest possible number of attractive contacts between unlike spheres (see our zero-temperature calculations in Section 1B).
Compared with other works, 27,29-31 our study clarifies that the presence of stripes essentially depends on the range of the cross attraction rather than on the specific interaction between particles of the same species. We emphasize that stripes in our model arise spontaneously. This is to be contrasted with previous studies of HS mixtures, where stripes are forced to occur by a suitable confinement 72 or a striped substrate. 73 Another form of compositional order under equimolarity, namely checkerboard order, which too has a low energy, could be safely excluded in the present case, since it would only be relevant for a crystal with a two-sublattice structure (which is not the case for fcc and hcp).
We surmise that, aside from the range, a nearly flat profile of the cross attraction could also be important in the stabilization of stripes: indeed, the freedom of particles to adjust their relative positions inside the well without affecting the energy gain is probably crucial in view of maximizing the number of contacts between unlike spheres. We postpone to a future study a thorough investigation of the role played by the shape of the cross attraction and/or a departure from equimolarity in making the mixture striped.
Our findings suggest new directions for the engineering of colloidal particles tailored to the appearance of stripe-modulated structures. According to our predictions, the emergence of stripes in the solid phase is expected above a threshold value of g lying between 0.5 and 1. Since this interaction range is close to real colloidal regimes, our model is open to an experimental implementation, so as to verify whether our results are predictive of real-life behavior.
Conflicts of interest
There are no conflicts of interest to declare.
Fig. 2 GEMC liquid-vapour coexistence points (full circles) with corresponding critical points (stars) for three different values of g, in the legend. Full lines are best fits of simulation data according to the law of rectilinear diameters and the scaling law for the density difference. The HNC pseudospinodal points (open circles) are also reported, with dashed lines as guides for the eye.
bonds, P(N_b), which is again computed for g = 2 and the temperatures 2 and 4. Similarly to what we did in ref. 31, two particles are said to be bonded together when, irrespective of the types, their separation is smaller than d_bond = 1.25 (we have checked that results are insensitive to the specific d_bond in the range from 1.15 to 1.4). In panels B1-B4 of Fig. 4 we show results at two different densities and temperatures, distinguishing between like and unlike pairs of particles. For T = 4 and r = 0.10 (B1), P(N_b) is monotonically decreasing, indicating that the vapour density is relatively high and most particles are isolated. For r = 0.30 (B2) the maximum of the distribution moves to N_b = 1, just because of the higher density, and again no long-range order occurs. Conversely, for T = 2 (B3, B4) a sharp
Fig. 5 Typical equilibrium configurations for g = 3 and T = 5, for a number of densities.
Fig. 6 Slab-like solid droplets for different g and T. In all cases r = 0.5. For clarity, only one species is shown.
Fig. 8 For a number of g values, we plot in separate panels the zero-temperature energy in units of the SW depth ε, see eqn (1), as a function of n_p for various striped crystals (in the legend of panel A). Notice that g = 2 is a degenerate case, since exactly the same minimum enthalpy belongs to hcp (n_p = 1) and fcc001 (n_p = 2).
Table 1 GEMC and HNC critical parameters, T_cr and r_cr, for three values of g
2023-05-13T15:19:55.063Z
2023-05-16T00:00:00.000
{ "year": 2023, "sha1": "b60f488484bbef59ddde1c9cbc6874c1b08ac37f", "oa_license": "CCBY", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2023/cp/d3cp01026k", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "5ae8fda41e418c08007e4a8b1df03f284a3eb762", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Medicine" ] }
4428419
pes2o/s2orc
v3-fos-license
Synthesis, Spectral Characterization and Antimicrobial Activity of Some Transition Metal Complexes with a New Schiff Base Ligand (BDABI)
A new Schiff base ligand (LI) has been produced by condensing isatin and 2,3-diaminobutane in a 2:1 molar ratio. The (LI) ligand has been isolated and characterized by 1H and 13C NMR, elemental analysis (CHN), UV-Visible, mass spectrometry and Fourier-transform infrared (FTIR) methods. Complexes of the metal ions cobalt(II), nickel(II) and copper(II) were synthesized with the ligand. The complexes were characterized by UV-Visible, FTIR, atomic absorption, molar conductance, magnetic susceptibility and elemental analysis (CHN) techniques. Octahedral geometries are suggested for the metal complexes based on the results of the physico-chemical and spectral techniques. TLC of the (LI) ligand and its complexes showed a single spot for each, signifying the purity of the compounds. All of these compounds were screened against two classes of human pathogenic bacteria, Gram-positive and Gram-negative.
INTRODUCTION
Isatin (1H-indole-2,3-dione) is a flexible lead molecule for potential bioactive agents 1. The chemical versatility of isatin has motivated the widespread use of this compound in biological synthesis 2. The isatin nucleus carries carbonyl groups at both the lactam and keto positions (2 and 3, respectively) and can either undergo addition reactions at the C=O bond or form condensation products with release of water. Through the NH group, the isatin compound series is amenable to N-acetylation and N-alkylation 3. Isatin belongs to one of the most versatile classes of aromatic heterocyclic organic compounds and shows many remarkable activity profiles in human subjects 4. The isatin ring is a major structural motif present in numerous pharmaceutically active compounds, essentially because of its straightforward synthesis and the significance of its pharmacological activity. Consequently, the selective functionalization and synthesis of isatins have been a central theme of papers reported over the years 5. Derivatives of isatin have become particularly important recently. They have been shown to be antibacterial and antifungal 6 agents of great interest owing to their wide-ranging spectrum of in vivo and in vitro therapeutic activities 7. Schiff bases are characterized by the azomethine group (-N=CH-), typically created by the condensation of primary amines with an active carbonyl group 8,9. Schiff bases form several important classes of biological compounds, since they contain different donor atoms and are capable of varied reactivity depending on the initial reagents. This category of compounds includes a broad diversity of organic features 10. Schiff bases find use in industrial chemistry and in the field of inorganic chemistry 11. They are also adopted as substrates in the preparation of some biologically active compounds by means of ring cycloaddition, ring closure, and substitution reactions 12. Furthermore, Schiff bases derived from diverse heterocycles have been reported in the literature to have cytotoxic 13, antimicrobial 14, anticancer 15, and antifungal activities 16. As a result of their great flexibility and varied structural characteristics, diverse Schiff bases have been synthesized and their complexation behaviour investigated 9. Schiff base compounds can be employed as ligands in coordination chemistry; they are usually bi-, tri- or tetra-dentate ligands able to form extremely stable complexes with transition metals. Tetradentate
ligands containing imine groups are utilized as modulators of the structural and electronic characteristics of transition metal centers 17,18. Schiff bases are characterized by their capacity to fully coordinate a metal ion, forming chelate rings with the ligand. They find application as analytical reagents for the determination of metals 17. Transition metal complexes based on Schiff base ligands have been studied for many years. Reports on Schiff base metal complexes continue to attract inorganic chemists as a consequence of their widespread fields and applications, their simplicity of synthesis and their use as models for organic systems 19.
In this study, a straightforward synthetic route to a Schiff base ligand from isatin and 2,3-diaminobutane and to its metal chelate complexes is presented. The formation of these compounds has been established through spectroscopic techniques.
EXPERIMENTAL
Instruments, materials and methods
All compounds were purchased from BDH and Fluka. FTIR spectra were measured in KBr on a Shimadzu spectrophotometer within the (4000-400) cm-1 range. Spectra in ethanol were measured using a Shimadzu UV-Visible spectrophotometer within the (200-1100) nm range with a 1 cm quartz cell. Melting points were determined on SMP30 electrothermal Stuart equipment. The electrical conductivities of the complexes were measured at 25 °C for 10-3 mol L-1 solutions in dimethyl sulfoxide (DMSO) using a WTW inoLab Cond 720 digital conductivity meter. Mass spectra were recorded on an Agilent 5975 mass spectrometer with a quadrupole analyser. 1H NMR and 13C NMR spectra were recorded in DMSO on a Bruker DRX (500 MHz) spectrometer. Chemical shifts are reported in ppm relative to internal Me4Si. Elemental microanalyses of the ligand and its complexes were performed using a Euro Vector-3000A. The materials and solutions used in the biological study were sterilized using a Gallenkamp autoclave, and the cultivated bacterial dishes were incubated in a Memmert incubator (854 Schwabach).
The metal content of the complexes was evaluated by atomic absorption using an Analytik Jena (AA350) atomic absorption spectrophotometer. Magnetic susceptibility measurements were made at room temperature by the Gouy technique on a Johnson Matthey system. Thin-layer chromatography (TLC) was performed on aluminium plates coated with silica gel (Fluka) and visualized with iodine.
Synthesis of the ligand (LI)
A solution of isatin (2.94 g, 0.02 mol) in absolute ethanol (30 mL) was added to a refluxing solution of 2,3-diaminobutane (0.88 g, 0.01 mol) in the same solvent (15 mL) in a 100 mL round-bottomed flask. A few drops of glacial acetic acid were added. The reaction mixture was heated under reflux at 80 °C for 6 h with continuous stirring. The colour of the solution changed from orange to light brown and subsequently the brown precipitate
Antimicrobial activity study
The antibacterial behaviour of the Schiff base and its metal complexes was monitored by the Kirby-Bauer disk diffusion method 20. This study used two strains of pathogenic bacteria: Staphylococcus aureus (Gram-positive) and Escherichia coli (Gram-negative). The chemical solutions used in the biological study were prepared using dimethyl sulfoxide (DMSO) as solvent, at a single concentration (C) of 1x10-3 M. The dishes were incubated at 37 °C for 24 hours. The inhibition zone diameter (mm) formed after 24 h was taken as the criterion for the intensity of the effect of the synthesized chemical compounds on the growth of the cultivated bacterial strains.
General
The Schiff base ligand (LI) forms yellowish-orange crystals, which are partially soluble in water and soluble in common organic solvents. The reaction of this ligand with the metal ions gives crystals of different colours. All complexes are reasonably air-stable, insoluble in water, but soluble in most organic solvents.
Physical characteristics and elemental investigation
The physical characteristics and the results of the C.H.N. analysis and metal content of the prepared compounds are given in Table 1. The analytical data were in agreement with the calculated values. The molecular formulae of the ligand and its metal compounds were proposed on the basis of these data together with those obtained from the spectra and from the magnetic susceptibility of the metal compounds. In every case, (1:1) metal-to-ligand solid complexes were isolated.
IR spectra
The infrared spectra give valuable details concerning the functional groups in the ligand, some of which bind to the metal ion 17,21. The IR spectrum of the free ligand (LI) showed an absorption band at 3266 cm-1 corresponding to the stretching vibration υ(N-H) of the isatin moiety 22,23,24; the location of this band stayed at almost the same frequency in the spectra of the metal complexes, signifying that this group is uncoordinated 25,26. The band at 1739 cm-1 in the free ligand spectrum, due to υ(C=O) of the isatin lactam 17,27, shifted towards lower values, around 1694-1680 cm-1, in the complexes, signifying coordination through the lactam carbonyl oxygen atom of the isatin residue 17,28. The band appearing at 1653 cm-1 in the free ligand, assignable to the υ(C=N) vibration mode 27,29, is moved to lower wavenumbers, with Δυ = (37-26) cm-1, in the spectra of the complexes, signifying the involvement of the azomethine nitrogen atom in coordination 21,28. The appearance of two new bands in the regions 482-472 and 457-439 cm-1 in the spectra of the complexes is due to υ(M-O) and υ(M-N) stretching vibrations, respectively 21,29; this further supports the proposed structures of the metal complexes. The characteristic IR data for all compounds are given in Table 2.
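The 2:1 condensation stoichiometry quoted in the synthesis, and the ligand formula mass used in the characterization below, can be cross-checked with a few lines; the atomic masses are standard values, and the script itself is only an illustrative sketch.

# Stoichiometry check for the 2:1 isatin : 2,3-diaminobutane condensation.
# Average atomic masses (g/mol); the Schiff base forms with loss of 2 H2O.
M = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def mass(formula):  # formula given as {element: count}
    return sum(M[el] * n for el, n in formula.items())

isatin = {"C": 8, "H": 5, "N": 1, "O": 2}        # C8H5NO2
diamine = {"C": 4, "H": 12, "N": 2}              # C4H12N2
water = {"H": 2, "O": 1}

print(2.94 / mass(isatin))    # ~0.020 mol of isatin, as weighed out
print(0.88 / mass(diamine))   # ~0.010 mol of diamine, i.e. a 2:1 molar ratio

# Ligand C20H18N4O2 = 2 x isatin + diamine - 2 x H2O
ligand = 2 * mass(isatin) + mass(diamine) - 2 * mass(water)
print(ligand)                 # ~346.4 g/mol, consistent with m/z = 346.7 below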
The 1H-NMR spectrum of (LI) in DMSO-d6 (Fig. 1) showed a singlet at δ 8.413 ppm and multiplet peaks at δ (7.517-7.761) ppm, attributed to the chemical shifts of the NH and aromatic protons of the isatin moiety 24,27,30. The doublet observed at δ (1.369, 1.397) ppm and the multiplet at δ (3.450-3.569) ppm were assigned to the C-CH3 30 and N-C-H 27,30 protons of the diaminobutane moiety of the ligand, respectively. The 13C-NMR spectrum (Fig. 2) displayed a peak at δ 172 ppm due to the lactam carbonyl group 25, while the carbon signal of the azomethine group (N=C) appeared at δ 168 ppm 23,24. The multiplet peaks at δ (122-146) ppm are due to the aromatic carbons of the isatin moiety 24. The signals at δ 64 and 20 ppm are assigned to the middle (N-CH) and terminal (CH3) carbons of the diaminobutane moiety 23,30.
Thin-layer chromatography (TLC)
Solutions of the ligand (LI) and its complexes in ethanol each appeared as a single spot, confirming that all of these compounds are pure and exist as a single isomer. Table 3 lists the Rf values for the ligand and complexes.
UV-VIS spectral studies
The electronic absorption bands and the conductivity values are presented in Table 4. The UV-Visible spectrum of LI in ethanol showed two absorptions at (240 and 291) nm (41666 and 34364 cm-1), which are due to π → π* transitions, and a broad low-intensity band at 422 nm (23696 cm-1), which is attributed to the n → π* transition 23,27.
Mass spectrum of the Schiff base ligand (LI)
The mass spectrum of the ligand (LI) (Fig. 3) showed a molecular ion peak at m/z = 346.7 34, corresponding to (C20H18N4O2). The other fragments are summarised in Table 5 and Scheme 2.
Antimicrobial activity
The response of the bacteria was studied. A strong biological effect of the Schiff base ligand and its complexes at the stated concentration was noticed against the pathogenic Gram-positive bacterium Staphylococcus aureus, whereas the Gram-negative bacterium Escherichia coli showed less response to the ligand and its complexes, being characterized by resistance to many chemical compounds and antibiotics 35. The reason for this resistance is that the coliform bacteria are single bacilli with a thick casing surrounding the cell. This casing contains a high proportion of lipid, which resists the entry of these materials into the cell, whereas Staphylococcus aureus does not have this property and is therefore less resistant to the penetration of chemicals and antibiotics into the bacterial cell 19,36. Generally, the complexes showed a stronger biological effect than the ligand (LI), although the ligand contains biologically retardant nitrogen and oxygen atoms 37.
The positive charge of the metal ion in the chelated complex is partly shared with the donor-atom orbitals of the ligand, and there is π-electron delocalization over the whole chelate ring, which reduces the polarity of the metal ion to a large extent. This in turn increases the lipophilic character of the metal chelate and helps its penetration through the lipid layers of the membranes of the microorganisms 38. The corresponding results are depicted in Table 6 and Figure
CONCLUSION
The Schiff base ligand, LI, coordinates with Co(II), Ni(II) and Cu(II) ions through the tetradentate carbonyl and azomethine groups, resulting in six-coordinate metal ions. The complexes have a M:L mole ratio of 1:1.
All of the complexes have octahedral geometries, as depicted in the 3D structures of Fig. 5 and Fig. 6. The biological activity results showed that all the compounds have a variety of antibacterial activities.
pattern. The progress of the reaction was monitored by TLC. After completion of the reaction and standing for approximately 24 h at room temperature, the resultant solid was collected by filtration, washed with absolute ethanol, dried in open air, purified by recrystallization from hot absolute ethanol and dried at ambient temperature. Scheme 1 shows the preparation of the ligand; yield: 79.2%, m.p.: 174-175 °C.
Synthesis of the (Co, Ni and Cu) complexes
To a solution of the ligand (LI) (0.346 g, 1 mmol) in 20 mL absolute ethanol, 1 mmol of the metal chloride in 20 mL ethanol (0.238 g CoCl2.6H2O, 0.237 g NiCl2.6H2O, or 0.170 g CuCl2.2H2O) was added. The solutions were refluxed for sixty minutes and then left to evaporate slowly to precipitate the complexes. The complexes were washed with a mixture of ethanol and distilled water (1:1) and recrystallized from hot absolute ethanol. The isolated complexes are brightly coloured solids, stable in air and insoluble in water but completely soluble in most organic solvents such as DMSO and DMF. A number of chemical and physical features of the synthesized ligand (LI) and its complexes are given in Table 1.
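A quick mole check of the weighed masses supports the 1:1 metal-to-ligand stoichiometry; the formula weights are standard values, and the script is only an illustrative sketch.

# Check that the weighed amounts in the complex syntheses correspond to
# ~1 mmol each (formula weights in g/mol are standard values).
fw = {"CoCl2.6H2O": 237.93, "NiCl2.6H2O": 237.69, "CuCl2.2H2O": 170.48,
      "LI (C20H18N4O2)": 346.39}
weighed = {"CoCl2.6H2O": 0.238, "NiCl2.6H2O": 0.237, "CuCl2.2H2O": 0.170,
           "LI (C20H18N4O2)": 0.346}
for name, grams in weighed.items():
    print(f"{name}: {1000 * grams / fw[name]:.2f} mmol")
# each comes out ~1.00 mmol, consistent with the (1:1) metal-to-ligand ratio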
2018-03-28T15:27:27.101Z
2018-02-05T00:00:00.000
{ "year": 2018, "sha1": "694f784c62191d5fee172cff6f11912e4878c4c6", "oa_license": "CCBYNCSA", "oa_url": "http://www.orientjchem.org/pdf/vol34no1/OJC_Vol34_No1_p_434-443.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "694f784c62191d5fee172cff6f11912e4878c4c6", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry" ] }
197664625
pes2o/s2orc
v3-fos-license
Inverted organic solar cells with non-clustering bathocuproine (BCP) cathode interlayers obtained by fullerene doping
Bathocuproine (BCP) is a well-studied cathode interlayer in organic photovoltaic (OPV) devices, where, for standard device configurations, it has demonstrated improved electron extraction as well as exciton blocking properties, leading to high device efficiencies. For inverted devices, however, BCP interlayers have been shown to lead to device failure, mainly due to the clustering of BCP molecules on indium tin oxide (ITO) surfaces, which is a significant problem during scale-up of the OPV devices. In this work, we introduce C70-doped BCP thin films as cathode interlayers in inverted OPV devices. We demonstrate that the interlayer forms smooth films on ITO surfaces, resulting from the introduction of C70 molecules into the BCP film, and that these films possess both improved electron extraction and exciton blocking properties, as evidenced by electron-only devices and photoluminescence studies, respectively. Importantly, the improved cathode interlayers lead to well-functioning large-area (100 mm2) devices, showing a device yield of 100%. This is in strong contrast to inverted devices based on pure BCP layers. These results are founded on the effective suppression of BCP clustering by C70, along with the electron transport and exciton blocking properties of the two materials, and thus present a route for the integration of BCP as an interlayer material in up-scaled inverted OPV devices.
the acceptor, through an ideally zero energy barrier at that interface 19. Potentially, such layers also provide a transparent spacer to optimize the optical field distribution within the active layer, and thus enhance the OPV performance even further [20][21][22]. Peumans et al. 23 introduced for the first time a BCP (2,9-dimethyl-4,7-diphenyl-1,10-phenanthroline) layer as a combined EBL and ETL between the electron acceptor and the cathode in bilayer, standard-configuration OPV cells based on CuPc and the fullerene C60. Bathocuproine (BCP) is widely used as an ETL and EBL material in OPVs. BCP has a highest occupied molecular orbital (HOMO) level at 7.0 eV and a lowest unoccupied molecular orbital (LUMO) level at 3.5 eV 24,25. Despite the relatively high-lying LUMO level, BCP efficiently transports electrons from the acceptor to the cathode due to a BCP-metal complex, formed when the metal cathode is evaporated on top of the BCP layer 26. Additionally, BCP works as an EBL and electron-selective contact due to its relatively low-lying HOMO level of 7.0 eV compared to the HOMO level of, e.g., C70 at 6.1 eV 27. The exciton blocking properties of BCP result in a higher charge generation yield at the D-A interface, which in turn leads to enhanced device performance through enhanced short-circuit current densities [28][29][30]. Work reported by Gommans et al. has documented that BCP can also act as an optical spacer layer, to best exploit optical interference effects in OPV cells 31. Several studies have focused on the function of BCP as ETL and EBL in OPV devices with standard device architecture 24,31,32, highlighting the aforementioned properties. In our previous study, an area-dependent behavior of BCP used as ETL and EBL in inverted OPV devices was reported 33.
It was observed that while scaling up the OPV device area, the performance and device yield of the inverted OPV devices decrease significantly compared to standard-configuration cells, which was demonstrated to be due to the clustering of BCP on ITO surfaces 33,34. While BCP on small device areas works well as both EBL and ETL, the probability of BCP clusters penetrating the active layer (approx. 50 nm thick in that study) increases with increasing device area. This potentially results in electrical shunting of the inverted OPV devices, which dramatically decreases the device yield for up-scaled cells. In recent work, this has also been demonstrated to lead to faster degradation of inverted OPV devices based on pure BCP ETL and EBL layers 35. The integration of Ag-doped BCP layers in inverted OPVs as buffer layers has previously been reported 36,37. However, although these layers provide improved electrical properties, Ag-doped BCP may lead to unwanted exciton quenching processes in the fullerene acceptor layer and thus deteriorate the device performance 37. Such quenching processes between metals and adsorbed molecules are well-known 38. Incorporation of interlayers or buffer layers fabricated from a blend of two or more organic materials is a common practice in OPV devices. The blended layers potentially improve the device performance by enhancing the electrical properties at the respective interface (interlayers), and/or the optical properties of the devices (interlayers or buffer layers) 17,39,40. Bartynski et al. used a blend of C60 and BCP as ETL and EBL in standard OPV devices, which improved the electron conductivity while efficiently blocking excitons and reducing exciton-polaron recombination 27. Furthermore, Xiao et al. reported that a blend of BPhen:C60 increases the electron conductivity and also decreases exciton recombination effects in the devices 41. Liu et al. used a BCP:C60 layer as EBL and ETL in standard-configuration OPV devices to optimize the optical properties of the devices, and also the device lifetime 42. However, compared to C60, C70 offers higher stability upon air exposure 43, and also a higher conductivity 43, which may be beneficial when used as an interlayer material in organic photovoltaic devices. In this work, we studied bathocuproine:fullerene (BCP:C70) blends acting as combined EBL and ETL in inverted-architecture OPV devices, as sketched in Fig. 1a. The optimization of the BCP:C70 ratio as well as the thickness of the blend layer was investigated. The optimized BCP:C70 layers were employed in inverted OPV devices having active areas of up to 100 mm2, and the results were compared against inverted OPVs based on pure BCP layers. The investigation shows that the BCP:C70 blends suppress the clustering of BCP on top of ITO surfaces, leading to a significantly improved device performance, and especially device yield, for up-scaled inverted OPV devices. Figure 1 shows the inverted bilayer OPV device architecture studied in this work, having BCP:C70 as ETL and EBL, as well as a schematic energy diagram of the device stack drawn from literature-reported energy level values. DBP possesses a high optical absorption strength in the visible wavelength regime and a HOMO level at 5.5 eV 44, making it a good match to fullerene acceptors such as C60 and C70 11. Figure 2a shows an atomic force microscopy (AFM) image of 3 nm pure BCP deposited on top of an ITO-coated glass substrate.
The clustering of BCP occurs due to a large interface energy between ITO and BCP 33, and may take place immediately after BCP deposition even at room temperature 34. Such clustering can be explained by Ostwald ripening, in which some aggregates grow at the expense of others by adsorbing molecules from the surrounding surface area 45. At larger surface areas, the probability of forming clusters that cause device shunts is larger 33, making device upscaling more challenging in inverted OPV architectures. One possible solution to overcome BCP aggregation is to conduct co-evaporation with another organic small molecule in order to obtain smoother films 34. As shown in Fig. 2b, doping C70 molecules into the BCP film via co-evaporation is effective in preventing the aggregation of BCP molecules, resulting in a nano-grained surface on ITO.
Results and Discussion
As reported in our previous work, the optimized BCP ETL thickness for small-area inverted DBP/C70-based bilayer OPV devices is 1.5 nm 33, which was therefore chosen as the initial ETL thickness in this work. Optimization results from 2 mm2 inverted OPV cells with various ratios of the BCP:C70 ETL and EBL are listed in Table S1, showing that devices with a 2:1 ratio show a slightly higher fill factor (FF) with an average value of 55%, Voc with an average value of 0.82 V, as well as PCE reaching an average value of 2.28%. Even though the 2:1 blend films lead to reasonable device performance, an improvement in the short-circuit current density is not seen compared to reference cells, which is otherwise expected from the exciton blocking properties of the interlayer. This could be due to the relatively low thickness (1.5 nm) of the blend layer. As a next step, we turned our attention to optimizing the thickness of the ETL and EBL blend layer in 2 mm2 OPV devices. The JV characteristics and performance parameters of the OPV devices are shown in Fig. 3 and in Table 1, respectively. A summary of the performance parameters from Table 1 is plotted in Fig. 4. As shown in Fig. 4b, as the thickness of the BCP:C70 layer increases from 1.5 nm to 3 nm, the JSC also increases, which can be well explained by the exciton blocking properties of BCP 46. When increasing the thickness of the BCP:C70 layer above 5 nm, the device performance parameters decrease, and the JV curves show clear S-shape characteristics (Fig. 3). The S-shape could be attributed to charge accumulation close to the active layer and ETL interface 42,47,48. Charge accumulation close to the thicker ETL and EBL films could take place due to the non-ideal energy level alignment between C70 and BCP, although further studies are required to understand that interface in detail. Electron-polaron accumulation at the electron acceptor and blocking interface may lead to exciton-polaron recombination effects 20,31, a well-known cause of performance drops in OPV devices 49. In order to further elaborate on the electron transport properties of the 3 nm BCP:C70 blend ETL and EBL layer, electron-only devices (EODs) were fabricated. The structure of the EODs is shown in Fig. 5b, where 0 and 3 nm of BCP:C70 (2:1) blends were investigated. In the EODs, electrons were injected into the devices through the Ag electrode and extracted at the ITO electrode. The JV characteristics of the EODs with the 3 nm BCP:C70 ETL show an improvement in the electron extraction properties at the ITO electrode, compared to EODs without the combined ETL and EBL layer (Fig. 5a).
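The figures of merit quoted above follow the usual definitions, FF = Pmax/(Voc x Jsc) and PCE = Voc x Jsc x FF / Pin. The following small sketch computes them from a JV sweep; the diode-like JV curve is synthetic, chosen only to roughly match the reported Voc and Jsc, and is not the paper's data.

import numpy as np

def solar_cell_metrics(v, j, p_in=100.0):
    """Figures of merit from an illuminated JV sweep.
    v: voltage grid in V (increasing); j: current density in mA/cm^2,
    taken positive in the power quadrant; p_in: illumination in mW/cm^2."""
    jsc = np.interp(0.0, v, j)          # short-circuit current, at V = 0
    voc = np.interp(0.0, -j, v)         # open-circuit voltage (-j increases with v)
    p_max = np.max(v * j)               # maximum power point, mW/cm^2
    ff = p_max / (voc * jsc)
    pce = 100.0 * p_max / p_in          # percent
    return voc, jsc, ff, pce

# Synthetic diode-like JV curve with Voc ~0.82 V and Jsc ~5 mA/cm^2
v = np.linspace(0.0, 0.9, 200)
j = 5.0 * (1.0 - (np.exp(v / 0.12) - 1.0) / (np.exp(0.82 / 0.12) - 1.0))
print(solar_cell_metrics(v, j))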
Such improvements have also recently been demonstrated for pure, ultrathin BCP layers in small-area inverted OPV devices 33. To this point, the exact energy level alignment scheme across the ITO/BCP:C70 interface needs to be examined in detail to pinpoint the origin of the improved electron extraction properties. This highlights the importance of future photoelectron measurements to elucidate the detailed interfacial electronic structure and energetic alignment across the interface.
Photoluminescence (PL) intensity measurements were performed in order to elucidate the exciton blocking properties of the BCP:C70 blend layers. In Fig. 6a, the PL spectrum of the pristine C70 layer on ITO shows two peaks, a distinct narrow peak at around 690 nm followed by a broader peak at higher wavelengths, corresponding to characteristic electronic and vibrational modes of polycrystalline C70, as previously reported 50. The PL spectra show a significant increase in PL intensity from C70 when deposited on top of the 3 nm BCP:C70 layer, compared to reference stacks based on pure C70 layers. The increase in the PL intensity is attributed to the enhanced exciton blocking properties of the BCP:C70 blend layers, and thus minimal quenching at the ITO/C70 interface 44. The change in the relative intensity between the two peaks in the PL spectra can be explained by reduced quenching of specific vibronic transitions upon insertion of the new ETL. The reduced symmetry of C70 results in more allowed optical transitions and therefore significantly stronger absorption in the visible region compared to symmetrical C60 46. However, for the investigated ETLs, we have used ultra-thin interlayers of BCP:C70 (only 3 nm at 2:1 ratio, i.e., ~1 nm of C70). Hence, the impact of light absorption due to either C60 or C70 should be negligible in this case. This can be seen in Fig. 6b, where the transmittance spectra of the pure BCP as well as the C70-doped BCP:C70 (2:1) layer on ITO-coated glass are shown. Clearly, 3 nm of BCP or BCP:C70 (2:1) shows almost no change in optical transmittance, and thus negligible absorption when inserted as an ETL in the inverted device configuration used here. The blended ETL consists of a mixture of C70, which efficiently conducts electrons, and the wide-energy-gap bathocuproine (BCP), which blocks excitons, as demonstrated by our electron-only devices and the photoluminescence results. This ETL therefore appears to separate excitons and electrons at the blocking interface as an effective filter, blocking excitons from quenching at the cathode, while promoting electron extraction through the same interlayer in the devices. As demonstrated in Fig. 2, the BCP:C70 blend layer possesses a smooth surface without BCP aggregation, which otherwise is a main problem in employing BCP in large-area inverted devices, due to device shunting 33. As a final investigation, the optimized BCP:C70 blend layers were thus employed in cells with up-scaled device areas of 100 mm2, see Fig. 7. As a general observation, a reduction of JSC and FF (Table 2), and hence PCE, was observed when increasing the active area from 2 mm2 to 100 mm2, which in part can be understood from the increased ITO resistance for up-scaled areas 14,33,51. Devices with 3 nm BCP:C70 (2:1) show VOC and JSC of 0.85 V and 4.9 mA/cm2, respectively, but low FF values of 44% (Table 2).
This reduction in FF may be attributed to surface defects of the BCP:C70 layer, which may arise due to thickness variations in the very thin ETL and EBL layer. Increasing the BCP:C70 thickness leads to an enhancement of the FF, and devices with a 5 nm BCP:C70 (2:1) ETL and EBL show the highest fill factor (FF) values of 51%, and power conversion efficiencies (PCE) of 2.04% (Table 2). The performance of the OPV devices decreases significantly when the thickness of the BCP:C70 is increased to 10 nm. This can be explained by the increased series resistance and by exciton-polaron recombination 20,31 taking place at the acceptor and blocking layer interface. Initial aggregation could potentially also promote further recombination effects. For the large-area inverted OPV devices with the BCP:C70 (2:1) layers, the device yield was 100%, even for OPV devices with incorporated blends of up to 10 nm in thickness. This is notable when compared to inverted OPV devices based on pure BCP as ETL and EBL, where very low device yields for 100 mm2 cells are observed, mainly due to BCP clustering 33. Doping of BCP with C70 thus suppresses the clustering of the BCP molecules, resulting in smoother BCP:C70 ETL and EBL layers on ITO surfaces, giving rise to 100% device yields even for large-area devices.
Conclusion
In this work, the development of inverted organic solar cells using mixed bathocuproine:fullerene (BCP:C70) electron transport and exciton blocking layers has been demonstrated. Incorporation of C70 molecules into the BCP layer suppresses clustering of the BCP molecules, resulting in smooth layers on ITO surfaces, a prerequisite for using them as efficient ETL and EBL in inverted OPV device configurations. While electron-only devices demonstrate improved electron extraction in the cells, photoluminescence studies reveal strong exciton blocking properties of the interface layer. Combining these material properties leads to well-performing bilayer C70/DBP-based inverted devices, reaching power conversion efficiencies of up to 3.28%. While BCP clustering is known to be a severe problem for large-area OPV cells, leading to significant reductions in device efficiency and device yield, the novel interlayer leads to well-functioning large-area cells (100 mm2), reaching an impressive device yield of 100%. This work thus demonstrates a viable route for the use of the well-known interlayer material bathocuproine (BCP) in inverted OPV devices.
Materials and Device Fabrication. Pre-patterned ITO-coated glass substrates (Kintec Company, Hong Kong) were used for 2 and 100 mm2 cell-area OPV devices. The sheet resistance of the ITO was approximately 15 Ω/sq. The substrates were cleaned sequentially in an ultrasonic bath with detergent, deionized water, acetone and IPA (10 min each) and then blow-dried with a nitrogen gun. In the first step, OPV devices were fabricated on the cleaned ITO substrates with 2 mm2 cell areas. The BCP:C70 (Sigma-Aldrich, Germany) blend layers with 1.5 nm thickness and different ratios (1:1, 2:1 and 4:1) were grown by co-evaporation, simultaneously depositing from two sublimation sources at a base pressure of 3 × 10^-8 mbar. This was followed by 30 nm C70 at a growth rate of 0.2 Å/s and 20 nm DBP (Luminescence Technology Corp., Taiwan) deposited at 0.3 Å/s, without breaking vacuum between the steps.
Then, 10 nm of molybdenum oxide (MoO3) (Sigma-Aldrich, Germany) and 100 nm of silver (Ag) (AESpump ApS, Denmark) were deposited by thermal evaporation at a base pressure of 5 × 10^-7 mbar. The deposition rates for the MoO3 and Ag were 0.3 Å/s and 0.5 Å/s, respectively. In the second step, 2 mm2 OPV devices were fabricated using the optimized BCP:C70 ratio (2:1) with different thicknesses (1.5, 3, 5 and 10 nm). The deposition rates for the BCP and C70 were 0.2 Å/s and 0.1 Å/s, respectively. Finally, the optimized BCP:C70 blend layers were used for the fabrication of the up-scaled OPV devices (100 mm2 cell area). All deposition parameters of the other layers were kept the same as in the first step. Electron-only devices (EODs), having the structure shown in Fig. 5b, were fabricated by sandwiching the BCP:C70 mixed layers between the bottom ITO contact and the top C70 (100 nm)/BCP (10 nm)/Ag (100 nm) layers, using the same deposition rates as for OPV device fabrication.
Device Characterization. All characterizations were performed in an ambient environment. The current density-voltage (J-V) characteristics of the OPV devices were measured using a 2400 source measure unit (Keithley Instruments Inc., USA) and a class AAA solar simulator (Sun 3000, Abet Technologies Inc., USA). The J-V characteristics were measured by applying a voltage sweep from +1 to -0.5 V under a calibrated lamp intensity of 100 mW/cm2. Atomic force microscopy (AFM) images were taken using a Veeco Dimension 3100 scanning probe microscope. The JV characteristics of the EODs were measured by applying a sweeping voltage from +1 to -1 V using a Keithley 2400 source measure unit (Keithley Instruments Inc., USA). For photoluminescence (PL) intensity measurements of the ITO/C70 (100 nm) and ITO/BCP:C70 (3 nm)/C70 (100 nm) structures, a microscope objective (Nikon E Plan 50 × 0.75 EPL) on a fluorescence microscope (Nikon Eclipse ME600) connected to a Maya2000Pro spectrometer (Ocean Optics) was used to record the spectra. A mercury short-arc lamp with a filtered excitation wavelength centered between 330-380 nm was used as the excitation light source. Transmittance spectra were obtained with a Shimadzu 2700 spectrophotometer.
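Given the quoted co-deposition rates (0.2 Å/s for BCP and 0.1 Å/s for C70), the nominal 2:1 blend ratio and the layer deposition times follow directly. The sketch below treats the blend composition as set purely by the ratio of the rates, which is a simplifying assumption.

# Nominal co-evaporation arithmetic for the BCP:C70 blend interlayer.
rate_bcp, rate_c70 = 0.2, 0.1            # Angstrom / s, as quoted above
total_rate = rate_bcp + rate_c70         # 0.3 A/s of combined blend growth

for thickness_nm in (1.5, 3, 5, 10):
    t = thickness_nm * 10.0 / total_rate  # 1 nm = 10 A
    print(f"{thickness_nm} nm blend: {t:.0f} s "
          f"(nominal BCP:C70 ratio {rate_bcp / rate_c70:.0f}:1)")
# e.g. a 3 nm blend takes ~100 s at these rates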
2019-07-20T13:04:17.256Z
2019-07-18T00:00:00.000
{ "year": 2019, "sha1": "b7e03026edf6bd1030547ea3b501815cb7530945", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-019-46854-w.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b7e03026edf6bd1030547ea3b501815cb7530945", "s2fieldsofstudy": [ "Materials Science", "Physics" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
225100825
pes2o/s2orc
v3-fos-license
Diversity and the Splice of Life: Mapping the 17q12–21.1 Locus for Variants Associated with Early-Onset Asthma in African American Individuals
The introduction in 2005 of the genome-wide association study (GWAS) ushered in an era of disease gene discovery at an unprecedented pace and scale, touching every field in medicine. This is certainly true for asthma, in which in fewer than 15 years, more than 100 asthma risk loci have been identified, implicating a wide range of previously unrecognized biologic processes in asthma pathogenesis. Among the most notable is a common haplotype on chromosome 17q12-21, which has emerged as the most consistently reproducible locus for childhood asthma, conferring risk in populations of varying geographic and ethnic origin (1,2). Although conferring substantive main effects, the locus also interacts with key environmental factors, including tobacco smoke exposure, viral respiratory illness, vitamin D, and others, establishing 17q21 among the most impactful asthma loci identified to date (3). But how does this locus confer genetic risk? An inherent limitation of GWASs is that the association often cannot be narrowed to a specific disease-causing variant but rather to a broad region of strong linkage disequilibrium (LD), where alleles at multiple variants in close proximity to one another are highly correlated. This is the case for the 17q21 locus, a region spanning more than 100 kb, containing numerous highly correlated variants that exhibit statistically similar association with asthma, and containing multiple candidate genes. Which SNPs and genes are most relevant?
A broad armamentarium of genomic and experimental approaches has been called on to answer these questions. Expression quantitative trait locus mapping found strong association of the risk haplotype with increased expression of the following two 17q21 genes: ORMDL3 (ORMDL sphingolipid biosynthesis regulator 3) and GSDMB (gasdermin B) (1,4). Hierarchical functional fine mapping employing multiple techniques identified a causal functional variant (rs12936231) that increases ORMDL3 and GSDMB expression by disrupting the binding of the insulator protein CTCF (5). Chromatin immunoprecipitation sequencing (6) and CpG methylation studies (7, 8) found that the risk haplotype also confers broad epigenetic modification that contributes to the regulation of both genes (and others), providing linkage to the previously noted environmental factors. Definitive evidence implicating both genes in asthma pathogenesis came from transgenic mouse models developed by David Broide. ORMDL3 transgenic mice spontaneously exhibit physiologic and histologic features of asthma, including airway hyperresponsiveness, airway smooth muscle hypertrophy, and airway wall remodeling, all in the absence of airway inflammation (9). ORMDL3 plays a role in several asthma-relevant biologic processes, including intracellular calcium flux, the unfolded protein response, and sphingolipid metabolism (10, 11). What about GSDMB? Like the other five members of the gasdermin family of pore-forming proteins, GSDMB is activated after caspase-mediated cleavage and is a potent inducer of both inflammatory cell death (pyroptosis) and extracellular inflammatory cytokine release (11). Although GSDMB does not naturally exist in mice, Broide's group demonstrated that, like the ORMDL3 mouse, mice expressing a human GSDMB transgene display airway hyperresponsiveness and airway remodeling (12). Also like with ORMDL3, these changes were observed in the absence of accompanying airway inflammation. Though additional studies suggested a role for TGF-β1 in promoting the noninflammatory manifestations, the lack of airway inflammation in this model, given the prominent role of the gasdermins in inducing epithelial inflammation, is curious. Why would this be so? In this issue of the Journal, Gui and colleagues (pp. 424-436) seem to provide an explanation (13). They took a fresh look at the 17q21 locus by performing a comprehensive next-generation DNA-sequencing association study in 5,630 African American children. Genetic association studies in populations of African ancestry, in which genetic diversity is much greater compared with that observed in European populations (both in terms of the total number of variants and the extent of LD), offer two important advantages. First, the greater diversity in total variation ensures that a larger number of variants will be discovered by sequencing, with the potential of identifying novel disease-causing variants not observed in European populations. More importantly, because LD is considerably narrower in African American individuals, fine mapping in this population can help isolate functional variants to shorter DNA segments. Indeed, by using this approach, Gui and colleagues found that the asthma association localized most convincingly to a single variant (rs11078928) situated in a consensus splice site of exon 6 of GSDMB (Figure 1).
Although this variant is also present in Europeans, the shorter LD in this African American cohort around rs11078928 (4 kb) focused the association with asthma more narrowly on this variant over others. Subsequent peripheral blood RNA sequencing and expression quantitative trait locus studies found rs11078928 to be associated with alternative splicing of GSDMB, with the allele conferring lower asthma risk (the protective "C" allele) associated with increased expression of a GSDMB isoform lacking exons 6 and 7 (isoform 2). Panganiban and colleagues (14) previously found that this asthma-protective C allele induces the skipping of exon 6 in human bronchial epithelial cells and results in the production of a GSDMB protein (isoform 1) that is resistant to caspase-mediated activation and lacks pyroptotic activity. It is isoform 1 that was used to construct the GSDMB transgenic mouse, providing the following explanation for the absence of inflammation in that model: they evaluated an isoform that does not promote inflammation. From the totality of the evidence, two things are now clear. First, despite being asthma-protective relative to other isoforms, increased expression of GSDMB isoforms that lack pyroptotic potential nonetheless promotes the development of noninflammatory airway manifestations of asthma, suggesting alternative GSDMB functions. More importantly, however, they argue for the evaluation of additional murine models that employ the GSDMB isoforms associated with increased asthma risk (i.e., those containing exon 6, whose expression is increased in the presence of asthma risk alleles) to adequately assess the role of GSDMB in asthma pathobiology. In addition to furthering our understanding of the 17q21 locus, Gui and colleagues offer two timely lessons. First, they provide a glimpse of what is to come as we leverage the full power of next-generation sequencing. The ability to assess all genetic variation at genome scale promises to propel the use of genetic approaches in pulmonary medicine to even greater heights than those achieved by GWASs. Second, Gui and colleagues remind us of the tremendous value of ethnic diversity in population genetic research. Although race is a purely social (not biologic) construct, divergent genealogical histories and mating patterns have resulted in important differences in variant distribution, allele frequency, and LD patterns. As shown here, these differences can be leveraged to facilitate gene discovery. Moreover, in a time when our society is attempting to confront the ills of racial discrimination, including the ongoing racial disparities in health care, it is imperative that future studies are more inclusive to ensure that all peoples benefit equally in the postgenome era.
The authors' research on the genetics and genomics of the 17q21 asthma locus is supported through grants R01 HL123546 and P01 HL132825 from the NHLBI of the NIH.
(3). In other words, HRRP is an effort to promote high-value care by reducing healthcare costs and utilization. Under the HRRP, hospitals with higher than expected readmissions of patients recently hospitalized for heart failure, pneumonia, or myocardial infarction received reduced Medicare reimbursements starting in October 2012. Chronic obstructive pulmonary disease (COPD) exacerbations were added to the list of HRRP penalty-sensitive conditions in October 2014.
Patients, front-line clinicians, and administrators have raised concerns about the appropriateness of 30-day readmissions as a quality measure for hospitals because hospital-based care is only one of many factors that contribute to posthospital outcomes (4). For example, limited access to high-quality posthospital care and patients' socioeconomic resources (e.g., social support, stable housing, transportation, and food) also contribute to readmissions (5). In addition, the published literature about how hospitals can safely prevent hospital readmissions is limited and contradictory, the International Classification of Diseases codes used for administrative purposes (e.g., reimbursement) may not be sufficiently sensitive or specific to reliably identify hospitalizations for COPD exacerbations, and, perhaps most importantly, it is unclear whether decreasing readmissions after a COPD exacerbation leads to excess postdischarge mortality (6). It is in this context that the study in this issue of the Journal by Puebla Neira and colleagues (pp. 437-446) offers important new information (7). Puebla Neira and colleagues conducted a retrospective cohort study of Medicare fee-for-service beneficiaries age 65 years or older using administrative billing codes from over 4.5 million COPD hospitalizations from 2006 to 2017. In this population, they report an all-cause in-hospital mortality rate of 3% and an all-cause 30-day posthospital mortality rate of 5.3%. The authors report the mean hospital-level risks of readmission and mortality after hospital discharge in the following three periods: the "preannouncement" period before the Affordable Care Act (December 2006 to March 2010), the "announcement" period when the HRRP was announced (April 2010 to August 2014), and the "implementation" period when hospitalization for COPD exacerbation was added as a penalty-sensitive HRRP condition (October 2014 to November 2017). Findings from the study by Puebla Neira and colleagues (see Table 3 and Figure 3 in Reference 7) suggest that 30-day all-cause hospital readmission rates dropped from 20.5% to 18.7% over the 11-year period from 2006 to 2017. Nearly all of the improvement in 30-day hospital readmissions among patients with an index hospitalization for a COPD exacerbation occurred before the inclusion of COPD in the HRRP in October 2014, presumably because changes in transitional care services from hospital to home for patients hospitalized for heart failure, pneumonia, or myocardial infarction
2020-10-29T09:07:56.813Z
2020-10-27T00:00:00.000
{ "year": 2021, "sha1": "a0208fa1b188f786d07922e65078e231dc44fab7", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1164/rccm.202010-3802ed", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "ca4660f435ebb43a9288a2d66ba8e622b2ddd6c8", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
106119538
pes2o/s2orc
v3-fos-license
ON DERIVATIVE UV-SPECTROPHOTOMETRY ANALYSIS OF DRUGS IN PHARMACEUTICAL FORMULATIONS AND BIOLOGICAL SAMPLES: A REVIEW

This review deals with the theoretical aspects of derivative UV-spectrophotometry. The method gains significance through the use of the first and second derivatives of transmission spectra with respect to wavelength; the optical derivatives so generated are compared with the known numerical derivatives. The derivative spectra from the 1st to the 4th order are then discussed, providing valuable insight into the uses and limitations of this technique for chemical analysis. Measurement techniques and methods of obtaining derivative spectra are described, as is the effect of the degree of polynomial fit on the smoothness of derivative spectra and on the signal-to-noise ratio. The application of derivative UV spectrometry to single- and multicomponent analysis is presented, and the ability of derivative spectrophotometry to improve the selectivity and sensitivity of determinations is illustrated.

INTRODUCTION
Derivative UV-spectrophotometry is an analytical technique of considerable importance, commonly used to obtain both qualitative and quantitative information from spectra composed of unresolved bands; it uses the first or higher derivatives of absorbance with respect to wavelength [1]. Derivative spectroscopy was originally introduced in the 1950s and promised applicability in many areas, but because of the difficulty of producing derivative spectra with the UV-Visible instruments of the time, the method saw little practice. This weakness was overcome in the 1970s with microcomputers, which generated derivative spectra in a more precise, simple, rapid and reproducible way. This broadened the applicability of the derivative method; derivatization of spectra improves selectivity by eliminating spectral interferences [2-3].

Derivative Spectroscopy
It is a spectroscopic technique that differentiates spectra, mainly in IR, UV-Visible absorption and fluorescence spectrometry [4]. The objectives for which derivative methods are used in analytical chemistry are:
• Spectral differentiation
• Spectral resolution enhancement
• Quantitative analysis

Spectral differentiation: As a qualitative method, it distinguishes small variations between almost identical spectra.
Spectral resolution enhancement: Overlapping spectral bands are resolved, simplifying the estimation of the number of bands and their wavelengths.
Quantitative analysis: It facilitates multicomponent analysis and corrects for irrelevant background absorption. The derivative method rests on the differentiation, or resolution, of overlapping bands; the essential characteristic of the derivative process is that broad bands are suppressed relative to sharp bands [4].
Measurement Techniques of Derivative Spectroscopy
Differentiation of the zero-order spectrum of a mixture of components leads to a derivative spectrum of any order. Several methods are used for the differentiation of a spectrum, namely analog or numeric methods; spectral differentiation may be carried out either graphically on paper or on a spectrum registered in computer memory [5]. Measurement of derivative spectra is performed by three methods: graphic measurement, numeric measurement and the zero-crossing technique.

Graphic measurement: Graphic measurement is a manual method for calculating the derivative spectrum on paper. It suffers from the disadvantage of giving inaccurate results, because values that could be determined numerically may be lost or distorted [5].

Numeric measurement: The method uses a set of data points at which the derivative is evaluated, estimating the derivative value at each given wavelength. It produces derivatives by spectral differentiation using a suitable numerical algorithm [5].

Zero-crossing technique: The method measures the derivative spectrum at the particular wavelength where the derivative of one component crosses the zero line. The interference of one component in the determination of another can thereby be eliminated [5].

Derivative Spectra
In quantitative analysis, derivative spectra enlarge the differences between spectra and resolve overlapping bands [6]. The digital algorithm of Savitzky and Golay is the most frequently cited method for obtaining derivative spectra. In general, the technique involves plotting the rate of change of the absorbance spectrum versus wavelength [7]. Derivative spectra can be obtained by a variety of experimental techniques; the differentiation can be done numerically whenever the spectrum has been recorded digitally or in computer-readable form. When a spectrum is scanned at a constant rate, real-time derivative spectra can be recorded either by taking the time derivative of the spectrum or by wavelength modulation [8]. In a wavelength-modulation device, a beam of radiation is varied in wavelength over a small interval (1-2 nm) and the difference between the two readings is recorded; computerized methods are now widely used to obtain derivative curves. Quantitatively, for second- or fourth-order derivative curves, peak heights are measured relative to the long-wave or short-wave satellite peak [9]. The complexity of derivative spectra increases with the presence of satellite peaks. Second-derivative spectra are characterized by two sharp peaks and troughs. Solvents have a marked effect on the peaks [10]: depending on solvent polarity, peaks and troughs shift to shorter or longer wavelengths (Fig. 1).
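To make the numeric-measurement route concrete, here is a minimal sketch that applies the Savitzky-Golay algorithm cited above to a synthetic Gaussian absorbance band. The band parameters, window length and polynomial order are arbitrary illustrative choices, not settings taken from the review:

```python
# Minimal sketch: numeric measurement of derivative spectra with the
# Savitzky-Golay algorithm; band and filter settings chosen arbitrarily.
import numpy as np
from scipy.signal import savgol_filter

wl = np.arange(220.0, 321.0, 0.5)                          # wavelength grid, nm
A = 1.0 * np.exp(-((wl - 270.0) ** 2) / (2 * 8.0 ** 2))    # Gaussian band

delta = wl[1] - wl[0]
# deriv=1 and deriv=2 return dA/dlambda and d2A/dlambda2 directly;
# the same local polynomial fit also smooths the spectrum.
d1 = savgol_filter(A, window_length=11, polyorder=3, deriv=1, delta=delta)
d2 = savgol_filter(A, window_length=11, polyorder=3, deriv=2, delta=delta)

# The first derivative passes through zero at the absorbance maximum,
# which is the basis of the zero-crossing technique.
i = np.argmax(A)
print(f"lambda_max = {wl[i]:.1f} nm, dA/dlambda there = {d1[i]:.2e}")
```

As the text notes, the same differentiation can also be achieved in hardware by wavelength modulation; the digital route sketched here is the one taken by modern computerized instruments.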
Obtaining the Derivative Orders
Derivative spectroscopy converts a normal (zero-order) spectrum into its first, second or higher derivative spectrum, which produces considerable changes in the shape of the curve obtained. Appropriate selection of the derivative order gives useful separation of overlapped signals. The optimal derivative order is chosen according to criteria such as signal height, signal width and the distance between maxima in the basic spectrum: for wide spectral bands low orders are preferred, and for narrow spectral bands higher orders. A Gaussian band, representing an ideal absorption band, gives a clear picture of the transformations occurring in the derivative spectra. Plotting absorbance versus wavelength gives a graph showing a peak with maxima and minima (and points of inflection) that passes through zero on the ordinate [10] (Fig. 2).

Zero-order derivative spectrum: The zero-order spectrum is the starting point for all further derivatives, i.e., the zeroth-order spectrum can yield the nth-order derivative. In derivative spectroscopy, the D0 (zeroth-order) spectrum is simply the normal absorption spectrum [12]. The 1st-, 2nd-, 3rd- and 4th-order derivative spectra can be obtained directly from the zeroth-order spectrum, and an increase in derivative order increases the sensitivity of determination [14]. If a spectrum is expressed as absorbance A as a function of wavelength λ, the zero-order spectrum is A = f(λ).

First-order derivative spectrum: This spectrum is obtained by differentiating the zero-order spectrum once. It is a plot of the rate of change of absorbance with wavelength against wavelength [10], dA/dλ = f′(λ). Even in this derivatized form it is more complex than the zero-order spectrum. The first-order spectrum passes through zero at the λmax of the absorbance band and shows positive and negative bands with maxima and minima [6]. By scanning the spectrum with a small, constant difference between two wavelengths, a dual-wavelength spectrophotometer obtains first-derivative spectra [8].

Second-order derivative spectrum: Differentiating the absorbance spectrum twice gives this type of spectrum [7]. It is a plot of the curvature of the absorption spectrum against wavelength [16]. The second derivative is directly proportional to concentration; d²A/dλ² should be large, since the larger its value, the greater the sensitivity [8]. The method is useful for atomic and gas-phase molecular spectra.

Third-order derivative spectrum: Unlike the second-order spectrum, the third-derivative spectrum shows a dispersion-like function relative to the original curve [11].

Fourth-derivative spectrum: The fourth-order spectrum is the inverted spectrum of the second order and has a sharper central peak than the original band; narrow bands are selectively determined by the fourth derivative [9].

Polynomial degree
The polynomial degree has a greater impact on the number of polynomial points than on the shape of the derivative [5]. Spectra of large half-width are differentiated with low-degree polynomials and spectra of small half-width with higher-degree polynomials [5]. An inappropriate polynomial degree results in a distorted derivative spectrum. In multicomponent analysis, the spectral differences between assayed compounds, and hence their selective determination, can be increased by the use of different polynomial degrees [2].
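As a worked example of these shape changes, consider the ideal Gaussian band invoked above. The expressions below are standard calculus applied to that band, not formulas taken from the review:

```latex
A(\lambda) = A_0 \exp\!\left(-\frac{(\lambda-\lambda_0)^2}{2\sigma^2}\right)
\qquad
\frac{dA}{d\lambda} = -\frac{A_0(\lambda-\lambda_0)}{\sigma^2}
      \exp\!\left(-\frac{(\lambda-\lambda_0)^2}{2\sigma^2}\right)
\qquad
\frac{d^2A}{d\lambda^2} = \frac{A_0\left[(\lambda-\lambda_0)^2-\sigma^2\right]}{\sigma^4}
      \exp\!\left(-\frac{(\lambda-\lambda_0)^2}{2\sigma^2}\right)
```

The first derivative vanishes at λ = λ0, which is why the first-order spectrum passes through zero at λmax; the second derivative has a sharp negative trough at λ0 with sign changes at λ0 ± σ, producing the characteristic pattern of peaks and troughs described above.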
Signal-to-noise ratio
The derivative technique becomes difficult at higher orders, which worsen the signal-to-noise ratio [1]; S/N decreases as the order increases, because noise is responsible for the sharpest features in the spectrum. The negative effect of derivatization on S/N places increased demands on the low-noise characteristics of the spectrophotometer [5]. S/N can be improved prior to derivatization if the spectrophotometer scans and averages multiple spectra [6]. The best signal-to-noise ratio is obtained by taking the difference between the highest maximum and the lowest minimum, but this leads to enhanced sensitivity to interference from other components [2]. The noise of a signal is expressed by the standard deviation σ: σ0 expresses the noise of the normal (zero-order) spectrum of the absorbance of a blank, while σn expresses the noise of the nth-order derivative, which can be calculated from σ0 [1,2].

Smoothing of spectra
Because differentiation degrades the signal-to-noise ratio, a technique is used to reduce the high-frequency noise, namely low-pass filtering or smoothing. Smoothing is an operation performed separately on each row of the data, acting on adjacent variables [14]. When variables are close to each other in the data matrix and contain similar information, the noise can be lowered significantly without loss of the signal of interest [12]. A derivative spectrum may be distorted by too high a degree of smoothing, so care must be taken [1,6]. The smoothing effect depends mainly on two variables: (a) the frequency of smoothing and (b) the smoothing ratio, i.e., the ratio of the width of the smoothed peak to the number M of data points [15].

Advantages and Disadvantages of Derivative UV-Spectrophotometry
Advantages: Derivative UV spectroscopy has increased sensitivity and selectivity. Its uses include single-component analysis and the simultaneous determination of several components in a mixture, determination of traces in a matrix, protein and amino acid analysis, environmental analysis, and identification of organic and inorganic compounds [5]. Specific benefits of derivative spectral analysis include:
• Absorbance bands can be identified even within a small wavelength range containing two or more overlapped peaks.
• A weak, small absorbance peak can be identified in the presence of a strong, sharp absorbance peak.
• A broad absorbance spectrum gives a clear indication of the wavelength at which the signal is at its maximum.
• Quantitative analysis can be performed even in the presence of background absorption, because there is a linear relationship between the derivative values and the concentration levels [13,14].

Disadvantages: Although sensitive, the method is highly susceptible to various parameters. It is limited to particular systems and has limited applications owing to its lower reproducibility. The method is a second choice, used when a direct instrumental method (one measuring the signal itself) is absent. It is less accurate when measuring zero-crossing spectra. Because the shape of a derivative spectrum follows that of the zero-order spectrum, a small variation in the basic spectrum can strongly modify the derivative spectrum. Poor reproducibility can alter results: different spectrophotometers may give similar zero-order spectra, yet the derivatives obtained from them may differ [15].
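One of the limitations above, the loss of S/N at higher derivative orders, is easy to demonstrate numerically. The sketch below is a toy illustration rather than anything from the review: it estimates the blank noise σ0 and its derivative counterparts σn, and shows how a wider smoothing window suppresses them:

```python
# Toy illustration of noise growth with derivative order (sigma_n vs sigma_0)
# and its suppression by smoothing; all settings are arbitrary.
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
wl = np.arange(220.0, 321.0, 0.5)
blank = rng.normal(0.0, 0.001, wl.size)   # blank spectrum: pure noise, sigma_0 = 0.001

delta = wl[1] - wl[0]
for order in (1, 2, 3, 4):
    narrow = savgol_filter(blank, 7, 5, deriv=order, delta=delta)
    wide = savgol_filter(blank, 25, 5, deriv=order, delta=delta)
    print(f"order {order}: sigma_n = {np.std(narrow):.2e} (7-pt window), "
          f"{np.std(wide):.2e} (25-pt window)")
```

Widening the window lowers σn, but, as cautioned above, excessive smoothing distorts the derivative spectrum, so the window must be matched to the band half-width.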
Applications
a. Single-component analysis: Derivative spectrophotometry is used to analyze single components (Table 1), including by the area-under-curve approach (Table 3), in pharmaceutical formulations.
b. Multicomponent analysis: In pharmaceutical analysis, derivative spectrophotometry allows the analysis of one component in the presence of others, i.e., the simultaneous determination of two or more compounds. Spectral derivatization can remove the interference caused by the spectra of disturbing compounds (Table 2) [3].
c. Bioanalytical application: Besides pharmaceutical analysis, derivative spectrophotometry may be applied to the determination of compounds in various biological samples such as plasma, serum, urine and brain tissue [2]. Amphotericin [52] and diazepam [26] have been determined in human plasma using appropriate derivative orders.
d. Forensic toxicology: Derivative spectroscopy has applications in toxicology, especially for illicit drugs such as amphetamine, ephedrine, meperidine and diazepam, and can also be used for mixtures [1].
e. Trace analysis: Derivative signal processing is widely used in practical analytical work for the measurement of small amounts of substances in the presence of large amounts of potentially interfering substances [4]. Because of such interference, analytical signals become weak, noisy and superimposed on large background signals. Conditions such as non-specific broadband interfering absorption, non-reproducible cuvette positioning, dirt or fingerprints on the cuvette walls, imperfect cuvette transmission matching, and solution turbidity degrade measurement precision through sample-to-sample baseline shifts [4]. Baseline shifts may arise from practical errors and are either weakly wavelength-dependent (small-particle turbidity) or wavelength-independent (light blockage caused by bubbles or large suspended particles). There is therefore a need to distinguish the relevant absorption from these sources of baseline shift [5]. Differentiation is expected to suppress the broad background and thereby reduce variations in background amplitude from sample to sample. This results in improved precision in many instances, especially when there is substantial uncontrolled variability in the background and when the analyte signal is small compared to the background [4].

CONCLUSION
Derivative spectrophotometry is now available in the software controlling modern spectrophotometers, which makes it easy for the analyst to obtain useful information from the spectra of the compounds of interest. The derivatives of UV spectra give valuable information for the determination of compounds in pharmaceutical formulations. This article provides an overview of the derivative spectrophotometry technique and its applications.

Table 1: Single-component determination of analytes in pharmaceutical samples.
Table 2: Simultaneous determination of two or more compounds in pharmaceutical samples.
Table 3: Determination of compounds in pharmaceutical samples along with AUC.
2019-04-10T13:11:55.964Z
2018-09-12T00:00:00.000
{ "year": 2018, "sha1": "8170b1563f796bc68e2e87652e4124e8df356078", "oa_license": "CCBYNC", "oa_url": "https://scielo.conicyt.cl/pdf/jcchems/v63n3/0717-9707-jcchems-63-03-4126.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "8170b1563f796bc68e2e87652e4124e8df356078", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry" ] }
233651917
pes2o/s2orc
v3-fos-license
The Influence of Outward Foreign Direct Investment on Enterprise Technological Innovation

As the scale of China's outward foreign direct investment (OFDI) continues to expand, more and more scholars have begun to discuss the influence of OFDI on enterprise technological innovation. In order to explore the impact of enterprises' OFDI activities on enterprise technological innovation in depth, this paper combines relevant enterprise data from 2015 to 2017 and uses a matching-based double-difference method to test the "technological innovation effect" produced by the OFDI of Chinese enterprises. The paper concludes that the OFDI activities of enterprises can promote the improvement of their level of technological innovation, and that this promotion has a lag effect. R&D-type OFDI activities play a vital role in promoting the technological innovation ability of enterprises, and compared with investment in low-income host countries, investment in high-income host countries has a greater effect on promoting that ability.

Introduction
In the context of increasing industrialization, the scale of OFDI by China's enterprises is expanding. According to data from the "2018 China's OFDI Statistical Bulletin" released by the Ministry of Commerce, the National Bureau of Statistics, and the State Administration of Foreign Exchange of China, China's OFDI flow was US$143.04 billion in 2018, making it the world's second largest source of OFDI outflows. China's OFDI stock reached 1.98 trillion US dollars, ranking third among all countries and regions in the world, behind only the United States and the Netherlands. In addition, China's economic influence in global OFDI continues to expand: in 2018, China's OFDI flow accounted for 14.1% of the global flow, and at the end of 2018 its OFDI stock accounted for 6.4% of the global stock, both record highs. With the increasing scale of OFDI, the subject of China's OFDI has become a hot topic in academic research. The main research question of this paper is as follows: What is the impact of OFDI on enterprise technological innovation? That is, is there a "technological innovation effect" in the process of enterprises carrying out OFDI? The "technological innovation effect" of OFDI mainly refers to the increase of intellectual capital investment and technology absorption during OFDI, which promotes the improvement of the enterprise's level of technological innovation. From the research literature, it can be seen that scholars have already carried out considerable research on OFDI, on technological innovation, and on the impact of OFDI on technological innovation. Some scholars have studied OFDI from different perspectives. Buckley [1] mainly studied the factors affecting the OFDI activities of China's enterprises, such as investment strategy, enterprise-related production factor endowments, and the status of the investment industry. Alon [2] found that despite the large-scale development of China's state-owned enterprises, capital injection from foreign enterprises is still needed, because attracting foreign capital can stimulate the sustainable development of related industries in China more efficiently. Verbeke [3] explored the key determinants of China's OFDI based on data from 2008 to 2017, and pointed out that the investment environment of the host country is the most critical factor.
When Nie [4] studied the specific role of OFDI in China, he focused on the economic effects linking OFDI and economic development, arguing that OFDI can indirectly stimulate the development and progress of China's domestic economy. When Zhang Qian [5] researched OFDI in countries along the "Belt and Road," he argued that, in choosing the location of foreign direct investment, Chinese enterprises are mainly affected by the relationship between the host country and the home country. Borghesi [6] conducted a study based on data from 22,000 Italian manufacturing enterprises that carried out OFDI in Europe, concluding that the EU's emissions trading system had little impact on the number of subsidiaries established by Italian firms but a greater impact on the production efficiency of those subsidiaries, a pattern especially common in trade-intensive industries. There are also scholars who have studied issues related to firm technological innovation. When Cantwell [7] studied the relationship between technological innovation and the development of multinational enterprises, he pointed out that the level of technological innovation is a decisive factor affecting the international economic activities and production efficiency of enterprises. Abernathy [8] proposed a definition of the connotation of technological innovation, holding that it mainly comprises process innovation and product innovation, and used the A-U model to describe the connection between product innovation and technological innovation at different stages. Xu Qingrui [9] pointed out that technological innovation is a process of applying new ideas to production and selling the related products on the market, and emphasized that product commercialization is the ultimate goal of enterprises' technological innovation. The International Organization for Economic Cooperation [10] defined technological innovation as a process or product change involving substantial improvement, pointing out that product innovation is a change in the basic attributes or basic uses of a product, while process innovation consists of important changes in the production and manufacturing process, such as changes in production technology and production equipment. Wilson [11] pointed out the importance of technological innovation to Japanese companies and held that the technological innovation of apparel companies was the core driving force of corporate development. Al-Jinini [12] held that knowledge capital is the most critical element influencing the technological innovation of small- and medium-sized enterprises and that it plays a strongly positive, stimulating role. Federico [13] pointed out that technological innovation greatly helps improve enterprise performance, so enterprises must pay full attention to its role in daily management. In addition, some scholars have begun to study the "technological innovation effect" of OFDI. Lang [14] argued that, through the "technological innovation effect," Vietnamese enterprises gain a degree of technological progress in the process of developing OFDI and may even improve the efficiency of their technological innovation.
Long Yong [15] argued that when high-tech enterprises develop OFDI, their technological progress exhibits a certain "time lag," which he called the "technological innovation effect." Wu Jianjun [16] studied the "technological innovation effect" of China's OFDI from the perspective of R&D input and output, and pointed out that the appearance of this effect is closely related to the R&D input and output of enterprises. Fan Dan [17] took Zhongguancun Science Park as an example to study the "technological innovation effect" of overseas investment by high-tech enterprises, concluding that this effect is very conducive to the progress and innovation of enterprise technology. It can be seen that scholars have mainly focused on the determinants of corporate OFDI, the impact of OFDI on the economic development of the home country, and the connotation and role of technological innovation, whereas research on the impact of China's OFDI on corporate technological innovation is relatively scarce. In addition, there are relatively few studies verifying the "technological innovation effect" from an empirical perspective. Therefore, based on Chinese enterprises, this paper studies the impact of OFDI on technological innovation by combining it with the "technological innovation effect." Moreover, previous studies have relied on relatively old data; this paper uses the latest Chinese investment and enterprise innovation data, which can more accurately reveal the mechanism through which OFDI affects enterprise innovation. Drawing on the research conclusions of Andreani [18], Yohei [19], Rudzinski [20], and Kroodsma [21], and combining them with the characteristics of the research topic, this paper first uses data matching methods to determine, as the comparison group, the enterprises most comparable to the OFDI enterprises. Secondly, the paper uses difference methods to verify the positive impact of corporate OFDI on corporate technological innovation, and further conducts a robustness test, namely calculating the average treatment effect with the OFDI enterprises as the experimental group. Finally, the paper draws its conclusions.

Model Design. Using the double-difference method, this paper designates the enterprises that have carried out OFDI as the experimental group and the enterprises that have not as the comparison group. First, two dummy variables, dv and ds, are defined. Here dv indicates whether the enterprise has carried out OFDI: dv = 0 means that it has not, and dv = 1 means that it has. ds is the time dummy variable: ds = 0 denotes the period before the enterprise's OFDI, and ds = 1 the period after. Let eti_is represent the technological innovation status of enterprise i in period s, and Δeti_i the change in the technological innovation of enterprise i between the periods before and after OFDI. If an enterprise carries out OFDI, the change between the two stages is recorded as Δeti¹_i; if the enterprise has never carried out OFDI, the change is denoted Δeti⁰_i.
Therefore, the impact θ of OFDI on the enterprise's technological innovation is as follows:

θ = E(Δeti¹_i | dv_i = 1) − E(Δeti⁰_i | dv_i = 1).  (1)

In formula (1), E(Δeti⁰_i | dv_i = 1) can no longer be observed accurately in practice, because once an enterprise has carried out OFDI, its counterfactual development without OFDI can no longer be observed. Therefore, this paper uses a matching method to find similar companies in China that have not carried out OFDI activities, so that the technological innovation changes of these matched non-OFDI enterprises can substitute for the unobserved no-OFDI changes of the OFDI enterprises, namely E(Δeti⁰_i | dv_i = 1) ≈ E(Δeti⁰_i | dv_i = 0). Formula (1) then becomes:

θ = E(Δeti¹_i | dv_i = 1) − E(Δeti⁰_i | dv_i = 0).  (2)

Then, based on the double-difference method, the OFDI enterprises are used as the experimental observation group and the enterprises that did not carry out OFDI as the comparison group, in order to compare the changes in the technological innovation level of the two groups before and after OFDI. If, after OFDI, the enterprises in the experimental observation group improve their technological innovation level significantly more than those in the comparison group, it can be concluded that the implementation of OFDI has significantly stimulated technological innovation. The verification model is as follows:

eti_is = α₀ + α₁·dv_i + α₂·ds_s + θ·(dv_i × ds_s) + ε_is.  (3)

In formula (3), dv and ds have the same meanings as above; i indexes the enterprise; s indexes time; eti_is represents the technological innovation level of the enterprise; and ε_is represents the model error, with E(ε_is) = 0. In formula (3), the technological innovation levels of the enterprises in the experimental observation group before and after OFDI are α₀ + α₁ and α₀ + α₁ + α₂ + θ, so the change for the experimental observation group can be expressed as E(Δeti¹_i | dv_i = 1) = α₂ + θ. The technological innovation levels of the enterprises in the comparison group before and after OFDI are α₀ and α₀ + α₂, respectively, so the change for the comparison group can be expressed as E(Δeti⁰_i | dv_i = 1) = α₂. Combined with formula (2), the following expression is obtained:

θ = (α₂ + θ) − α₂.  (4)

Based on formula (4), the coefficient θ of the interaction term (dv × ds) in formula (3) is the real impact of launching OFDI activities on the change in the enterprise's technological innovation status. If θ > 0, the technological innovation level of the experimental observation group after OFDI is significantly higher than that of the comparison group; that is, enterprises carrying out OFDI activities can significantly improve their technological innovation level, i.e., enterprises can promote the improvement of their innovation level through OFDI. In order to strengthen the robustness of the model, this paper further adds some control variables and effect factors to formula (3).
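A minimal sketch of how the verification model (3) can be estimated with ordinary least squares; the simulated data, effect size, and column names below are placeholders, not the authors' dataset:

```python
# Minimal difference-in-differences sketch for model (3):
# eti = a0 + a1*dv + a2*ds + theta*(dv*ds) + e
# All data here are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
dv = rng.integers(0, 2, n)                 # 1 = OFDI firm
ds = rng.integers(0, 2, n)                 # 1 = post-OFDI period
theta_true = 0.3                           # assumed treatment effect
eti = 1.0 + 0.2 * dv - 0.1 * ds + theta_true * dv * ds + rng.normal(0, 0.5, n)
df = pd.DataFrame({"eti": eti, "dv": dv, "ds": ds})

# The coefficient on dv:ds estimates theta, the "technological innovation effect".
fit = smf.ols("eti ~ dv + ds + dv:ds", data=df).fit()
print(fit.params["dv:ds"])
```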
Based on previous scholars' research in this field, the specific control variables in this paper are as follows: enterprise capital investment (PC); number of employees participating in R&D (TEN); enterprise establishment period (OT); whether the enterprise carries out export trade activities (WE); whether there is foreign capital injection in the enterprise's technical capital (WO); whether the enterprise has R&D investment (WR); and, for the economic location of the enterprise, the level of economic development (OEL), the degree of openness (OD), regulatory quality (RQ), and industrial agglomeration (IC). In addition, the effects in this study mainly include the year (time) effect (TE), the regional effect (RE), and the industrial effect (IE).

Calculation of the Enterprise Technological Innovation Level (ETI). Based on the research methods used by scholars such as Cantwell (1989) and Abernathy (1998) to study enterprise technological innovation, this paper estimates the enterprise technological innovation level from the following equation:

iia_is = β_l·l_is + f(k_is, m_is) + ε_is,  (5)

where iia (IIA) represents the improvement of the enterprise's level of technological innovation, l represents the number of employees in the enterprise's R&D field, k represents the overall stock of corporate technology capital, m represents intermediate input, and f(k_is, m_is) is the function of technical capital and intermediate input, approximated by a third-order polynomial in k and m. The ETI is then calculated as:

eti_is = iia_is − β̂_l·l_is − f̂(k_is, m_is),  (6)

where β̂_l and f̂ denote the estimates obtained from (5). All variables in formulas (5) and (6) are taken in logarithms. IIA, and k and m, are deflated by the product sales price index and the fixed technology asset price index, respectively, and converted into real values; the enterprise's technology-related labor input is expressed by the average annual number of employees in the R&D field.

Design of Other Variables. The capital intensity of an enterprise is expressed by the ratio of the enterprise's fixed technology capital stock to the number of employees in its R&D field. Whether the enterprise carries out export trade activities is indicated by 1 and 0: if it exports, the value is 1; if not, 0. Whether there is foreign capital injection in the enterprise's technological capital is indicated by 1 and 0: if there is, the value is 1; otherwise 0. Whether the enterprise has R&D investment is indicated by 1 and 0: if there is, the value is 1; if not, 0.
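The sketch below illustrates the residual-based ETI calculation implied by formulas (5) and (6) above, under the stated assumption that f(k, m) is a third-order polynomial; the data, variable names, and coefficients are simulated placeholders:

```python
# Sketch of the residual-based ETI calculation implied by (5)-(6):
# regress log innovation output on R&D labor and a third-order
# polynomial in technology capital k and intermediate input m,
# then take the residual as the innovation level eti.
# All data are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 1000
df = pd.DataFrame({
    "l": rng.normal(3, 1, n),    # log R&D employees
    "k": rng.normal(8, 1, n),    # log technology capital stock (deflated)
    "m": rng.normal(7, 1, n),    # log intermediate input (deflated)
})
df["iia"] = 0.4 * df["l"] + 0.3 * df["k"] + 0.2 * df["m"] + rng.normal(0, 0.3, n)

# Third-order polynomial in k and m approximates f(k, m).
formula = ("iia ~ l + k + m + I(k**2) + I(m**2) + I(k*m)"
           " + I(k**3) + I(m**3) + I(k**2*m) + I(k*m**2)")
fit = smf.ols(formula, data=df).fit()
df["eti"] = fit.resid            # eti_is = iia_is - fitted component
```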
Data Collection. In this paper, the data for the experimental observation group mainly come from the statistical database of China's OFDI enterprises and the Chinese industrial enterprise database compiled by the National Bureau of Statistics of China. First, the sample enterprises of the experimental observation group are determined. The research sample was set to enterprises that carried out OFDI from 2015 to 2017; during this period, the number and scale of China's OFDI enterprises were unprecedented and at a historical high, so it is particularly meaningful to study the mechanism through which corporate OFDI in this period affected corporate technological innovation. Combining the data on enterprises that launched OFDI from 2015 to 2017 released by the National Bureau of Statistics, this paper excludes enterprises that had invested abroad continuously and selects those that launched OFDI for the first time; it further excludes enterprises whose investment did not last more than 2 years, and finally determines the research sample. The number of enterprises in the experimental observation group is 935. Secondly, the sample enterprises without OFDI activities are identified. Based on the Chinese industrial enterprise database, this paper selects enterprise data from 2016 to 2018. The full name of the Chinese industrial enterprise database is "data of all state-owned and large-scale non-state-owned industrial enterprises (enterprises whose annual main business income or sales exceed 5 million yuan, and 20 million yuan since 2011)." The data come from the industrial survey statistics carried out by the National Bureau of Statistics of China according to the "industrial statistics reporting system." The statistical contents include the production and marketing status, financial status, costs and expenses of industrial enterprises, sales of main industrial products, inventory and production capacity, as well as the prosperity of enterprises' production and operation. The enterprise data used in this study all come from the Chinese industrial enterprise database published by the National Bureau of Statistics, which has been updated through 2018; these data provide great support for the latest research in this paper. In order to ensure the systematic accuracy of the research samples, this paper eliminates enterprises with missing information indicators (such as enterprise age, total assets, and enterprise identification code), small-scale enterprises with fewer than 5 R&D personnel, and enterprises whose industry is unclear. Then, using the Mahalanobis distance matching method, enterprises similar to those in the experimental observation group are selected as the comparison group sample. In the end, we selected corporate panel data from 2016 to 2018, with a sample size of 42,395.

Comparison Group Matching and Matching Results. In this paper, the Mahalanobis distance matching method is used to match the enterprises of the experimental observation group with those of the comparison group [22]. The matching process is as follows. Let p ∈ {j_ps = 1} denote an enterprise of the experimental observation group and q ∈ {j_ps = 0} an enterprise of the comparison group, where j is a 0-1 variable representing whether the studied enterprise belongs to a given group. J_pq denotes the Mahalanobis distance between comparison-group enterprise q and experimental-group enterprise p, calculated as:

J_pq = (V_p − V_q)′ C⁻¹ (V_p − V_q),  (7)

where V_p is the vector of matching variables of the experimental observation group, V_q is the vector of matching variables of the comparison group, and C is the covariance matrix of the matching variables. When J_pq attains its minimum value, comparison-group firm q is the firm closest to experimental-group firm p; this enterprise q can then be selected as the research object determined by the matching experiment. Generally speaking, if J_pq satisfies the condition

J_pq* = min{J_pq : q ∈ {j_ps = 0}},  (8)

then the comparison-group enterprise q* is the optimal match under the Mahalanobis distance matching method, and it is the research object to be determined in this paper.
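A compact sketch of nearest-neighbor matching according to formulas (7) and (8); the matching variables, sample sizes, and data below are invented for illustration:

```python
# Sketch of Mahalanobis-distance matching per formulas (7)-(8):
# for each treated firm p, pick the control q minimizing
# (V_p - V_q)' C^{-1} (V_p - V_q). Data are simulated placeholders.
import numpy as np

rng = np.random.default_rng(3)
treated = rng.normal(0.2, 1.0, size=(50, 4))    # V_p: matching variables
controls = rng.normal(0.0, 1.0, size=(500, 4))  # V_q

# C: covariance matrix of the matching variables (pooled sample)
C = np.cov(np.vstack([treated, controls]).T)
C_inv = np.linalg.inv(C)

def mahalanobis_sq(vp, vq):
    d = vp - vq
    return d @ C_inv @ d

matches = []
for vp in treated:
    dists = np.array([mahalanobis_sq(vp, vq) for vq in controls])
    matches.append(int(np.argmin(dists)))       # formula (8): minimum distance
print(matches[:10])
```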
Based on the conclusions of the relevant research literature, combined with the characteristics of the research objects in this paper, the goal of the matching experiment is to find, for each OFDI enterprise, the enterprise most similar to it before its OFDI. Therefore, this paper matches the sample enterprises on the enterprise characteristic variables of the period preceding OFDI. The matching years of this study are 2015 to 2017, and for each enterprise that carried out OFDI in this period, the most similar enterprise that did not is identified. The specific matching results are shown in Table 1. From the data in Table 1, before matching, the difference between the sample means of the experimental observation group and the comparison group exceeds 1.9 units, which is large and highly significant. This means that the sample enterprises of the comparison group before matching are not very similar to those of the experimental observation group and are not suitable as the control group of this study. After matching, however, the sample means of the experimental observation group and the control group are very close, with a gap of less than 0.01 units. This means that the control-group sample enterprises are similar to the experimental-group enterprises and can serve as the sample for this study. Judging from the t values, the null hypothesis that the sample means of the experimental observation group and the control group are equal can be accepted. It can therefore be determined that the numbers of comparison-group enterprises are 530 in 2015, 613 in 2016, and 647 in 2017. Based on the experimental observation group and comparison group enterprises of 2015 to 2017, this study then also determines the corresponding enterprises for 2016 to 2018.

Initial Inspection. Combined with the matched sample data, the double-difference method was used for the initial test; the results are shown in Table 2. In the tests of columns (1) and (2) of Table 2, no control variables are added; in the tests of columns (3)-(5), different control variables are added. First, we examine the coefficients of dv and ds. dv mainly measures the difference in technological innovation between the experimental observation group and the comparison group, that is, the effect on enterprise technological innovation of relevant factors that do not change over time. Observing the dv coefficients in Table 2, the value of 0.362 is significantly greater than the coefficient values of 0.307, 0.301, 0.171, and −0.092 obtained after the enterprise-characteristic control variables are included. This means that when the enterprise-characteristic control variables are not included, the technological innovation level of enterprises that carry out OFDI activities is significantly higher than that of enterprises that do not. Moreover, when controlling for enterprise characteristic variables, the dv coefficient becomes insignificant.
This means that enterprise characteristics can explain the difference in technological innovation between the experimental observation group and the comparison group. However, after further controlling for the effects of year, region, and industry, the coefficient of dv becomes significant at the 1% level. This means that, from the perspective of year, region, and industry, the innovation level of the experimental observation group sample enterprises is significantly higher than that of the comparison group enterprises, and it shows that dv is not robust here. In addition, ds is the time dummy variable for the periods before and after the enterprise launches OFDI activities. Observing the ds coefficients in Table 2, after controlling for the three effect variables the ds coefficients are all negative, which shows that, without considering whether enterprises have carried out OFDI activities, the technological innovation capabilities of the two samples of enterprises do not improve over time. Second, we examine the coefficient of the interaction term (dv × ds). Observing the interaction-term coefficients in Table 2, regardless of whether control variables are added, the coefficients are all positive and significant at the 1% level, and the results are robust. This shows that the technological innovation level of enterprises carrying out OFDI is significantly higher than that of enterprises not carrying out OFDI, and it indicates that the development of OFDI helps enterprises improve their technological innovation level. Finally, we consider the other control variables. From the coefficient results in Table 2, the coefficients of enterprise capital, the number of technical employees, the presence of foreign equity, the presence of export products, the presence of R&D investment, the economic development level (OEL) of the enterprise's investment area, the degree of openness (OD), regulatory quality (RQ), and industrial agglomeration (IC) are all positive; only the coefficient of the enterprise's establishment-period variable is negative. This means that increases in enterprise capital and in the number of technical personnel, foreign shareholding, export products, and increased R&D investment, together with a higher level of economic development, greater openness, higher regulatory quality, and better industrial agglomeration in the enterprise's investment area, all promote the improvement of the enterprise's technological innovation level to varying degrees. This is mainly because the process of enterprise technological innovation inevitably requires capital investment, R&D personnel, and R&D funding. At the same time, the entry of foreign shareholders may bring in certain advanced external technologies, thereby stimulating the enterprise's technological innovation. Moreover, enterprises able to carry out export trade must already possess a certain advanced technological foundation, which is conducive to higher-level technological innovation. Likewise, a high level of economic development, a high degree of openness, high regulatory quality, and good industrial agglomeration in the invested area can also stimulate the technological progress and innovation of investing enterprises to a certain extent.
In addition, only the increase in the enterprise's years since establishment inhibits the improvement of its technological innovation level to a certain extent. This may be because an enterprise established early may have a backward management system and poor management flexibility, resulting in an unreasonable allocation of enterprise resources, which is not conducive to carrying out technological innovation activities. In summary, based on the results of the initial inspection, it can be judged that, after controlling for the relevant enterprise characteristic variables and effects, the enterprises that carry out OFDI significantly improve their level of technological innovation. This also means that a "technological innovation effect" exists in the OFDI activities of China's enterprises.

Test Based on the Lag Effect. The earlier analysis found that China's enterprises exhibit a "technological innovation effect" in the process of OFDI; the OFDI process may therefore also have a certain lag effect on technological innovation, which this paper tests next. The test results in Table 3 show that the coefficients of the interaction term (dv × ds) at lags of 1 year and 2 years are both positive and significant at different levels, indicating an obvious lag in the "technological innovation effect" of OFDI. Moreover, the coefficients at a lag of 2 years are greater than those at a lag of 1 year, while at a lag of 3 years the coefficient becomes markedly smaller. This means that, in the process of developing OFDI, the first two years of investment significantly stimulate the gradual improvement of the enterprise's technological innovation level, while by the third year the room for improvement becomes smaller. Generally speaking, the reason should be that enterprises need time to absorb and digest new technologies in the process of developing OFDI; only after going through this process can a significant improvement in the level of enterprise technological innovation occur. Once the enterprise's technological innovation reaches a certain level, the stimulus of that round of OFDI gradually decreases, that is, the marginal effect diminishes. In addition, the coefficients of dv in Table 3 are always positive and significant at different levels, showing that, without considering the time factor, the technological innovation level of enterprises carrying out OFDI is higher; that is, carrying out OFDI activities stimulates the improvement of enterprises' technological innovation level.

Test Based on Investment Motivation. Based on the OFDI purposes of China's enterprises, the National Bureau of Statistics classifies OFDI mainly into business service investment, production and sales investment, research and development investment, and resource mining investment.
From the previous research literature of relevant scholars, there are certain differences in the impact of different types of OFDI on enterprise technological innovation and in the "technological innovation effects" they produce. First, in the investment process, enterprises carrying out business service investment can more conveniently access foreign competitors and products embodying higher production technology and technological achievements, and may also gain a more direct understanding of consumers' demand preferences and international quality standards in the international market; this helps enterprises clarify their next technological innovation goals, avoid detours, and improve the efficiency of technological innovation. Secondly, enterprises that carry out production and sales investment can easily learn about the world's advanced production technology and can fully and efficiently use the host country's high-quality production technology and talent, forming a "technological innovation effect" that promotes the improvement of the investing enterprise's technological innovation capability. Third, enterprises that carry out research and development investment can absorb foreign advanced technology through overseas mergers and acquisitions, and can also improve their own technological innovation level by building or joining foreign R&D alliances. Finally, enterprises that carry out resource development investment mainly aim to obtain certain resources in the host country, especially natural or labor resources. Most countries rich in natural or labor resources are developing countries that are relatively weak technologically; as a result, the possibility that resource development investment produces a "technological innovation effect" is substantially reduced, and the likelihood of stimulating enterprise technological innovation is lower. In short, from past research experience and theoretical derivation, business service investment, production and sales investment, and research and development investment are more likely to promote enterprise technological innovation, while the possibility that resource development investment raises the innovation level is lower. To test the validity of these theoretical derivations, this study conducts corresponding tests by investment motivation; the results are shown in Table 4. Observing the coefficients of the interaction term (dv × ds) in Table 4, among the four motivation types the interaction coefficients are all positive, but only those for business service, production and sales, and R&D investment are significant, while the coefficient for resource development investment is not. This shows that business service, production and sales, and research and development investment promote the improvement of the enterprise's technological innovation level. At the same time, comparing the sizes of the coefficients, 0.343 < 0.557 < 0.833.
This shows that there are differences in the degree to which the different types promote the enterprise's technological innovation level: R&D-type OFDI has the greatest effect, followed by the production and sales type, while the business service type has the relatively smallest effect. The reason may be that enterprises carrying out R&D investment pursue the goal of technological improvement most directly: the purpose of their investment is to raise their own technological level, so the stimulus to technological innovation is more direct and effective. Enterprises with production and sales investment are also well placed to obtain, or be inspired by, new technologies in overseas production processes, which is conducive to improving their technological innovation level. Although business service investment can also stimulate technological innovation, it sits somewhat farther from the world's advanced technology, so this type of investment has a relatively small effect on the enterprise's technological innovation level. In addition, the coefficient of resource development OFDI is not significant, meaning that its promotion of enterprise technological innovation is not obvious. This is mainly because, when enterprises invest in resource development, the purpose is to obtain resources, not technology; moreover, from a global perspective, resource-rich countries are mainly developing countries with low industrial technology levels, so it cannot be affirmed that this type of investment promotes the improvement of enterprise technological innovation. In order to test whether the "technological innovation effect" of the above four investment motivations has a lag, this research carried out further tests. First, the lag-effect results for business service and production and sales investment are shown in Table 5. Observing the coefficients of the interaction term (dv × ds), the coefficients in columns (1)-(3) are all positive and increase at lags of 1 and 2 years; at a lag of 3 years the coefficient drops drastically and is not significant. This means that the stimulus of business service investment to enterprise technological innovation increases over the first two years and becomes insignificant by the third year. The interaction coefficients in columns (4)-(6) are all positive and significant, first rising and then falling, showing that the stimulus of production and sales investment to enterprise technological innovation is first strong and then weak.
The main reason is that, although both forms of investment allow the enterprise to learn about the world's new level of technological development, production and sales investment is more efficient in the absorption and application of new technologies, so its stimulus to enterprise technological innovation is more intense and significant. Secondly, the lag-effect test results for R&D and resource development investment are shown in Table 6. Observing the coefficients of the interaction term (dv × ds) in Table 6, the coefficients for R&D OFDI in columns (1)-(3) are all positive and significant and show a continuously increasing trend, indicating that the promotion of enterprise technological innovation by R&D OFDI is continuous. This is mainly because the core goal of R&D investment is to improve the enterprise's technological level, so R&D investment continues to stimulate the improvement of the enterprise's technological innovation level over time. Judging from the interaction coefficients in columns (4)-(6), none are significant, showing that the impact of resource development investment on enterprise technological innovation is not obvious.

Test Based on the Income Level of the Investment Host Country. Rudzinski [20], Kroodsma [21], and Fan [17] all hold that the "technological innovation effect" produced by the OFDI activities of enterprises investing in countries with different income levels has a correspondingly different impact on the improvement of the technological level. According to data released by the World Bank in 2017, countries with a per capita national income of more than US$12,235 are high-income countries, and countries with lower income levels are low-income countries. Therefore, in order to verify the "technological innovation effect" of OFDI in host countries with different income levels, this research carried out further verification [23]. First, the test results based on the income level of the investment host country are shown in Table 7. Observing the coefficient of the interaction term (dv × ds), the interaction coefficients in columns (1) and (2) of Table 7 are both significant and positive, showing that investing in either high-income or low-income countries can stimulate the improvement of the enterprise's technological innovation level. At the same time, the coefficient 0.493 is significantly greater than 0.206, showing that investment in high-income countries is significantly better than investment in low-income countries at promoting enterprise technological innovation. The main reason is that high-income countries generally have relatively strong technological foundations, so when enterprises invest in high-income countries they can more directly and efficiently absorb their high-quality technologies. Therefore, the "technological innovation effect" of OFDI carried out in high-income countries is greater, and this kind of OFDI can more efficiently promote the improvement of the enterprise's technological innovation level.
It can be seen that investing in both high-income countries and low-income countries can promote the improvement of enterprise innovation, but the promotion effect of high-income countries is more significant. Secondly, the test results based on the lag effect of the investment host country are shown in Table 8. By observing the coefficient of the interaction term (dv × ds) in Table 8, it can be found that the interaction term coefficients of investing in high-income countries are all significant and positive, and first increase and then decrease. This means that investing in high-income countries can significantly promote the improvement of enterprise technological innovation, because such investment makes it easier to absorb the advanced industrial technology of high-income countries; however, after a period of time, when the technological level of the investing enterprise reaches a higher level, the speed of the improvement of the enterprise's technological innovation level also slows down. In addition, from the perspective of the interaction coefficients of investment in low-income countries, the first 2 years are significantly positive and the 3rd year is not significant. This shows that although the "technological innovation effect" of investment in low-income countries has a lagging effect, its stimulus effect on technological innovation is relatively weak and its sustainability is insufficient. It can be seen that there is a certain lag in the "technological innovation effect" of investment in host countries with different income levels, and compared with investing in low-income countries, enterprises investing in high-income countries are more sustainable in stimulating technological innovation. ATT Test Based on Bias Score. In order to further verify the robustness of the previous research results, this paper tests the results further. By reviewing the relevant research literature of Jiang [22] and other scholars, this study finally chooses the average effect of the treatment on the treated (ATT) test method based on the bias score, with the specific calculation formula

$$\mathrm{ATT} = \frac{1}{M}\sum_{i}\Big(eti^{1}_{i} - \sum_{a \in C(i)} c_{ia}\, eti^{0}_{a}\Big), \tag{9}$$

where $eti^{1}_{i}$ and $eti^{0}_{a}$ represent the technological innovation level of the experimental observation group and the comparison group, respectively, $C(i)$ represents the set of enterprises matched with treated enterprise $i$, $c_{ia}$ represents the weight of matched firm $a$ for enterprise $i$, and $M$ represents the total number of firm pairs. Then, different estimates are made according to the time when the enterprise started OFDI, and the results are shown in Tables 9-11. By observing the data in Table 9, it can be seen that the estimated value in 2015 is 0.2639, a positive and significant number, which means that, on the whole, enterprises' OFDI can significantly promote the improvement of their technological innovation level. In terms of investment motivation, business service investment, production and marketing investment, and R&D investment can all promote the improvement of enterprise technological innovation to varying degrees. Resource development investment has only partially promoted the improvement of the level of technological innovation of enterprises.
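As a minimal numerical sketch of formula (9), the matching-based ATT can be computed as below; the firm labels, innovation values, matches and weights are illustrative toy inputs, not data from the paper.

```python
# Toy computation of formula (9): matching-based average treatment
# effect on the treated (ATT), averaged over M matched firm pairs.
import numpy as np

def att(eti1, eti0, matches, weights):
    """eti1[i]: innovation level of treated (OFDI) firm i;
    eti0[a]: innovation level of control firm a;
    matches[i]: matched control set C(i); weights[(i, a)]: weight c_ia."""
    diffs = [eti1[i] - sum(weights[(i, a)] * eti0[a] for a in C_i)
             for i, C_i in matches.items()]
    return np.mean(diffs)  # average over the M matched firm pairs

# two treated firms, nearest-neighbour matching with unit weights
print(att({"i1": 0.9, "i2": 0.7},
          {"a1": 0.5, "a2": 0.6},
          {"i1": ["a1"], "i2": ["a2"]},
          {("i1", "a1"): 1.0, ("i2", "a2"): 1.0}))  # -> 0.25
```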
At the same time, investment in high-income countries has a more significant effect on the promotion of enterprise technological innovation than investment in low-income countries [24]. In terms of the lag effect of technological innovation, the effect first increases and then decreases. All in all, through research and testing, it can be determined that OFDI can promote the improvement of the enterprise technological innovation level. Furthermore, the estimated results in Tables 10 and 11 indicate that the results are basically consistent with the conclusions drawn from Table 9. Moreover, these test results are consistent with the previous argumentation conclusions, which also means that the previous research conclusions have strong robustness [25]. In order to ensure the robustness of the research conclusions more comprehensively, this paper refers to the relevant research conclusions of scholars such as Boroomand [26] and Tilton [27], and further conducts robustness tests based on relevant measurement indicators of enterprise innovation capabilities. Hsieh [28] believes that R&D investment indicators can measure the size of enterprise innovation capabilities to a certain extent. Lana [29] argues that the number of patents a company applies for each year can also reflect the innovation level of the company to a certain extent. Tan [30] considers that the high-skilled labor ratio indicator can also show the development status of enterprise innovation capabilities to a certain extent. Davey [31] holds that an innovation value realization index also reflects an enterprise's innovation capability. Based on the research of the above scholars, this paper further uses these different indicators to measure the innovation ability of enterprises and carries out the corresponding robustness test. The test results are shown in Table 12. By observing the data results in chronological order in column (1) of the ATT test in Table 12, we can find that the changes in the values of these indicators reflecting the company's innovation capabilities are consistent with the previous research conclusions. This also means that the previous research conclusions have strong robustness. Conclusions. Based on relevant data from 2015 to 2017, this paper examines the impact of OFDI on China's enterprise technological innovation and conducts relevant research, especially in conjunction with the "technological innovation effect" in the enterprise investment process. The research of this paper has reached the following 3 main conclusions: (1) The increase in enterprise capital, the increase in the number of technical personnel, the participation of foreign shares, the increase in export products, and the increase in R&D investment will all promote the improvement of the level of enterprise technological innovation, and the influence will have a certain lag effect and present a trend of increasing first and then decreasing.
However, as the number of years since an enterprise's establishment increases, this will restrain the improvement of the technological innovation level of the enterprise to a certain extent. (2) OFDI of the business service type, production and sales type, and R&D type will promote the improvement of the enterprise technological innovation level to varying degrees. In particular, research and development OFDI will promote the improvement of the technological innovation level of enterprises to a greater extent. Resource development-oriented OFDI does not necessarily promote the improvement of the enterprise technological innovation level. (3) When carrying out OFDI in host countries with different income levels, investment in high-income host countries plays a greater role in promoting the technological innovation of enterprises than investment in low-income host countries. Moreover, the above conclusions have passed the ATT test, so the conclusions are robust. Based on the above research conclusions, China's enterprises participating in OFDI can take measures such as strengthening corporate capital investment, increasing technical personnel, attracting foreign investment, actively exporting products, and increasing R&D investment to promote the improvement of corporate technology. Moreover, enterprises can increase their investment in high-income countries by increasing OFDI in the business service, production and sales, and R&D directions, and further promote the improvement of their technological innovation level. At the same time, relevant government management departments can increase their guidance to enterprises and provide important policy and institutional guarantees for improving their technological innovation level. Data Availability. The data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest. The authors declare no conflicts of interest.
2021-05-05T00:09:56.666Z
2021-03-09T00:00:00.000
{ "year": 2021, "sha1": "c49a786fe7798e0bf42349a6d2c1c8b921dfb65d", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/mpe/2021/6697298.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "3fbe8cd517e15436eb127616e86f22f707668b89", "s2fieldsofstudy": [ "Economics", "Business", "Engineering" ], "extfieldsofstudy": [ "Business" ] }
247839453
pes2o/s2orc
v3-fos-license
Convex Parameterization of Stabilizing Controllers and its LMI-based Computation via Filtering Various new implicit parameterizations for stabilizing controllers that allow one to impose structural constraints on the controller have been proposed lately. They are convex but infinite-dimensional, formulated in the frequency domain with no available efficient methods for computation. In this paper, we introduce a kernel version of the Youla parameterization to characterize the set of stabilizing controllers. It features a single affine constraint, which allows us to recast the controller parameterization as a novel robust filtering problem. This makes it possible to derive the first efficient Linear Matrix Inequality (LMI) implicit parameterization of stabilizing controllers. Our LMI characterization not only admits efficient numerical computation, but also guarantees a full-order stabilizing dynamical controller that is efficient for practical deployment. Numerical experiments demonstrate that our LMI can be orders of magnitude faster to solve than the existing closed-loop parameterizations. I. INTRODUCTION One basic yet fundamental problem in control theory is that of designing a feedback controller to stabilize a dynamical system [1, Chapter 12]. Any controller synthesis method needs to implicitly or explicitly include stability as a constraint, since feedback systems must be stable for practical deployment. When the system state is directly measured, it is sufficient to consider a static state feedback u = Kx with a constant matrix K. In this case, the set of stabilizing gains can be characterized by a Lyapunov inequality. If we only have output measurements, a static output feedback is insufficient to get good closed-loop performance. Instead, we need to consider the class of dynamical controllers [1]-[3]. It is well-known that the set of stabilizing dynamical controllers is characterized by the classical Youla parameterization [4] in the frequency domain, which requires a doubly coprime factorization of the system. Many closed-loop performances can be further addressed via convex optimization in the Youla framework; see [2] for extensive discussions. In the past few years, a classical notion of closed-loop convexity (coined in [2, Chapter 6]) has regained increasing attention thanks to its flexibility in addressing distributed control and data-driven control problems [5]-[15]. One common underlying idea is to parameterize stabilizing dynamical controllers using certain closed-loop responses in a convex way, which shifts from designing a controller to designing desirable closed-loop responses. One main benefit is that designing closed-loop responses becomes a convex problem in many distributed and data-driven control setups [13]-[15]. In particular, a system-level parameterization (SLP) was introduced in [13], and an input-output parameterization (IOP) was proposed in [6]; both of them characterize the set of all stabilizing dynamical controllers with no need to compute a doubly-coprime factorization explicitly. As expected, Youla, SLP, and IOP are equivalent to each other in theory, which was first proved in [10] and later discussed in [11]. Very recently, the work [12] has further characterized all convex parameterizations of stabilizing controllers using closed-loop responses, revealing two new parameterizations beyond SLP and IOP.
Thanks to convexity, these closed-loop parameterizations have become powerful tools in addressing various distributed control problems [5], [14], and in quantifying the performance of data-driven control [7]-[9]. While convexity is one desirable feature in closed-loop parameterizations, the resulting convex problems are unfortunately always infinite-dimensional since the decision variables are transfer functions in the frequency domain. The classical work [2] and all the recent studies [5]-[14] apply Ritz or finite impulse response (FIR) approximations for numerical computation. However, the Ritz or FIR approximations do not scale well in either computational efficiency or controller implementation, since they lead to large-scale optimization problems and result in dynamical controllers of impractically high order. Moreover, a subtle notion of numerical robustness [12, Section 6] arises in the SLP [13] and IOP [6] due to the FIR approximation, which may affect internal stability in practical computation. In this paper, we present the first computationally efficient linear matrix inequality (LMI) characterization for a closed-loop parameterization of stabilizing dynamical controllers. To achieve this, we first introduce a "kernel" version of the Youla parameterization. Unlike SLP [13], IOP [6] and the mixed parameterizations [12], our new parameterization only requires one single affine constraint. This feature leads to a new robust H∞ filtering problem, which allows us to derive an LMI for efficient computation. Note that our filtering problem is different from the classical setup (cf. [16], [17]), and thus our LMI characterization might have independent interest. Numerical experiments show that our LMI can be orders of magnitude faster to solve than FIR approximations. The rest of this paper is organized as follows. We present the problem statement in Section II. Our new parameterization is presented in Section III, and its LMI characterization is introduced in Section IV. Numerical results are shown in Section V. We conclude the paper in Section VI. II. PROBLEM STATEMENT A. System model and internal stability We consider a strictly proper linear time-invariant (LTI) plant in the discrete-time domain

$$x[t+1] = A x[t] + B u[t] + \delta_x[t], \qquad y[t] = C x[t] + \delta_y[t], \tag{1}$$

where $x[t] \in \mathbb{R}^n$, $u[t] \in \mathbb{R}^m$, and $y[t] \in \mathbb{R}^p$ are the state, control action, and measurement vector at time t, respectively, and $\delta_x[t] \in \mathbb{R}^n$ and $\delta_y[t] \in \mathbb{R}^p$ are disturbances on the state and measurement vectors at time t, respectively. The transfer matrix from u to y is $G(z) = C(zI - A)^{-1}B$. Consider an output-feedback LTI dynamical controller u = K y, where the control action is further affected by an external disturbance $\delta_u$ (see Figure 1). The controller (2) has a state-space realization as

$$\xi[t+1] = A_K \xi[t] + B_K y[t], \qquad u[t] = C_K \xi[t] + D_K y[t], \tag{3}$$

where $\xi[t] \in \mathbb{R}^q$ is the controller internal state at time t, and $A_K \in \mathbb{R}^{q \times q}$, $B_K \in \mathbb{R}^{q \times p}$, $C_K \in \mathbb{R}^{m \times q}$, $D_K \in \mathbb{R}^{m \times p}$ specify the controller dynamics. We call q the order of the controller K. Applying the controller (2) to the plant (1) leads to the closed-loop system shown in Figure 1. We make the following standard assumption. Assumption 1: The plant is stabilizable and detectable, i.e., (A, B) is stabilizable, and (C, A) is detectable. The closed-loop system must be stable in some appropriate sense, and any controller synthesis procedure implicitly or explicitly involves a stability constraint [1]-[4], [6], [10], [13], [18].
A standard notion is internal stability, defined as follows [1, Chapter 5.3]. Definition 1: The system in Figure 1 is internally stable if it is well-posed, and the states (x[t], ξ[t]) converge to zero as t → ∞ for all initial states (x[0], ξ[0]). The interconnection in Figure 1 is always well-posed since the plant is strictly proper [1, Lemma 5.1]. We say the controller K internally stabilizes the plant G if the closed-loop system in Figure 1 is internally stable. The set of all internally stabilizing LTI dynamical controllers is defined as

$$\mathcal{C}_{\rm stab} := \{ \mathbf{K} \mid \mathbf{K} \text{ internally stabilizes } \mathbf{G} \}. \tag{4}$$

We have a standard state-space characterization for C_stab. Lemma 1 ([1, Lemma 5.2]): K internally stabilizes G if and only if the following closed-loop matrix A_cl is stable:

$$A_{\rm cl} = \begin{bmatrix} A + B D_K C & B C_K \\ B_K C & A_K \end{bmatrix}. \tag{5}$$

The condition in Lemma 1 is non-convex in A_K, B_K, C_K, D_K. It is known that if q = n, we can derive a convex linear matrix inequality (LMI) to characterize A_K, B_K, C_K, D_K by a change of variables based on Lyapunov theory [19]-[21]. (Unless specified otherwise, all the results in this paper can be generalized to continuous-time systems.) Fig. 1: Interconnection of the plant G and the controller K. B. Doubly-coprime factorization and Youla parameterization In addition to the state-space condition (5), there are frequency-domain characterizations for C_stab, which only impose convex constraints on certain transfer functions. A classical approach is the celebrated Youla parameterization [4], and two recent approaches are SLP [13] and IOP [6]. As expected, the Youla parameterization, SLP, and IOP are equivalent [10]; see more discussions in [11], [12]. Definition 2: A collection of stable transfer matrices U_l, V_l, M_l, N_l, U_r, V_r, M_r, N_r ∈ RH∞ is called a doubly-coprime factorization of G if G = N_r M_r^{-1} = M_l^{-1} N_l and the associated generalized Bezout identity (6) holds. Such a doubly-coprime factorization can always be computed efficiently under Assumption 1 (see Appendix A) [22]. The Youla parameterization presents the equivalence (7), which expresses every internally stabilizing controller in terms of the coprime factors and a parameter Q ∈ RH∞, called the Youla parameter. The RH∞ constraint on the Youla parameter Q is convex, but the order of the controller K cannot be specified a priori in the present form (7). The SLP [13] and the IOP [6] require no doubly-coprime factorization, but impose a set of convex affine constraints on certain closed-loop responses. Thanks to the convexity of the Youla parameterization, SLP, and IOP, they have found applications in distributed and robust control [1], [14], [15], and recently in sample complexity analysis of learning problems [7]-[9]. However, the constraints on Youla, SLP, and IOP are infinite-dimensional in the frequency domain, and they do not immediately admit efficient computation. The Ritz approximation was discussed in [2, Chapter 15], and the FIR approximation was used extensively in [6]-[8], [13]. However, the Ritz or FIR approximation not only leads to large-scale optimization problems, but also results in controllers of high order (often much larger than the state dimension n); see [12, Section 5] for more discussions. C. Problem statement The computational issue for frequency-domain characterizations of C_stab has been addressed unsatisfactorily in the classical literature [2, Chapter 15] and the recent studies [6]-[8], [13]. This motivates the main question in this paper: Can we develop an efficient linear matrix inequality (LMI) for a frequency-domain characterization of C_stab? We provide a positive answer to this question. In particular, we first introduce a "kernel" version of the Youla parameterization (7), which only involves one single affine constraint. This leads to a new robust filtering problem, allowing us to derive an LMI for efficient computation.
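As a concrete reading of Lemma 1, the following minimal sketch builds A_cl and checks Schur stability numerically, assuming a strictly proper plant (D = 0) as in (1); the numerical values are illustrative.

```python
# Numerical reading of Lemma 1: form A_cl for a strictly proper plant
# (D = 0) and test Schur stability (spectral radius < 1).
import numpy as np

def internally_stabilizes(A, B, C, AK, BK, CK, DK):
    A_cl = np.block([[A + B @ DK @ C, B @ CK],
                     [BK @ C,         AK]])
    return np.max(np.abs(np.linalg.eigvals(A_cl))) < 1.0

# toy example: unstable scalar plant x+ = 2x + u, y = x,
# with the (effectively static) controller u = -1.9 y
A = np.array([[2.0]]); B = np.array([[1.0]]); C = np.array([[1.0]])
AK = np.zeros((1, 1)); BK = np.zeros((1, 1)); CK = np.zeros((1, 1))
DK = np.array([[-1.9]])
print(internally_stabilizes(A, B, C, AK, BK, CK, DK))  # True: |2 - 1.9| < 1
```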
A. Stabilization lemma We first introduce a classical stabilization lemma. Lemma 2: Given a doubly coprime factorization (6) with G = M_l^{-1}N_l, and a right coprime factorization K = YX^{-1} of the controller with X, Y ∈ RH∞, we have the equivalent statements: 1) the controller K internally stabilizes G; 2) M_l X − N_l Y has an inverse in RH∞. This result is standard [3, Chapter 4]. A quick understanding might be: a classical result [1, Lemma 5.3] says that K internally stabilizes G if and only if the closed-loop responses from (δ_y, δ_u) to (y, u) in Figure 1 are stable. Simple algebra expresses these closed-loop responses (8) in terms of X, Y and (M_l X − N_l Y)^{-1}, which proves the equivalence between 1) and 2). B. Convex parameterization of stabilizing controllers Our first result is the following convex parameterization of all stabilizing controllers, which can be considered as a "kernel" version of the Youla parameterization. Theorem 1: Given a coprime factorization (6) with G = M_l^{-1} N_l, we have an equivalent representation of C_stab as

$$\mathcal{C}_{\rm stab} = \left\{ \mathbf{Y}\mathbf{X}^{-1} \;\middle|\; \mathbf{X}, \mathbf{Y} \in \mathcal{RH}_\infty, \; \mathbf{M}_l \mathbf{X} - \mathbf{N}_l \mathbf{Y} = I \right\}. \tag{9}$$

Proof: ⇐ Suppose that there exist X, Y ∈ RH∞ satisfying the affine constraint in (9). We prove that K = YX^{-1} internally stabilizes G. Indeed, the identity M_l X − N_l Y = I certifies that X and Y are right coprime and that M_l X − N_l Y trivially has an inverse in RH∞. By Lemma 2, we know K = YX^{-1} ∈ C_stab. ⇒ Given K ∈ C_stab, we prove that there exist X, Y ∈ RH∞ satisfying the affine constraint in (9) such that K = YX^{-1}. Take a right coprime factorization K = Y₀X₀^{-1} over RH∞. By Lemma 2, we know Δ := M_l X₀ − N_l Y₀ has an inverse in RH∞. Upon defining X := X₀Δ^{-1} ∈ RH∞ and Y := Y₀Δ^{-1} ∈ RH∞, we obtain M_l X − N_l Y = I and YX^{-1} = K. This completes the proof. Similarly, we can derive an equivalent parameterization (10) using the right coprime factorization G = N_r M_r^{-1}. There exist different internal stability conditions based on the coprime factorization (6); see e.g., [1, Lemma 5.10 & Corollary 5.1]. To the best of our knowledge, the explicit characterization with a single affine constraint in Theorem 1 has not been formulated before. Theorem 1, the Youla parameterization [4], the SLP [13] and the IOP [6] are expected to be equivalent to each other in theory. We give some discussions below. Remark 1 (Connection with Youla): As shown in (7), the classical Youla parameterization only has one parameter Q ∈ RH∞ with no affine constraints. Indeed, all the solutions to the affine equation in (9) are parameterized by

$$\begin{bmatrix} \mathbf{X} \\ \mathbf{Y} \end{bmatrix} = \begin{bmatrix} \mathbf{X}_r \\ \mathbf{Y}_r \end{bmatrix} + \begin{bmatrix} \mathbf{N}_r \\ \mathbf{M}_r \end{bmatrix} \mathbf{Q}, \qquad \mathbf{Q} \in \mathcal{RH}_\infty,$$

where [X_r; Y_r] is a special solution to (10) and [N_r; M_r] spans the kernel space of [M_l −N_l] in RH∞, which is confirmed by the coprime factorization (6). Remark 2 (Connection with SLP/IOP): Both the SLP and the IOP utilize certain closed-loop responses to parameterize C_stab. In particular, the IOP [6] relies on the closed-loop responses from (δ_y, δ_u) to (y, u) in (8): the set of all internally stabilizing controllers is parameterized by Φ_yy, Φ_uy, Φ_yu, Φ_uu lying in the affine subspace defined by

$$\begin{bmatrix} I & -\mathbf{G} \end{bmatrix} \begin{bmatrix} \mathbf{\Phi}_{yy} & \mathbf{\Phi}_{yu} \\ \mathbf{\Phi}_{uy} & \mathbf{\Phi}_{uu} \end{bmatrix} = \begin{bmatrix} I & 0 \end{bmatrix}, \qquad \begin{bmatrix} \mathbf{\Phi}_{yy} & \mathbf{\Phi}_{yu} \\ \mathbf{\Phi}_{uy} & \mathbf{\Phi}_{uu} \end{bmatrix} \begin{bmatrix} -\mathbf{G} \\ I \end{bmatrix} = \begin{bmatrix} 0 \\ I \end{bmatrix}, \tag{11}$$

with all four responses in RH∞, and the controller is given by K = Φ_uy Φ_yy^{-1}. There are four affine constraints in (11). We can verify that given any X, Y satisfying the constraint in (9), the choice Φ_yy = XM_l, Φ_uy = YM_l, Φ_yu = XN_l, Φ_uu = I + YN_l is feasible for (11) and parameterizes the same controller. A similar relationship with the SLP can be derived as well. C. A robustness variant and robust H∞ filtering While Theorem 1, Youla [4], SLP [13] and IOP [6] are all theoretically equivalent to each other, they have different computational features. As we will see in Section IV, the fact that Theorem 1 has only one affine constraint will be essential for deriving an equivalent efficient LMI condition. Indeed, the single affine equality in (9) does not need to be satisfied exactly for internal stability. Lemma 3 (Robustness lemma): Given a coprime factorization (6) with G = M_l^{-1}N_l, suppose X, Y ∈ RH∞ satisfy M_l X − N_l Y = I + Δ (12). If (I + Δ)^{-1} ∈ RH∞, then K = YX^{-1} internally stabilizes G. Proof: We have (M_l X − N_l Y)^{-1} = (I + Δ)^{-1} ∈ RH∞, which also certifies that X and Y are right coprime. Combining this fact with Lemma 2, we complete the proof. Remark 3: The condition (I + Δ)^{-1} ∈ RH∞ is only sufficient for internal stability. A simple plant and a choice of X, Y ∈ RH∞ can be constructed for which K = YX^{-1} internally stabilizes G while (I + Δ)^{-1} is unstable.
Thus, (I + Δ)^{-1} ∈ RH∞ is not necessary for internal stability. From Lemma 3, we are ready to introduce our second result, which can be interpreted as a robust filtering problem. Theorem 2: Given a coprime factorization (6) with G = M_l^{-1} N_l, the controller K internally stabilizes G if and only if there exist X and Y in RH∞ such that

$$\|\mathbf{M}_l \mathbf{X} - \mathbf{N}_l \mathbf{Y} - I\|_\infty < 1, \qquad \mathbf{K} = \mathbf{Y}\mathbf{X}^{-1}. \tag{13}$$

If (13) holds, then K = YX^{-1} is an internally stabilizing controller and the closed-loop response satisfies the quantitative H∞ bound (14). Proof: ⇒ If K internally stabilizes G, Theorem 1 guarantees that we have X, Y ∈ RH∞ such that K = YX^{-1} and M_l X − N_l Y = I. Thus, (13) is trivially satisfied. ⇐ Let X, Y ∈ RH∞ satisfy (13). Then Δ := M_l X − N_l Y − I satisfies ‖Δ‖∞ < 1, and by the small gain theorem, we know (I + Δ)^{-1} ∈ RH∞. Lemma 3 then implies that K = YX^{-1} internally stabilizes G. To prove (14), applying K = YX^{-1} expresses the closed-loop response in terms of X, Y and Δ. Considering (I + Δ)^{-1} = I − Δ(I + Δ)^{-1}, the corresponding expansion of the closed-loop response is easy to verify; standard H∞ norm inequalities then yield (14). We note that the condition (13) has an interesting interpretation as a robust filtering problem [16], [17]: it aims to find a stable filter [X; Y] ∈ RH∞ such that the residual M_l X − N_l Y − I has H∞ norm less than 1. This filtering interpretation motivates the LMI development in Section IV. A. H∞ filtering problem We consider a right H∞ filtering problem: given μ > 0 and P₁(z), P₂(z) ∈ RH∞ with a given state-space realization, find a stable filter F(z) ∈ RH∞ such that

$$\|\mathbf{P}_1(z)\mathbf{F}(z) - \mathbf{P}_2(z)\|_\infty^2 < \mu. \tag{15}$$

We call (15) the right H∞ filtering problem, since the filter F(z) is on the right side of P₁(z). In the classical literature on filtering (see [16], [17] and the references therein), a left H∞ filtering problem is more common: find F(z) ∈ RH∞ such that ‖F(z)H₁(z) − H₂(z)‖²∞ < μ for given H₁(z), H₂(z) ∈ RH∞ (16). Figure 2 illustrates these two types of filtering problems. Fig. 2: (a) Right-filtering problem, where the filter F appears before the dynamical system P₁. (b) Left-filtering problem, where the filter F appears after the dynamical system H₁. It seems that most existing literature focuses on the left H∞ filtering problem (16), while the right H∞ filtering problem (15) has received less attention. Therefore, our LMI-based solution to (15) might be of independent interest. Lemma 4: Given a stable transfer function T(z) = C(zI − A)^{-1}B + D ∈ RH∞, then ‖T(z)‖²∞ < μ if and only if there exists a positive definite matrix P ≻ 0 such that

$$\begin{bmatrix} A^{\mathsf T}PA - P + C^{\mathsf T}C & A^{\mathsf T}PB + C^{\mathsf T}D \\ B^{\mathsf T}PA + D^{\mathsf T}C & B^{\mathsf T}PB + D^{\mathsf T}D - \mu I \end{bmatrix} \prec 0. \tag{17}$$

The right H∞ filtering problem is solved in the theorem below. Theorem 3: There exists F(z) ∈ RH∞ such that (15) holds if and only if there exist symmetric matrices X, Z, and matrices Q, F, L, R of compatible dimensions such that the LMI (18) holds, where ⋆ denotes the symmetric parts. If (18) holds, a state-space realization of F(z) can be recovered from the solution variables, where U is an arbitrary non-singular matrix. Proof: Let a state-space realization of F(z) be F(z) = (Â, B̂, Ĉ, D̂). Standard system operations (see Appendix C) lead to a state-space realization of P₁(z)F(z) − P₂(z). By Lemma 4, we know (15) holds if and only if there exists a positive definite matrix P̃ such that the matrix inequality (20) holds. Note that (20) is bilinear in terms of the design variable P̃ and the filter realization Â, B̂, Ĉ, D̂. Motivated by the nonlinear change of variables in [16], [20], we partition the Lyapunov variable P̃ and its inverse into blocks as in (21). Letting the corresponding blocks and A have the same dimension, U and V are invertible. We define N := Y^{-1}, and we further introduce a change of variables (22), including Z := −NVUᵀ = X − N (derived from (21)), which is symmetric. We can then verify the identities (23) (some detailed computations are presented in the appendix). Then, (20) is equivalent to an inequality that turns out to be the same as (18).
From (22), the state-space realization of F(z) follows. We only need to prove −(NV)^{-1} = UᵀZ^{-1}, which is equivalent to the partition identity (21); the first equivalence applies the fact that Z = X − Y^{-1}. This completes the proof. The linearization of the bilinear inequality (20) via the nonlinear change of variables in (22) and (23) is motivated by the classical literature on robust filtering [16], [17]. Due to the difference between the right and left filtering problems, we remark that the LMI characterization in (18) has not appeared in [16], [17], and thus Theorem 3 might have independent interest. Note that we have used the standard H∞ LMI in (17), and that one can further derive a similar LMI to solve (15) based on the extended H∞ LMI in [23]. We provide such a characterization in Appendix D. B. Enforcing internal stability via an LMI From Theorem 3, we can derive an equivalent LMI formulation for the internal stability condition in Theorem 2. This is formally stated in the theorem below. Theorem 4: Given a coprime factorization (6) with G = M_l^{-1} N_l, let M_l and N_l have the state-space realization (24). There exist X(z) and Y(z) in RH∞ such that ‖M_l X − N_l Y − I‖∞ < ε if and only if there exist symmetric matrices X, Z, and matrices Q, F, L_X, L_Y, R_X and R_Y of compatible size such that the LMI (26) holds. In (26), the notation ⋆ denotes the symmetric parts and f_i, i = 1, . . . , 6, are linear functions of the decision variables. If (26) holds, state-space realizations for X(z) and Y(z) are given in (27), where U is an arbitrary non-singular matrix. Proof: Define P₁(z) = [M_l(z) −N_l(z)] and P₂(z) = I, which have a state-space realization induced by (24). Applying Theorem 3 to ‖P₁(z)F(z) − P₂(z)‖∞ < ε completes the proof. Setting ε = 1 recovers the internal stability condition (13) in Theorem 2. Thus, the following corollary is immediate. Corollary 1: Given a coprime factorization (6) with G = M_l^{-1} N_l and (24), the controller K internally stabilizes G if and only if there exist symmetric matrices X, Z, and matrices Q, F, L_X, L_Y, R_X and R_Y of compatible size such that the LMI (26) holds with ε = 1. If (26) holds with ε = 1, the controller K = YX^{-1} in (28) internally stabilizes G, where Y and X have the state-space realizations in (27). The state-space realization of K = YX^{-1} in (28) is based on standard system operations (see, e.g., [1, Chapter 3.6]). We provide a detailed calculation for (28) in Appendix C. Note that the state-space realization (24) for M_l and N_l can be easily computed under Assumption 1 (see Appendix A). Remark 4 (Comparison with Youla/SLP/IOP): Youla [4], SLP [13], IOP [6] and Theorem 1 present equivalent convex parameterizations for C_stab. However, they have very different numerical features in practical computation. The Youla parameter Q can be freely chosen in RH∞, but the resulting controller in (7) may not have an a priori fixed order. The affine constraints in SLP [13] and IOP [6] (see (11)) make their numerical computation non-trivial. The FIR approximation in [6], [13] often leads to controllers of very high order that are impractical to deploy. Furthermore, the FIR approximation may make the SLP [13] infeasible even for very simple systems; see [12]. In contrast, the single affine constraint in Theorem 1 allows for a robust filtering interpretation (13) and admits an efficient LMI (26) for all stabilizing controllers. Moreover, the controller from (26) always has the same order as the system state in (1).
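To make the LMI machinery concrete, the sketch below checks ‖T(z)‖²∞ < μ by the standard discrete-time bounded real lemma, which is the form of Lemma 4 used here; it is an illustrative re-implementation in cvxpy with the SCS solver, not the paper's YALMIP/MOSEK code, and the strictness margin eps is an assumption.

```python
# Feasibility test of ||T(z)||_inf^2 < mu via the standard discrete-time
# bounded real lemma (one LMI in P), solved with cvxpy + SCS.
import cvxpy as cp
import numpy as np

def hinf_sq_below(A, B, C, D, mu, eps=1e-8):
    n, m = A.shape[0], B.shape[1]
    P = cp.Variable((n, n), symmetric=True)
    M = cp.bmat([[A.T @ P @ A - P + C.T @ C, A.T @ P @ B + C.T @ D],
                 [B.T @ P @ A + D.T @ C,
                  B.T @ P @ B + D.T @ D - mu * np.eye(m)]])
    prob = cp.Problem(cp.Minimize(0),
                      [P >> eps * np.eye(n), M << -eps * np.eye(n + m)])
    prob.solve(solver=cp.SCS)
    return prob.status == cp.OPTIMAL

# toy system T(z) = 0.5 / (z - 0.5), whose H-infinity norm is 1 (at z = 1)
A = np.array([[0.5]]); B = np.array([[0.5]])
C = np.array([[1.0]]); D = np.array([[0.0]])
print(hinf_sq_below(A, B, C, D, mu=1.1))  # True  (norm^2 = 1 < 1.1)
print(hinf_sq_below(A, B, C, D, mu=0.9))  # False (infeasible)
```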
To the best of our knowledge, Theorem 4 offers the first efficient LMI among the recent surge of interest in frequency-domain characterizations of C_stab [6], [10]-[13]. Remark 5 (Comparison with the standard LMI for stability): For internal stability, one can also derive an LMI based on Lemma 1. In particular, Lemma 1 leads to a bilinear matrix inequality (29) in P ≻ 0 and the controller realization, which can be linearized into an LMI using a nonlinear change of variables [19]; see also [21, Section 3] for a recent revisit. However, the change of variables for (29) has a complicated inverse and factorization. Our new controller construction in (27) is more straightforward (with inverses only on diagonal blocks), which offers benefits in other scenarios, e.g., decentralized control [14], [15]. C. Decentralized stabilization One main motivation for the recent surge of interest in frequency-domain characterizations of C_stab [6], [10]-[13] is that one can impose structural constraints on the design parameters that lead to structural controller constraints, such as a decentralized controller K. Note that imposing convex constraints on K directly often leads to intractable synthesis problems [14], [15], while imposing convex constraints on the new parameters after reparameterization of C_stab naturally leads to a convex (but infinite-dimensional) problem; see, e.g., [10, Section IV]. Research in decentralized control has remained of great interest [24], especially for large-scale interconnected systems. The aim is to design a decentralized controller based on local measurements for each subsystem to regulate the global behavior. In our LMI computation (26), structural constraints on X(z) and Y(z) may be enforced by constraining the decision variables Z, Q, F, L_X, L_Y, R_X, and R_Y. In particular, if all these variables have a block-diagonal (decentralized) structure, then X(z) and Y(z) also have the same block-diagonal structure, hence K(z) = Y(z)X^{-1}(z) will be block-diagonal (decentralized), as is the state-space realization in (28). Note that imposing a block-diagonal constraint on the Youla parameter Q does not lead to a decentralized controller K (see [14], [15] for more discussions on constraints for Q). V. NUMERICAL EXPERIMENTS In this section, we consider a discrete-time LTI system that consists of n subsystems interacting over a chain graph (see Figure 3) to illustrate the performance of our LMI-based computation in Theorem 4 and Corollary 1. We used YALMIP [25] together with the solver MOSEK [26] to solve the optimization problems in our numerical experiments. A. Example setup Similar to [27], we assume the dynamics of each node x_i are those of an unstable second-order system coupled with its neighbouring nodes through an exponentially decaying function, as in (30), where α(i, j) = (1/5)e^{−(i−j)²}, N_i = {i − 1, i + 1} ∩ {1, . . ., n}, and i = 1, . . . , n. Our goal is to design a decentralized dynamical controller for each subsystem i based on its own measurement, u_i = K_i y_i, to stabilize the global system. We first compute a doubly coprime factorization of this system by the standard pole placement method, in which the closed-loop poles were chosen randomly from −0.5 to 0.5 (see Appendix A for the computation of a doubly coprime factorization). As discussed in Section IV-C, we can constrain the decision variables Z, Q, F, L_X, L_Y, R_X, and R_Y to be block-diagonal with dimensions consistent with each subsystem. This leads to block-diagonal X(z) and Y(z), and thus results in the desired decentralized controller.
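A minimal sketch of this pole-placement construction of the coprime factors follows; the factor formulas track Appendix A (the N_r and M_r realizations are stated there, while the M_l and N_l realizations are one standard observer-based choice), the random pole locations mirror the setup above, and the code assumes a controllable and observable realization with distinct placed poles, as scipy's place_poles requires.

```python
# Pole-placement construction of coprime factors: pick F, L so that
# A + BF and A + LC are Schur, then form (A, B, C, D) data of each factor.
import numpy as np
from scipy.signal import place_poles

def coprime_factors(A, B, C, D, rng=np.random.default_rng(0)):
    n = A.shape[0]
    pF = rng.uniform(-0.5, 0.5, size=n)  # distinct poles in (-0.5, 0.5)
    pL = rng.uniform(-0.5, 0.5, size=n)
    F = -place_poles(A, B, pF).gain_matrix         # A + B F is Schur
    L = -place_poles(A.T, C.T, pL).gain_matrix.T   # A + L C is Schur
    AF, AL = A + B @ F, A + L @ C
    Mr = (AF, B, F, np.eye(B.shape[1]))            # u = M_r v
    Nr = (AF, B, C + D @ F, D)                     # y = N_r v
    Ml = (AL, L, C, np.eye(C.shape[0]))            # observer-based choice
    Nl = (AL, B + L @ D, C, D)
    return Mr, Nr, Ml, Nl

# toy 2-state plant (controllable and observable)
A = np.array([[1.2, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [1.0]]); C = np.array([[1.0, 0.0]]); D = np.zeros((1, 1))
Mr, Nr, Ml, Nl = coprime_factors(A, B, C, D)
```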
In particular, we solved the following optimization problem:

$$\min \; h(Q, F, L_X, L_Y, R_X, R_Y) \quad \text{subject to } (26), \tag{31}$$

where we chose ε = 1 in (26) to guarantee stability, and the cost function h(Q, F, L_X, L_Y, R_X, R_Y) regularizes the size of the controller realization through the entry-wise norm ‖V‖∞ := max_{ij} |V_{ij}| of the decision variables. (See our code at https://github.com/soc-ucsd/iop_lmi.) For the comparison of numerical efficiency, we also solved a centralized H₂ optimal control problem using the SLP [13] according to the setup in [12, Section 7], where the standard FIR approximation was used in the numerical computation. B. Numerical results and computational efficiency We first consider an LTI system (30) with n = 3 subsystems. For this small system, it took less than half a second to solve (31), resulting in a decentralized controller whose first local component is

$$K_1(z) = \frac{-2.647z^2 - 0.04603z - 0.02581}{z^2 + 0.01875z + 0.009845}.$$

The order of each local controller u_i = K_i y_i is guaranteed to be the same as the state dimension of each subsystem (which is two in this case). Figure 4 shows the responses of the input u_i[t] and output y_i[t] when the initial state was x_i[0] = [0, 1]ᵀ, i = 1, 2, 3. As expected, the decentralized controller from (31) stabilizes the global system (30). For comparison, we computed a centralized H₂ optimal controller via the SLP [13] according to [12, Section 7]. This SLP problem is infinite-dimensional, and we used a standard FIR approximation for computation. Figures 5(a) and 5(b) demonstrate the closed-loop responses using the resulting centralized dynamic controller when the FIR length was 10 and 20, respectively. Note that the FIR approximation always leads to a dynamical controller of high order (which scales linearly with the FIR length): in particular, the order of the controller with FIR length 10 is 84 and the order of the controller with FIR length 20 is 174. In contrast, our LMI-based computation in Theorem 4 and Corollary 1 guarantees that the order of the resulting controller will be the same as the order of the system. Moreover, it is known that the computational efficiency of the FIR approximation does not scale well with the system dimension, as it leads to optimization problems of very large size. To illustrate this, we varied the number of subsystems from 6 to 14 in (30) and allowed each subsystem to use its own state (i.e., y_i[t] = x_i[t]). Table I lists the time consumption for solving (31) and the SLP problem with FIR length 20. It is clear that our LMI-based computation is much more scalable. For the case n = 14, our LMI was two orders of magnitude faster to solve. Finally, as shown in Table II, the order of each local dynamic controller from (31) is always two, whereas the order of the controller from the SLP increases dramatically and reaches 1092 when n = 14, which is impractical for deployment. VI. CONCLUSIONS In this paper, we have presented a kernel version of the Youla parameterization for the set of stabilizing controllers C_stab. This parameterization only involves a single affine constraint, which can be viewed as a novel robust filtering problem. This filtering perspective leads to the first efficient LMI for the frequency-domain characterization of C_stab. Our LMI characterization offers significant advantages compared to the existing parameterizations (SLP [13], IOP [6], and the mixed versions [12]) in terms of both computation and implementation.
Ongoing research directions include investigations of LMIs for performance specifications under our new controller parameterization. APPENDIX A. State-space realization of the coprime factorization It is straightforward to find a doubly coprime factorization for G(z) given a stabilizable and detectable state-space realization [1, Theorem 5.9]. This amounts to finding a stabilizing feedback gain and a stabilizing observer gain. Theorem 5: Suppose G(s) is a proper real-rational matrix and G = (A, B, C, D) is a stabilizable and detectable state-space realization. Let F and L be such that A + BF and A + LC are both stable. Then, a doubly coprime factorization of G is given by the choices in (33). We can directly verify that the choices in (33) satisfy Definition 2 (see [22] for detailed computations). The coprime factorization of a transfer matrix in (33) has a feedback control interpretation [1, Remark 5.3]. For example, the right coprime factorization comes out naturally from changing the control variable by a state feedback. Consider the state-space model of (1) with the control input redefined through the state feedback u = Fx + v. From these equations, it is easy to see that the transfer matrix from v to u is M_r(z) = (A + BF, B, F, I), and that the transfer matrix from v to y is N_r(z) = (A + BF, B, C + DF, D). Therefore, we have u = M_r v and y = N_r v, so that y = N_r M_r^{-1} u, i.e., G = N_r M_r^{-1}. B. Computation of (23) Here, we provide some detailed computations for (23). For notational convenience, we highlight Â, B̂, Ĉ, D̂ in blue. • For (23a), we can verify the two block identities directly; combining Z = −UVᵀN with the two equations above leads to (23a). • For (23b), the verification follows by a similar direct computation.
2022-04-01T01:15:35.050Z
2022-03-31T00:00:00.000
{ "year": 2022, "sha1": "4f01350e0c1f6bde05c6e92dfda9a35132063ff5", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "4f01350e0c1f6bde05c6e92dfda9a35132063ff5", "s2fieldsofstudy": [ "Engineering", "Mathematics", "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics", "Engineering" ] }
58946212
pes2o/s2orc
v3-fos-license
“Present-day realities of risk management in the activity of Ukrainian banks” Modern development of banking business is connected with significant risks, which, taking into account globalization processes, political and economic problems in Ukraine and worldwide, and the development of technological and information systems, tend to transform; therefore, it is very difficult to identify them and take preventive measures concerning their smoothing. Taking the abovementioned into account, it is reasonable to assess the modern state of risk management in the activity of Ukrainian banks and its influence on banking system development. For this purpose, the authors analyzed the performance of Ukrainian banks in the period 2008–2017 based on official statistical data of the National Bank of Ukraine and measures of the economic standards of banking activity, and studied the modern state of risk management in Ukrainian banks. The authors offer a process of effective organization of the risk management system in national banks, which is a prerequisite for safe management of the bank. During the study, the authors found a significant decrease in the share of credits in the total assets of Ukrainian banks and low quality of assets of Ukrainian banks during 2008–2017. This is caused by the significant amount of loan arrears: during the study period, the amount of loan arrears in 2016 increased 36 times in comparison with 2008. The authors point to the need for improvement of the assessment of banks' riskiness, as a result of which they offer to use the methods of descriptive statistics for assessing risks and identifying them at all levels of banking activity. In recent years, there is a clear tendency towards increase in economic and political instability at national, regional and global levels. Under increasing instability, banking systems, which accumulate political, macroeconomic and institutional risks, find themselves in the most unfavorable conditions. Herewith, the emergence of instability directly in the banking sector of the economy leads to negative consequences for economic development as a whole, and in some cases provokes a socio-political crisis. National banks are being seriously tested by time in the conditions of constant economic transformations.
Rapid change of the operating conditions, the influence of the external environment, and the need for internal transformations cause constant improvement of the banking system. The Ukrainian banking system is a main segment of the financial market and the only source of external financing for a range of important sectors of the economy. Crisis situations in the banking sector became particularly acute in the years 2008-2010 and became lessons for preventing crises in the future or at least smoothing them. A crisis situation at the level of a separate bank can occur unexpectedly or develop gradually. Today, the problem of the essence of risk and its management is one of the most relevant not only in the Ukrainian banking system, but also in the activity of banks worldwide. In modern times, the Ukrainian banking system operates under the instability of the national and world market environment, so during economic globalization, the task of effective risk management in national banks is extremely relevant, and it cannot be performed without implementing new forms, methods and instruments for managing bank risks in the activity of banking institutions. From the scientific point of view, the risk management system should be based on a scientifically rigorous methodology adapted to the realities of banking activity, advanced technologies and world experience in risk management. In the conditions of globalization and integration of banking business, increased competition and the growth of threats to credit security, the task is to increase banks' own financial sustainability and optimize the relationship between the competing characteristics - risk and profitability. Today, effective bank risk management should be considered one of the primary tasks of banking institutions when implementing their development strategy. 1. In their works, many famous scientists pay significant attention to theoretical and methodological aspects of the development of risk management systems in banks, in particular, the issue of defining their economic essence, functions and tasks. For example, in their work, Crouhy et al. (2012) consider the methods of risk assessment, modern instruments of risk management, the use of modern technologies, the change of risk management principles and their regulation. Andersen and Schrøder (2010) are of the opinion that in modern times there is an increased need for effective risk management; herewith, the authors state that absent or improper risk management can have damaging consequences for enterprises and the whole economy. Hopkin (2010) states that the acute need for risk management is stimulated not only by the global financial crisis, but also by other global events like terrorism, natural disasters, etc. Herewith, the author states that enterprises should take into account all the risks that affect their activity directly or indirectly. So, Kornev (2006) defined risk management as a process that implies systematic monitoring and management of the risks inherent in banking activity. In our opinion, risk management also involves risk monitoring. Kireitsev (2001) understands risk management as a risk management system, which implies the use of methods and instruments directed at identifying risks, calculating the probability of their emergence, assessing and smoothing them.
Starostina and Kravchenko (2004) define risk management as management of the whole organization or its separate subdivisions taking into account risk factors, based on a special procedure of their definition and assessment, exchange of information about risk, and control of the results of using these methods. In the ISO document "Working Draft for ISO Guide. Risk Management Terminology" (2009), risk management is defined as follows: "concerted activity regarding management of the organization and its control taking the risk into account". Prymostka (2004) defined bank management as a science of the safe and effective management of all the processes and relationships that characterize a bank's activity. The increase in profitability and the decrease of risk are the two main directions of bank management. So, considering that the responsibility for system functioning, reaction to a risky situation and making the corresponding decisions is the competence of management, it can be stated that bank risk management in its broad sense is a part of bank management, thus, of general bank management. In turn, Sifumba et al. (2017) state that risk management is one of the most important issues, key for business success, but one that can negatively affect profitability if not realized properly. Shyrynska (1998) defines the aim of risk management as organizing a process of effective management of these risks by establishing strict limits separately for every type of risk, which must be strictly observed. As we see, the author understands the aim of risk management as smoothing the risks the banking institution will take. But if one thinks in detail about the consequences of a bank's position of smoothing or avoiding risks, this will first of all lead to losing some share of the market, as all bank operations can be considered risky; that is, while achieving this aim, it will be necessary to refuse to perform the riskiest operations or to refuse service, in particular lending to risky clients. This, in turn, will stimulate the adoption of an aggressive marketing policy. Thus, risk itself is not a negative phenomenon; incorrect risk estimation and risk management are. Drogalas et al. (2017) state that the main task of business management is to constantly monitor the risks and implement the practices of their management. Besides, the authors state that enterprises should use internal audit as a key instrument of effective risk management. At the same time, taking into account the achievements of fundamental and applied research, insufficient attention is paid to separate theoretical, methodological and applied aspects of defining the main stipulations for organizing the risk management system in banks and implementing international risk management standards. There is still a debate concerning the issue of specifying the methods and instruments of risk management; the improvement of scientific approaches to assessing bank risks that are not subject to quantitative assessment; and the formation of new business models of banks to support an allowable level of risk in their activity. 2. The research method is based on systemic and dialectical approaches to the scientific understanding of bank risk management as an important segment of banking institutions' activity. A range of modern research methods was used to achieve the aim of the paper.
In particular, when studying the process of bank risk management, the methods of scientific abstraction, analysis and synthesis were used. In the process of studying the modern realities of bank risk management and the patterns and contradictions of its development, empirical methods were used, namely statistical observations, comparison, statistical methods of collecting and processing information, and systemic and structural analysis. The informational background of the study is the official statistical data of the National Bank of Ukraine and the annual reports of banking institutions, together with the results of research by Ukrainian and foreign scientists. 3. Risk management has long been recognized abroad as an effective instrument of modern management. Herewith, nowadays, risk management should be defined as one of the main directions of modern bank management, one that studies the problems of managing banks taking into account different risks and whose task is to create an effective risk management system based on certain concepts, laws, principles and methods. Risk management is quite dynamic, as the increase of its effectiveness directly depends on how rapid the reaction is to any changes in the economic and financial situation. That's why, to perform risk management effectively, it is necessary to be able to use the techniques and methods for assessing, identifying and effectively managing bank risks. Risk management involves both the strategy and the tactics of management. At the moment, the top managers of banking institutions still do not understand the aims and functions of risk management, which leads to things incompatible from the point of view of corporate governance, such as risk management performed by the internal audit service or, vice versa, control functions performed by the risk management subdivision. That's why, lately, enterprise risk management has attracted unprecedented interest and worldwide attention. The growing interest in ERM is explained by a range of challenges in business, ranging from the global financial crisis to corporate frauds and scandals and bank collapses (Soliman & Adam, 2017). Diagnostics of the existing national practice of risk management in banks still points to the formal nature of the risk management system, because of the absence of integration between structural subdivisions and the lack of differentiation of their duties and powers in supporting the process of bank risk management. Difficulties also emerge in the clear formulation of the aims and tasks of risk management and the choice of appropriate instruments for optimizing the level of risks. In the pre-crisis period, Ukrainian banking institutions had already organized some elements of bank risk management, but as time went by, it became clear that this was not enough. This can be explained by the absence of a unified methodological basis for bank risk management, bank control, financial planning, and interest rate and limit policies. Defining the place of risk management in the model of business processes in the bank is the main strategic moment, which defines the bank's strategy. Kuzmak (2011) argued that the strategy of any bank should provide for qualitative changes in management standards at the technological level and meet new targets, the main prerequisite of which is effectively functioning integrated risk management. That's why strategic aims should be established not as part of "paper risk management", but for a bank risk management process compliant with all international standards.
It is necessary to effectively manage the risks instead of avoiding them, but at the same time, it is necessary to take into account that they are all connected to each other. Therefore, one of the main tasks every bank faces is to learn to assess risks, show them properly in management information, and work with them systematically. The issue is also relevant among foreign scientists. So, Constantinescu and Nistorescu (2008) and Duţă (2016) propose their own sequences of bank risk management stages. One can see that according to these authors, the number of stages of bank risk management has increased; in particular, there emerged a stage such as the choice of the risk management method, which, in our opinion, is quite appropriate, but we cannot agree that the stage of risk monitoring is not included, as it is a quite important moment for every bank. The same applies to control. In our opinion, to clearly understand the essence of bank risk management, seven stages of bank risk management should be defined for banking institutions (see Figure 1). At the first stage, the responsible bank employees should define the essence and classification of the risks that can emerge during the bank's activity, and the strategic and tactical aims of the bank concerning managing the banking institution taking the risks into account. At the second stage, bank managers obtain the information for identifying the risks. Identification should be understood as acknowledging and understanding the existing risks and the risks that can emerge in the future. In its essence, the definition of risks is a continuous process and is performed at the level of the bank's structural subdivisions. After a risk is defined, it should be identified, meaning that it should be assigned to one of the previously defined classification groups. The difficulty of performing this stage of bank risk management depends on the source of emergence and the size of the risk. The identification of risk is a necessary, but not sufficient, procedure. The third stage involves risk assessment, that is, the quantitative measurement (quantification) of the defined risks, during which characteristics are determined such as the probability and severity of possible consequences. Herewith, a system of limits is also developed for the risks that can be quantitatively assessed. Such an assessment should also define the allowable limits for every type of risk. The fourth stage involves the choice and use of methods and techniques for affecting the risks to minimize or avoid possible unfavorable consequences. If the taken risks are allowable, bank management can only perform control, thus passing to the sixth stage of bank risk management. Also, in some cases, banks can use methods for avoiding risks. At the fifth stage, it is necessary to monitor risks, which means an independent system of risk assessment and control, performed through internal and external audit and analytics. Monitoring is aimed at the timely observation of risk levels. At the sixth stage, one of the effective elements of management is control of the subdivisions' activity, which provides for the effectiveness of the risk management system and the accuracy and validity of information. Control involves establishing limits and informing the executants about them with the help of stipulations, standards, and procedures. At the seventh stage, bank managers should draw conclusions and make proposals for the future. That's why the necessary conditions for effective management are the training of qualified managers of banking institutions and the presence of knowledge and skills in using risk management methods.
Summing up the abovementioned, let us note that the effective organization of a risk management system is a prerequisite of safe bank management, which in turn contributes to strengthening the Ukrainian banking system as a whole and speeding up its integration into the international banking community. To assess the level of Ukrainian banks' financial sustainability, let us analyze the main financial indicators of Ukrainian banking institutions (see Table 1). Beginning from 2014, destabilization of both the banking system and the financial sustainability of the state as a whole has been observed in Ukraine, the reasons for which are the political, financial, economic and banking crises. The number of banks in 2017 was 88, meaning that beginning from 2014, 70 banks were liquidated, which is the largest number in the whole history of independent Ukraine. The decreased share of operating banks is a consequence of general economic destabilization, which to some extent reinforces it, as the losses of clients of bankrupt banks (UAH 111 billion as of mid-2016) worsen their financial state and business expectations. A decrease in the number of banks' operating departments has also taken place (National Bank of Ukraine, 2016). The banking institutions' assets were growing till 2014, fell by 3.5% in 2017, and amounted to UAH 1,233 billion. The credit portfolio had a tendency towards increase till 2014, began to decrease quickly when the crisis happened, and by 2017 had decreased by UAH 399 billion. There is observed a significant decrease in the share of loans in the total assets of Ukrainian banks, to 39.45%. One can observe the low quality of assets of Ukrainian banks during 2008-2017, caused by the significant amount of loan arrears. The situation became much worse due to the significant decrease of the GDP of Ukraine in 2014 and the devaluation of the Ukrainian national currency in the period of crisis by more than 300%, which became the reason for the significant increase of the debt service burden for borrowers who obtained loans in foreign currency. The growth of banks' obligations continued till 2014 and reached UAH 1,168 billion, which is 45% more than in 2008; by 2017, an insignificant decrease of banks' obligations by 6.4% is observed. In 2015, the total amount of own equity decreased by 36% in comparison with 2014, which is explained by banks' unprofitability as of year-end 2014. The decrease in the profitability of the banking business can be explained by losses in banking activity during 2013-2017. The optimal value of the indicator "return on assets" is from 1 to 1.5%; given the losses of the banking system, however, the value of this indicator is negative. In 2014, a significant crisis relapse was provoked by political and economic events in the country. The banks' compliance with the economic standards is shown in Table 2.
This was demonstrated during the last financial crisis, which showed that the standard approaches recommended by the Basel Committee on Banking Supervision do not reflect the real size of banks' total risk and all the elements of this risk. That is why it is necessary to improve the assessment of banks' riskiness not only at the micro level (the level of the bank), but also at the macro level, i.e., the level of the regulator. All methods of risk management are based on elements of probability theory and mathematical statistics, which provide effective instruments for measuring and assessing risks. Unfortunately, in practice, the efficient elementary methods of descriptive statistics, which are effective for assessing and identifying risks at all levels of banking activity, are not used to their full extent.

Tables 3 and 4 present the calculations of variances of returns/losses based on NBU data for groups of banks during 2009-2013 (during this period, the NBU divided banks into four groups) and 2015-2017 (during this period, the NBU defines banks with a public share as group 1, banks of foreign bank groups as group 2, and banks with private capital as group 3). Together, these indicators can be used for assessing the riskiness of bank groups' activity, namely assessing the probability of obtaining returns of a certain level (similar to the VaR approach).

The coefficient of variation shows the level of risk per unit of average return/loss in a given group of banks. So, it can be stated that in 2009, in the first group of banks, one hryvnia of loss generated nearly 15% of risk. Moreover, one can see that in 2013 risk increased in all groups of banks, as it did in 2017, which can be explained by the financial and political crisis that escalated dramatically towards the end of 2013 and the armed conflict in the East of Ukraine. Of interest is the fact that, from the calculations, the decrease of losses of the first group of banks during 2009-2011 led to an increase of the level of risk per hryvnia of loss from 15% to 75%. A similar situation can be seen in 2015-2017: the significant decrease of losses in the second group of banks, i.e., banks with foreign capital, led to a significant increase of risk per hryvnia of loss from 25% to 272%. Earning one hryvnia of return in the first group of banks during 2012 generated 45% of risk, and in the third group of banks during 2017, 51%; that is, the risks of these groups, notwithstanding their profitability, are significant. But the biggest risk is observed in the third group of banks in 2011 and 2013, in which one hryvnia of return generated, respectively, 418% and 300% of risk. From these calculations, it can be stated that the level of risk management in Ukrainian banks is absent or quite low.

In the context of assessing and measuring risks, the coefficient of asymmetry is interpreted as follows: when it is positive, high returns are more probable (the right tail of the distribution in the histograms); correspondingly, when it is negative, losses are more probable. Thus, from the calculated data, it can be stated that returns are characteristic only of the third group of banks in 2012 and the fourth group of banks in 2013 (see Figures 3 and 4). In the context of our study, the indicator of kurtosis (see Tables 3 and 4) is proposed to be used as follows: the larger the indicator of kurtosis, the less risky the group of banks.
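To make the proposed descriptive-statistics toolkit concrete, the sketch below computes the indicators discussed above (variance, standard deviation, coefficient of variation, asymmetry and kurtosis) for a hypothetical series of group returns. This is a minimal sketch assuming NumPy and SciPy; the figures are illustrative only, not NBU data.

```python
import numpy as np
from scipy.stats import skew, kurtosis

# Hypothetical yearly returns/losses (UAH million) for one group of banks;
# illustrative figures only, not taken from Tables 3 and 4.
returns = np.array([120.0, -80.0, 45.0, 210.0, -15.0])

mean = returns.mean()
variance = returns.var(ddof=1)       # sample variance of returns/losses
std = returns.std(ddof=1)            # volatility of the group's returns
cv = std / abs(mean)                 # risk per unit of average return/loss
asymmetry = skew(returns)            # > 0: high returns more probable
kurt = kurtosis(returns)             # larger value => less risky group

print(f"CV = {cv:.0%}, skewness = {asymmetry:.2f}, kurtosis = {kurt:.2f}")
```

Comparing these values across the NBU groups of banks over time is exactly the comparison the proposed approach relies on.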
The indicator of kurtosis can be used as a supplementary measure in situations where the indicator of asymmetry is the same across groups of banks. From the performed study, it can be observed that the largest indicator of kurtosis belongs to the fourth group of banks, which during 2009-2012 increased almost fourfold to 108.6, the largest value among all groups of banks during the years under study. Besides, this indicator decreased sharply during 2015-2017, which indicates that the riskiness of all groups of banks increased; this decrease can be explained by the same crisis. According to the calculated data, the first group of banks is considered the riskiest, as in this group the indicators of kurtosis are the smallest.

Standard deviation indicates the range of return volatility in a group of banks: the smaller the standard deviation, the lower the level of riskiness of income-generating activity in the group. According to the calculated standard deviations of all groups of banks during the years of study, the lowest level of riskiness of income-generating activity is characteristic of the third group of banks, as during 2015-2017 its standard deviation was the smallest. The level of riskiness of income-generating activity is highest in the first group of banks, for which the calculated standard deviation is the largest during the years of study.

Thus, the proposed array of indicators can be used for assessing the riskiness of the activity of groups of banks, namely for assessing the probability of obtaining returns of a certain level. During the years of study, losses are probable in the majority of groups of banks, which indicates that risks are high and risk management is quite weak or absent. So, the important problem of risk management functioning in Ukrainian banks' activity is the low quality of bank risk management. The essence of the proposed approach lies in comparing the values of the abovementioned descriptive characteristics over time across groups of banks (according to the NBU classification), which makes it possible to assess the riskiness of banks' activity and the quality of their risk management.

Such an approach can be used for monitoring the riskiness of banks' activity at the macro level by regulators for making corresponding decisions and, in particular, by representatives of banking supervision, who "should define how some existing or potential problems, which the bank or bank system face, affect the nature and level of risks in this bank. According to the assessment results, the supervisors make plans and define the supervision actions. The supervision based on the assessment of risks is the deepened continuation of supervision function, which is based on risk and is already used by National Bank for some time…" (National Bank of Ukraine, 2004).

But when trying to use risk management in their activity, Ukrainian commercial banks face the need to take into account some circumstances that make their actions more difficult, in particular: 1) in our society, risk culture is only at the stage of its formation; a clear example is distrust of the banking sector. One can say that a risk culture is present only when management knows which risks the banking institution faces; besides, all bank employees should openly discuss and understand the risks; 2) risk management infrastructure is not developed in Ukraine (i.e.,
institutes and instruments for managing bank risks); 3) the size and ratio of different types of bank risks of Ukrainian and foreign banks, and the motivation for implementing risk management in the activity of domestic banking institutions, differ significantly. The specificity of the Ukrainian economy thus consists in the size of some types of risks and the underdevelopment of instruments for protection from risks; such specific risks include unregulated ownership relations, corruption, underdeveloped financial infrastructure, etc.

Bank risk management is the main element of the bank management system, and amid the constantly increasing instability of international and domestic financial markets, its importance grows significantly. The authors found that one of the main factors of effective functioning of bank risk management is the formation of an effective risk management process in the activity of Ukrainian banks. Problematic tendencies in organizing risk management in the activity of national banks were identified, and the authors systematized the problems that hinder the development of bank risk management in Ukraine. It was found that the main problem of risk management functioning in national banks' activity is the low quality of bank risk management. That is why, with the aim of improving the activity of banking institutions, the authors offer a methodology for assessing banks' riskiness at the macro level, i.e., at the level of the regulator, with the help of methods of descriptive statistics. The advantages of using these indicators as an alternative to the VaR approach are the simplicity of calculations, the availability of data, promptness of use and simplicity of interpretation. Furthermore, this approach made it possible to identify the level of riskiness of groups of Ukrainian banks and, correspondingly, the level of quality of bank risk management in these groups. These calculations showed high risks and quite weak or absent risk management in the banks.
ConcreteGraph: A Data Augmentation Method Leveraging the Properties of Concept Relatedness Estimation

The concept relatedness estimation (CRE) task is to determine whether two given concepts are related. Although existing methods for the semantic textual similarity (STS) task can be easily adapted to this task, the CRE task has some unique properties that can be leveraged to augment the datasets for addressing its data scarcity problem. In this paper, we construct a graph named ConcreteGraph (Concept relatedness estimation Graph) to take advantage of the CRE properties. For the new concept pairs sampled from the ConcreteGraph, we add an additional step of filtering out the new concept pairs with low quality based on simple yet effective quality thresholding. We apply the ConcreteGraph data augmentation on three Transformer-based models to show its efficacy. A detailed ablation study of quality thresholding further shows that even a limited amount of high-quality data is more beneficial than a large quantity of un-thresholded data. This paper is the first one to work on the WORD dataset, and the proposed ConcreteGraph can boost the accuracy of the Transformers by more than 2%. All three Transformers, with the help of ConcreteGraph, can outperform the current state-of-the-art method, Concept Interaction Graph (CIG), on the CNSE and CNSS datasets.

Introduction

Concept relatedness estimation (CRE) is the task of determining whether two concepts are related. A Wikipedia entry, a news article, or a mathematical definition can all be considered a concept. Table 1 shows a pair of related concepts and an unrelated concept. In this example, when given the first two concepts, "Open-source software" and "GNU General Public License", one should label them as a related pair; but if "Open-source software" and "Landscape architecture" were given, one should mark them unrelated.

Table 1: A pair of related concepts and an unrelated concept.
Related: "Open-source software (OSS) is computer software that is released under a license in which the copyright holder grants users the rights to use, study, change, and distribute the software and its source code to anyone and for any purpose · · ·" / "The GNU General Public License (GNU GPL or simply GPL) is a series of widely used free software licenses that guarantee end users the freedom to run, study, share, and modify the software · · ·"
Unrelated: "Landscape architecture is the design of outdoor areas, landmarks, and structures to achieve environmental, social-behavioural, or aesthetic outcomes · · ·"

CRE plays an important role in a wide range of applications, such as information retrieval (Busch et al., 2012; Teevan et al., 2011), document clustering (Aswani Kumar and Srinivas, 2010), plagiarism detection (Muangprathub et al., 2021), etc. In recent years, the number of concepts has been growing rapidly, and it has become infeasible to assess the relatedness of every concept pair manually. Therefore, automated concept relatedness estimation has been attracting much interest.

In traditional settings, the concept similarity matching (CSM) task is closely related to CRE and often considered a formal concept analysis (FCA) task, where a concept is formally defined as a pair of sets: a set of objects and a set of attributes in a given domain (Formica, 2006). But such a definition of a concept becomes less suitable for today's CRE problems, because structured concepts are scarce while unstructured text documents of concepts are ubiquitous.
With the recent introduction of the more difficult CRE task, the definition of a concept is generalized to any text document that describes a concept. The challenge of CRE lies in the unstructured long concept documents and the limited amount of training data. Because of the structure of a concept in FCA, the methods for CSM were restricted to using ontology, Tversky's ratio, rough sets (Formica, 2006, 2008; Lombardi and Sartori, 2006; Wang and Liu, 2008), etc. However, most of today's concepts are written in natural language without any explicit mathematical structure, so traditional CSM methods become less suitable for such types of data.

With the popularity of deep neural networks (DNNs) (LeCun et al., 2015), many DNN models for NLP are now capable of processing unstructured text inputs. Recurrent Neural Networks (RNNs) (Schuster and Paliwal, 1997) were built upon recurrent units, like LSTM (Hochreiter and Schmidhuber, 1997) or GRU (Cho et al., 2014), but they often suffer from the vanishing gradient problem (Hochreiter, 1998). This problem was recently addressed by Transformers (Vaswani et al., 2017). Therefore, in this paper, we use Transformers as our backbone models, such as BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019b) and XLNet (Yang et al., 2019).

To address the data scarcity issue, we also propose a novel data augmentation method called ConcreteGraph (Concept relatedness estimation Graph). Most NLP data augmentation methods focus on sentence-level strategies, including paraphrasing, noising and sampling (Li et al., 2022). For example, one can augment a sentence by replacing words with their synonyms, deleting certain parts of a sentence, and inserting a short phrase. However, those typical NLP data augmentation methods do not take advantage of the unique properties that CRE has. Therefore, we build the ConcreteGraph utilizing three types of CRE properties: reflexivity, commutativity and transitivity. From this graph, new relationships can be discovered between any sampled concept pairs. The original relationships provided by a CRE dataset describe the immediate neighborhood, but ConcreteGraph enables us to obtain new related concept pairs from the multi-hop neighborhood and new unrelated concept pairs from different graph components. For instance, in the Table 1 example, if it is additionally given that "Open-source software" is related to "Creative Commons license", then we can find a new two-hop relationship between "GNU General Public License" and "Creative Commons license", which did not exist in the set of relationships provided.

Despite the theoretical potential of ConcreteGraph, the new relationships vary in quality. Namely, some paths between two sampled ConcreteGraph nodes may have low quality because some edges have low relatedness scores or the path lengths are too long. Therefore, we need to filter out such concept pairs in practice during the data augmentation process. We introduce two simple yet effective quality thresholds to eliminate harmful concept pairs and keep only the high-quality ones. The main contributions of our paper are as follows:

• We propose a novel model-independent data augmentation method based on ConcreteGraph. The data augmentation method leverages the unique properties of the concept relatedness estimation (CRE) task.
Detailed experiments demonstrate its effectiveness for multiple datasets in two different languages, English and Chinese, and across three Transformer models, BERT, RoBERTa and XLNet;

• This paper is the first to work on the WORD dataset (Ein-Dor et al., 2018), which contains English Wikipedia concepts. Using three Transformer models along with our data augmentation method, this paper sets a strong baseline for future work on this dataset;

• Our method also achieves considerable improvement over the state-of-the-art model, Concept Interaction Graph (CIG) (Liu et al., 2019a), on two datasets of Chinese news articles, the CNSE dataset and the CNSS dataset.

2 Related Work

CRE Task

Although concept relatedness estimation is a relatively new task initiated by the Wikipedia Oriented Relatedness Dataset (WORD) (Ein-Dor et al., 2018), similar tasks have existed for a long time. Originally, the concept similarity matching task was introduced for the concepts in formal concept analysis (FCA). In FCA, a concept is formally defined as a pair of sets: a set of objects and a set of attributes in a given domain (Formica, 2006). Methods for assessing concept similarity include ontology-based methods (Formica, 2006, 2008), Tversky's-ratio-based methods (Lombardi and Sartori, 2006), rough-set-based methods (Wang and Liu, 2008), and semantic-distance-based methods (Ge and Qiu, 2008; Li and Xia, 2011). The FCA definition of a concept has become less useful in recent applications because more concepts are described in plain text. Therefore, in the CRE task, the definition of a concept is generalized to any text document that narrates a concept.

Figure 1: The overview of our ConcreteGraph data augmentation method. Each node represents a concept; solid-line edges correspond to related concept pairs and dashed-line edges denote unrelated concept pairs. The green node A is the source node from which we find shortest paths to other nodes using Dijkstra's algorithm. Target nodes are highlighted in yellow. In this example, we use the minimum path score and the score threshold is 0.7. Therefore, the path between A and B is filtered out because the edge score between B and G is 0.5 < 0.7. The maximum path length is set to 2; thus, the path of length 3 between A and D is removed. The two paths, A-G and A-C, satisfy the quality thresholds and are treated as two new related concept pairs. There are no paths between A and E, or between A and F, so they are considered two new unrelated concept pairs.

Traditional methods for text input are term-based text matching methods based on TF-IDF (Ramos et al., 2003), BM25 (Robertson, 2009) or LDA (Blei et al., 2003). As deep neural networks became popular, many deep learning text matching methods (Hu et al., 2014; Qiu and Huang, 2015) were introduced, but they mainly focus on matching short text, i.e., sentence pairs. Liu et al. (2019a) introduced Concept Interaction Graph (CIG) for matching news article pairs along with two new article-matching datasets, CNSE and CNSS; it is the first method that addresses the problem of matching long Chinese news articles. Recurrent neural networks, such as ELMo (Peters et al., 2018), were prevalent before the introduction of Transformer models (Vaswani et al., 2017), but they are relatively slow and cannot handle long dependencies between distant tokens very well. Those methods are outperformed by Transformers, which use the multi-head self-attention mechanism.
Popular Transformer variants include BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019b) and XLNet (Yang et al., 2019). CRE is related to tasks such as semantic textual similarity (Cer et al., 2017), text similarity (Thijs, 2019), text relatedness (Tsatsaronis et al., 2014), and text matching (Jiang et al., 2019). But it differs from those tasks because two related concepts may contain completely opposite semantic meanings. The current state-of-the-art model for semantic textual similarity is XLNet (Yang et al., 2019). However, there is still no prior work on the WORD dataset.

Data Augmentation

Data augmentation is to generate synthetic data based on the original data so that the training set is larger. It is useful when the amount of data is limited or the model is overfitting; thus, augmenting data usually makes the trained model generalize better. Data augmentation was first commonly used in computer vision tasks (Shorten and Khoshgoftaar, 2019). Multiple data augmentation methods were then introduced for NLP tasks. An example for text classification is EDA (Wei and Zou, 2019), which involves synonym replacement, random insertion, random swap, and random deletion. There is no existing data augmentation method for CRE.

ConcreteGraph is closely associated with knowledge graphs, but they are not identical (Ji et al., 2022). In a typical knowledge graph, each node represents a named entity that is made up of a short phrase or even a single noun; a node in ConcreteGraph, by contrast, corresponds to a concept, which can contain multiple sentences. In addition, a knowledge graph edge usually has a type attribute but a ConcreteGraph edge does not.

CRE Properties

The concept relatedness estimation (CRE) task is to predict whether two given concepts are related or unrelated. Thus, it is a binary classification task with two labels, "related" and "unrelated". In this paper, we focus on concepts that are in the form of long documents. The CRE task exhibits some unique properties that are rarely present in other typical NLP tasks. To state these properties formally, we assume that there are 3 concepts A, B and C. The similarity symbol "∼" is used to represent that two concepts are "related", while the dissimilarity symbol "≁" is used to connect two unrelated concepts.

Property 1 (Reflexivity of Relatedness). Every concept is related to itself: A ∼ A

Property 2 (Commutativity of Relatedness). If A and B are related, then B and A are related: A ∼ B ⇒ B ∼ A

Property 3 (Commutativity of Unrelatedness). If A and B are unrelated, then B and A are unrelated: A ≁ B ⇒ B ≁ A

Property 4 (Transitivity of Relatedness). If A is related to B and B is related to C, then A and C are related: A ∼ B, B ∼ C ⇒ A ∼ C

Property 5 (Transitivity of Unrelatedness). If A is related to B but B is unrelated to C, then A and C are unrelated: A ∼ B, B ≁ C ⇒ A ≁ C

Property 6. If A is unrelated to B and B is unrelated to C, no conclusion can be drawn about the relatedness between A and C.

Strictly speaking, property 6 means that we cannot determine whether A and C are related or not, given only that A ≁ B and B ≁ C. Despite that, in practice we can still be fairly confident that A and C are unrelated if we know enough about which concepts are related to A or C. Namely, if we also know that many other concepts, D, E, F, . . . , are related to A but none of them are C, we can still be relatively confident that A and C are unrelated.
For example, in Table 1, although it is not explicitly stated in the WORD dataset that "Landscape architecture" is unrelated to "Open-source software", since we know the neighborhood of "Open-source software" well (Figure 2), we can safely conclude that the two concepts are likely to be unrelated. Therefore, this property can be relaxed and used to produce more augmented data. We do not use property 1, as it is trivial and only yields related concept pairs, which can cause imbalance in the augmented dataset; it is listed only for completeness.

Figure 2: The neighborhood of the concept "Open-source software" in the extracted ConcreteGraph.

ConcreteGraph Data Augmentation

ConcreteGraph

To make use of the CRE properties in practice, we can build a graph to piece together the pairwise relationships from the dataset and then sample new concept pairs from this graph. We name it "ConcreteGraph" (concept relatedness graph). An overview of the steps in the data augmentation method is shown in Figure 1. The structure of the ConcreteGraph is easy to understand: one can simply treat the concepts as the nodes and all related concept pairs as the edges.

The annotation of raw relatedness in the WORD dataset is a decimal score ranging from 0 to 1, which is the average of the binary answers of multiple annotators. This relatedness score is therefore higher when more annotators agree on the relatedness of the concept pair. However, this score cannot be directly used in a shortest-path algorithm: a high relatedness score should mean a short distance. To obtain a suitable distance measure, we introduce three mappings (linear, reciprocal, and quadratic; Eq (1)) from the relatedness score to the distance, where d(A, B) ≥ 0 is the distance between concept A and concept B and s(A, B) > 0 is their relatedness score. As desired, all three kinds of distances decrease when the relatedness score increases. When s(A, B) = 0, A and B are unrelated; we simply do not add an edge between these two concepts, and thus the distance between them is implicitly set to +∞.

Algorithm 1 ConcreteGraph Sampling
  ...
  if path == NULL then
      return (A, B), "unrelated"
  else
      if quality(path) > T then
          return (A, B), "related"
      else
          return NULL, NULL
      end if
  end if

Sampling

After building the ConcreteGraph, we can sample new concept pairs that did not exist in the original dataset. The high-level idea is to pick two random concept nodes and check whether there is a path between them. If so, we assess the quality of their relationship according to several criteria; otherwise, they are a pair of unrelated concepts. By doing so, we take advantage of the commutativity properties 2 and 3 and the transitivity properties 4, 5 and 6. Commutativity is used if we sample two nodes already provided by the dataset but in a different order, for example, if (A, B) is provided and we sample (B, A). Although the ConcreteGraph is not a directed graph, commutativity is still useful in data augmentation, because Transformers are aware of permutations: (A, B) and (B, A) are different inputs for Transformers. Transitivity is used when the path between the two sampled concepts has at least two edges. For example, if the path is A → B → C, then the new concept pair (A, C) is justified by the transitivity properties. The algorithm is expressed more formally in the pseudocode of Algorithm 1.
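As a concrete illustration of this sampling procedure, the sketch below implements it on top of networkx. It is a minimal sketch under stated assumptions: the helper names (build_concretegraph, sample_pair) are illustrative rather than the authors' code, the reciprocal mapping is assumed to take the form d = 1/s since the exact Eq (1) forms are not reproduced here, and the quality thresholds follow the Figure 1 example (minimum edge score 0.7, maximum path length 2).

```python
import random
import networkx as nx

def build_concretegraph(pairs):
    """pairs: iterable of (concept_a, concept_b, relatedness_score)."""
    g = nx.Graph()
    for a, b, s in pairs:
        if s > 0:  # unrelated pairs (s == 0) get no edge, i.e., distance +inf
            g.add_edge(a, b, score=s, dist=1.0 / s)  # assumed reciprocal mapping
    return g

def sample_pair(g, known, min_score=0.7, max_len=2):
    a, b = random.sample(list(g.nodes), 2)
    if (a, b) in known or (b, a) in known:
        return None                          # already in the dataset, resample
    if not nx.has_path(g, a, b):
        return (a, b), "unrelated"           # different components (properties 5, 6)
    path = nx.dijkstra_path(g, a, b, weight="dist")
    scores = [g.edges[u, v]["score"] for u, v in zip(path, path[1:])]
    # Quality thresholds: minimum edge score and maximum path length.
    if min(scores) > min_score and len(path) - 1 <= max_len:
        return (a, b), "related"             # multi-hop pair (property 4)
    return None                              # low-quality path, discard

pairs = [("A", "G", 0.9), ("B", "G", 0.5), ("A", "C", 0.8)]
g = build_concretegraph(pairs)
print(sample_pair(g, known={(a, b) for a, b, _ in pairs}))
```

With these toy edges, (C, G) passes the thresholds (minimum edge score 0.8 over two hops), while any pair routed through the 0.5 edge between B and G is discarded, mirroring the filtering in Figure 1.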
The dataset D is a list of concept pairs, and each concept pair is a two-element tuple. To check whether there is a path between two concepts, we use Dijkstra's algorithm dijkstra(·, ·, ·) with the distance d(·, ·) in Eq (1) as the edge weight. The quality(·) function maps the path between the sampled concept pair to a set of scalars that are quality measures of the path; the thresholds variable T is a set of values for ensuring the quality of the relationship between the sampled concept pair. In this paper, we developed two simple yet effective quality measures: path length and path score.

Path Length

The path length of a path is its number of edges; the edge weights along the path are not taken into account. When the path length is too long, the connection between the concept pair becomes "risky": when there are many edges on the path, the probability of the existence of a "bad" edge is high, which degrades the quality of the path. Thus, we prefer paths that are not too long.

Path Score

The path score is an aggregation of all relatedness scores of the edges. A high path score should be assigned to a high-quality path and vice versa. We developed three methods to calculate the path score: mean, minimum, and product. "Mean" is the average of all edge scores; "minimum" is the minimum edge score along the path; "product" is the product of all edge scores.

In Algorithm 1, after we successfully sample a new concept pair, we also need to check whether it has been found before or is already in the dataset D. The sampling is unsuccessful if the algorithm returns NULL. We run the ConcreteGraph sampling algorithm until the sampling success rate drops to near zero, which gives us a ∼10% data augmentation ratio in practice (without considering commutativity). If we also add (B, A) as a new concept pair whenever we successfully sample (A, B), the augmentation ratio is ∼20%. The reason why we have multiple ways to calculate the distance and the path score is that we apply our ConcreteGraph data augmentation to three Transformer models, and these models work best with different distance and path score measures, as we show in our experiments.

Transformers for CRE

To test the effectiveness of our ConcreteGraph data augmentation method, we finetune three Transformer models on our augmented dataset: BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019b) and XLNet (Yang et al., 2019). Their respective configuration details can be found in Appendix Tables 7, 8 and 9. XLNet is the current state-of-the-art model for the STS task. As in BERT, we use special tokens, [CLS] and [SEP], to accommodate the two concept documents. The maximum sequence length of all three models is 512 because we experiment with their base configurations. To deal with long documents, we use the following strategy to create input sequences: if both documents are longer than 255 tokens, we keep the first 255 tokens of each; if one of the documents is shorter than 255 tokens but the whole sequence of the two concept documents is still longer than 512 tokens, we truncate the longer document. For example, the input sequence could look like [[CLS], a_{1:300}, [SEP], b_{1:210}] if concept A is 450 tokens long and concept B is 210 tokens long. In this case, we keep only the first 300 tokens of concept A, as it is the longer one. The resulting sequence has 300 + 210 + 2 = 512 tokens, where "2" corresponds to the two special tokens.
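The truncation strategy is easy to misread in prose, so the sketch below spells it out. It is a minimal sketch whose function name (build_input) and placeholder "[CLS]"/"[SEP]" strings are illustrative, not the paper's code; it reproduces the worked example above (450 + 210 tokens → 300 + 210 + 2 = 512).

```python
def build_input(tokens_a, tokens_b, max_len=512):
    budget = max_len - 2                      # room for [CLS] and [SEP]
    half = budget // 2                        # 255 tokens per document
    if len(tokens_a) > half and len(tokens_b) > half:
        tokens_a, tokens_b = tokens_a[:half], tokens_b[:half]
    elif len(tokens_a) + len(tokens_b) > budget:
        if len(tokens_a) > len(tokens_b):     # truncate the longer document
            tokens_a = tokens_a[:budget - len(tokens_b)]
        else:
            tokens_b = tokens_b[:budget - len(tokens_a)]
    return ["[CLS]"] + tokens_a + ["[SEP]"] + tokens_b

# The worked example from the text: 450 + 210 tokens -> 300 + 210 + 2 = 512.
seq = build_input([f"a{i}" for i in range(450)], [f"b{i}" for i in range(210)])
print(len(seq))  # 512
```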
A fully-connected layer with logistic activation takes as input the Transformer hidden state of [CLS] and produces the prediction of the relatedness probability of the two concepts. Binary cross-entropy loss is used.

Datasets

We experiment with three datasets in two languages, English and Chinese. The Wikipedia Oriented Relatedness Dataset (WORD) (Ein-Dor et al., 2018) was recently developed to focus on English concepts from Wikipedia; it is made up of 19,176 pairs of concepts. The Chinese News Same Event (CNSE) dataset and the Chinese News Same Story (CNSS) dataset were introduced together by Liu et al. (2019a); both contain news articles from the major internet news providers in China. The statistics of the three datasets can be seen in Appendix Table 10.

Implementation Details

We set the learning rate of the Transformer blocks at 1 × 10⁻⁵ and the learning rate of the final fully-connected layer at 1 × 10⁻³. We used the official dataset split for WORD, whose train-test ratio is approximately 2:1. Since CNSE and CNSS do not provide an official dataset split, they are split randomly with a train-dev-test ratio of 7:2:1. These dataset splits are fixed throughout the experiments for all models. The models are trained for at most 5 epochs, depending on the model and the dataset, and the last checkpoint is used for evaluation.

Performance Comparison on the WORD dataset

The results of the performance comparison are summarized in Table 2. For the WORD dataset, we include two traditional algorithms, BM25 (Robertson, 2009) and LDA (Blei et al., 2003). In the BM25-based relatedness estimation algorithm (Appendix: Algorithm 2), we use BM25 to query the source concept in the test set and check whether the target concept is a match; if the concept pair is a match, it is a related concept pair. In the LDA-based relatedness estimation algorithm (Appendix: Algorithm 3), we first train the LDA model to learn what topics exist in the training set; we then obtain the topic distributions of concept pairs in the test set with the trained LDA model and calculate the cosine similarity between the topic distributions, which is the concept relatedness estimate.

We developed a baseline based on ELMo (Peters et al., 2018) and a graph convolutional network (GCN) (Fey and Lenssen, 2019), because CIG (Liu et al., 2019a) is also a GCN but it only works on CNSE and CNSS. We build one graph for each concept document, where each node corresponds to a sentence embedding from ELMo, and an edge links two nodes if they are consecutive sentences (next to each other in the original concept document) or similar sentences based on the Agglomerative Clustering algorithm provided by Scikit-learn (Pedregosa et al., 2011).

To illustrate the benefit of ConcreteGraph data augmentation, we train BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019b), and XLNet (Yang et al., 2019) on the original dataset and our augmented dataset ("w/ ConcreteGraph"). The three Transformer models significantly outperform the non-deep-learning baselines (LDA, BM25), which is expected, as LDA and BM25 were initially designed for document retrieval. GCN improves over LDA and BM25 in accuracy and F1 score mainly because of the pretrained ELMo model. Our ConcreteGraph data augmentation can further improve the Transformer models by ∼2.5% in accuracy and ∼1-5% in F1.
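For completeness, here is a minimal PyTorch sketch of the classification head described at the start of this section: a single fully-connected layer with logistic (sigmoid) activation over the [CLS] hidden state, trained with binary cross-entropy. The class name and the 768-dimensional hidden size (the base configuration of all three backbones) are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn

class RelatednessHead(nn.Module):
    # Maps the Transformer's [CLS] hidden state to a relatedness probability.
    def __init__(self, hidden_size: int = 768):
        super().__init__()
        self.fc = nn.Linear(hidden_size, 1)

    def forward(self, cls_hidden: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.fc(cls_hidden)).squeeze(-1)

head = RelatednessHead()
cls_hidden = torch.randn(4, 768)            # a batch of 4 [CLS] states
labels = torch.tensor([1., 0., 1., 1.])     # 1 = related, 0 = unrelated
loss = nn.BCELoss()(head(cls_hidden), labels)
```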
Performance Comparison on the CNSE dataset and the CNSS dataset

For the CNSE dataset and the CNSS dataset, we also use BM25 (Robertson, 2009) and LDA (Blei et al., 2003) as two representative baselines among traditional methods. BERT, RoBERTa, and XLNet are also finetuned on the two datasets and their augmented versions. The Concept Interaction Graph (CIG) model (Liu et al., 2019a) is the current state-of-the-art model. It is based on extracting a concept graph for each article; the concepts in those concept graphs are different from the CRE concepts in this paper, as they are the concepts within an article, similar to named entities. One major problem with this model is that there is a limit to the size of the concept graphs, i.e., the number of concepts in a graph. If the concept graph exceeds the limit, the model simply discards the article pair, and their performance measurements excluded those excessively large graphs. By doing so, they are in effect working with easier subsets of the original datasets, which causes inaccurate measurements of CIG's performance. We corrected the accuracy and the F1 score by counting the skipped pairs as wrong predictions for accuracy and as false negatives for the F1 score (equivalent to treating them as false positives because of the property of the harmonic mean). CIG is trained with its default training settings. The results are also collected in Table 2, and we can see that our Transformer models finetuned on the augmented datasets achieve better accuracy and F1 scores on both the CNSE dataset and the CNSS dataset, outperforming all existing baselines. On both datasets, the accuracy is improved by ∼2% on average and the F1 score is improved by ∼1-5%.

Ablation Study of Data Augmentation

We divided our data augmentation into two main parts: data augmentation using the commutativity properties 2 and 3, and data augmentation based on the transitivity properties 4 and 5 and the relaxed version of property 6. We trained our model in four different settings to study the effect of the two independent data augmentation methods: no data augmentation ("No Aug"), only commutativity data augmentation ("Comm"), only transitivity data augmentation ("Trans"), and commutativity + transitivity data augmentation ("Both"). The ablation study on the WORD dataset for each Transformer is included in Table 3. We compare their performance using accuracy, F1, and area under the curve (AUC).

We can observe that more performance gain is brought by transitivity. Although commutativity doubles the size of the dataset while transitivity only augments the dataset by ∼5-10%, commutativity does not provide new concept pairs and thus cannot improve the performance by much. Commutativity is mainly helpful to compensate for the fact that Transformers are aware of the permutations of the two input documents; that is, (A, B) and (B, A) are different inputs to Transformers. In theory, Transformers should be able to implicitly learn the same set of new concept pairs as provided by transitivity. But in practice, this is hard to achieve, as the structure of the ConcreteGraph is not easy to learn. For example, the biggest component (connected subgraph) in WORD's ConcreteGraph has 4,301 nodes, and we can sample up to 9,247,150 concept pairs from it. Such an amount of potential new concept pairs cannot be perfectly captured by Transformers implicitly. A more detailed ablation study of BERT based on additional metrics (precision, recall and specificity) is shown in Table 4.
We can see that the ConcreteGraph data augmentation sacrifices precision and specificity for better recall. By the definition of those metrics (Appendix: Eq (2)), there are more false positives but far fewer false negatives when we use data augmentation; that is, the Transformers are more lenient in making positive predictions. Such a trade-off is worthwhile because, for instance, the harmonic mean of precision and recall (F1) of "Both" becomes higher than that of "No Aug".

Effect of Path Quality Functions

We use quality functions to measure the quality of the relationships between sampled concept pairs. A detailed ablation study for each component in our ConcreteGraph data augmentation algorithm is included in Table 5. The highlight colors show the changed component; the other components in those rows remain the same. For example, only the score threshold (in blue) is changed in the first 5 rows. The distance mapping affects which path Dijkstra's algorithm chooses given two sampled concept nodes (highlighted in yellow), which, in turn, can influence the path score. We experiment with three approaches to calculate the path score (highlighted in orange). Once we obtain the path score, we use a score threshold to filter out low-quality paths (highlighted in blue). We also filter the paths based on their path length, which is the number of edges on the path regardless of the edge weights or the edge scores (highlighted in green).

According to the experiment results, the reciprocal distance mapping outperforms the other two mappings. One unique property of the reciprocal distance mapping is that when the edge score approaches 0, the distance approaches infinity; therefore, it penalizes low-score edges much more than the other two mappings. The quadratic distance mapping is limited to the range from 0 to 1, the same as the linear distance mapping, but it also penalizes edges more when the score is close to 0.

For calculating the path score, all three models perform best when the minimum edge score is used. This is reasonable because whenever there is an edge with a low relatedness score, the connectivity between the two concept nodes becomes weak. The other path score measures, product and mean, might ignore a low-score edge if the other edges on the path all have high scores. When the minimum edge score is combined with a score threshold, we are able to remove weak relationships and keep only the high-quality paths.

The maximum length is a quality measure independent of the score threshold. It is simply the number of edges on the path. By limiting the maximum length, we eliminate long dependencies, which are more "risky" than short paths; that is, it is more likely for a path to contain a low-score edge when the path is long.

Table 6 shows the performance when no quality function is used ("Without Quality Thresholds"). When we run a tenfold data augmentation, the performance in fact decreases significantly, which indicates that not every new concept pair is beneficial to the performance. If we augment the dataset by the same amount as in the "With Quality Thresholds" setting (3rd row and 4th row), the performance is not as bad. But by including higher-quality concept pairs, we can boost the performance even further (5th row and 6th row).

Conclusion

Concept relatedness estimation is a recently introduced task that has a wide range of applications. Many typical NLP data augmentation methods can be applied to CRE, but the unique properties of CRE are underexplored.
ConcreteGraph takes advantage of such CRE properties and can boost the performance of Transformers even further.

A Model Configurations

Model specifications for the Transformer models.

D Pseudo-code for the Algorithms Appearing in this Paper

The pseudo-code for the baselines, LDA and BM25.

Algorithm 2 BM25-based relatedness estimation
Input: testPairs {(A_1, B_1, score_1), (A_2, B_2, score_2), (A_3, B_3, score_3), . . . , (A_n, B_n, score_n)}, threshold T
Output: testAccuracy
  score = {}
  Documents = [A_1, B_1, A_2, B_2, . . . , A_n, B_n]
  testNum = 0
  predictTrue = 0
  for every pair (A_i, B_i, score_i) in testPairs do
  ...

$$\mathrm{BM25}(d, q) = \sum_i \log\frac{N - n(q_i) + 0.5}{n(q_i) + 0.5} \cdot \frac{(k_1 + 1)\, tf(q_i, d)}{K + tf(q_i, d)} \cdot \frac{(k_2 + 1)\, tf(q_i, q)}{k_2 + tf(q_i, q)}, \qquad K = k_1\left(1 - b + b\,\frac{L_d}{L_{avg}}\right)$$

where N is the total number of Documents, q_i is the i-th token in document q, n(q_i) is the number of documents containing token q_i, k_1, k_2 and b are three parameters, tf(a, b) is the frequency of token a in b, L_d is the length of document d, and L_avg is the average length of all documents.
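The following is a minimal from-scratch Python sketch of the BM25 scoring function matching the formula above. The parameter defaults (k1 = 1.2, k2 = 100, b = 0.75) are common Okapi choices, not values taken from the paper, and the function name is illustrative.

```python
import math
from collections import Counter

def bm25(query, doc, docs, k1=1.2, k2=100.0, b=0.75):
    """Score `doc` against `query`; `docs` is the full token-list collection."""
    N = len(docs)
    L_avg = sum(len(d) for d in docs) / N
    tf_doc, tf_q = Counter(doc), Counter(query)
    K = k1 * (1 - b + b * len(doc) / L_avg)   # length-normalized saturation
    score = 0.0
    for q in set(query):
        n_q = sum(1 for d in docs if q in d)  # document frequency of token q
        idf = math.log((N - n_q + 0.5) / (n_q + 0.5))
        score += (idf
                  * (k1 + 1) * tf_doc[q] / (K + tf_doc[q])
                  * (k2 + 1) * tf_q[q] / (k2 + tf_q[q]))
    return score

docs = [["open", "source", "software"], ["landscape", "architecture"]]
print(bm25(["open", "source"], docs[0], docs))
```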
Environmental Accounting and Corporate Sustainability: A Research Synthesis

The paper examined the extent of environmental accounting and its relationship with corporate sustainability, with an ardent focus on controversies, contradictions, gaps and relationships with previous explorations. The researchers adopted an assiduous literature review approach covering the diverse perspectives of investigators and scholars, probing into their conceptualizations, empiricism and theoretical underpinnings across industrialized nations, emerging economies and least developed countries. The analysis provides a comprehensive overview of recent studies on multiple dimensions of environmental accounting and their interrelationship with corporate sustainability. We observed a broad gap on the issues of profitability, financial leverage, industry type and the social and moral responsibility of environmental accounting. Additionally, we discovered that the relationship between environmental accounting and corporate sustainability has not been effectively established in emerging economies. Premised on these intriguing findings, it was recommended that standard setters should yield to the incessant calls and proposals canvassed by renowned scholars for directives bordering on the establishment of International Financial Reporting Standards (IFRS) on environmental accounting (EA) in order to enhance corporate sustainability (CS).

Introduction

The paradigm shift of economic activities from agriculture to manufacturing, occasioned by the industrial revolution of the late 18th century, has brought about the increasing use of natural resources and continuous emissions of greenhouse gases by industries around the world. It has inevitably brought the nexus between environmental accounting (EA) and corporate sustainability (CS) to the front burner of environmental management and green accounting discourses and has raised overwhelming calls for stronger exploration in contemporary times. The industrial revolutions led to tremendous economic improvement for most spectrums of people in industrialized societies. Besides the ubiquitously dynamic nature of globalization, the environmental constraints imposed on corporations and the quest for appropriate practices for disclosing environmental information have necessitated bringing EA to the fore to resolve related issues. Recurring problems of environmental pollution, oil spillage, gas flaring, global warming, deforestation, depletion of natural resources, loss of biodiversity and environmental degradation have made environmental issues a fundamental and very severe problem of current human society (Amiri et al., 2014).

The issue of environmental accounting (EA) concerns both developing and developed nations; it is a vast and scintillating issue that has aroused the attention of authorities and scholars in debates and emerging strands of literature. It might be presumed that studies on environmental accounting (EA) are overarching and that the issues surrounding it have been overflogged, but this happens not to be the situation, because it is not a static event but a dynamic and ever-revolving process.
Diverse literatures have construed environmental accounting (EA) in different ways: Bartolomeo et al. (2000) view it as a superset of accounting, while Okafor (2018) assessed it as the fusion of the environmental dimension into the macro or micro level. The drive for sustainable development is redirecting the attention of organizations towards environmental sensitivity. Sustainable development, as is generally known, focuses on the creation of wealth and prosperity whilst considering the true importance of social and environmental aspects, allowing businesses and public organizations to meet triple-bottom-line mandates in sustainable management. The vibrancy of sustainability and its nomenclature has assumed giant strides in diverse fields such as environmental microbiology, environmental ecotoxicology, environmental law, water supply engineering, solid waste management, environmental economics, environmental entrepreneurship, environmental management accounting (EMA) and environmental financial accounting (EFA), bringing together scholarship and practice in a highly organized and logical manner. This approach is of immense relative importance, and there is a widespread belief that it will provide a hard-core methodological basis for establishing issues related to sustainable development, though not for solving them, because they are by nature unresolvable.

Remarkable differences could also exist between studies from developing and developed countries based on the strength of their environmental accounting (EA) and corporate sustainability (CS) domains, which could be a result of their research settings, dispositions, and the peculiarity of the environment wherein the study is carried out. Retrospectively, there has been a proliferation of substantial empirical and theoretical studies on this topical issue by erudite scholars (e.g., Hernadi & Bettini, 2012; Kilian & Hennings, 2014; Kumar, 2017; Schaltegger & Wagner, 2006) relating to the environmental accounting (EA) of corporate entities, with a dominance in developed countries, which could be at variance with studies in developing countries as a result of their legal and regulatory frameworks. In the context of developing countries, researchers such as Beredugo and Mefor (2012), Ironkwe and Success (2017), and Okoye and Ezejiofor (2013) studied environmental accounting (EA) and corporate sustainability (CS). These studies reached divergent outcomes on the link between environmental accounting (EA) and corporate sustainability (CS), appear to have been inconclusive, and failed to reach a consensus on the intrinsic relationship between the two. Their findings have negated the consensus proposition of bidirectional causality between environmental accounting (EA) and corporate sustainability (CS). Furthermore, most of these studies have concentrated on selected sub-sectors of developing-country economies, which may not be a true representation of the activities of the entire business sector.

This paper seeks to address new frontiers and fill the void in previous studies by exploring possible distinctive links in the environmental accounting (EA) and corporate sustainability (CS) literature, buttressing critical issues bordering on this contentious and controversial sub-facet of accounting. The study will also take a further step by addressing the following pertinent question: what are the factors that drive environmental accounting (EA) and corporate sustainability (CS)?
The rest of this paper is demarcated into the following sections: section 2 deals with the concepts of environmental accounting (EA) and corporate sustainability (CS); section 3 gives an insight into the theoretical framework on which this study is hinged and establishes its assertions on this contemporary issue; section 4 presents prior empirical evidence on environmental accounting (EA) and corporate sustainability (CS); and section 5, which finalizes the report, covers the conclusion and recommendations.

Concept of Environmental Accounting

The Environmental Protection Agency (1995) defined environmental accounting as the identification and measurement of the costs of environmental materials and activities and the use of this information for environmental management decisions accruable to shareholders. According to Osemene (2010), environmental accounting reports vital information on the use of natural resources and the communication and measurement of the costs of business activities and their potential impact on the environment. Howes (2002) views environmental accounting as connecting diverse subsets of accounting, such as external and internal environmental accounting, and as fusing together an organization's culture and environmental sustainability to give it a more balanced view. We submit, however, that environmental accounting is the incorporation of environmental and social issues into already established financial information, with the ultimate aim of satisfying stakeholders' aspirations.

Concept of Corporate Sustainability

Yu and Zhao (2015) posit that corporate sustainability (CS) is an all-encompassing notion that inculcates environmental responsibility, economic viability and social responsibility. Montiel (2008) reports it as being all-inclusive, embracing economic, social and environmental contexts. Fifka and Drabble (2012) view it as the evolving and dynamic reporting practices of businesses over a course of time. From the submission of this present study, we therefore document it as the structure and activity that absorbs economic, social and environmental dimensions over time.

Stakeholder Theory

Enormous amounts of verifiable studies and anecdotal evidence have demonstrated the validity of stakeholder theory in analyses of the relationship between environmental accounting (EA) and corporate sustainability (CS) and are visible testaments to the fact that stakeholder theory only exists in the context of a firm; this also corroborates the view of other academic discourses that are firmly rooted and championed along this course. Some of the positions and assertions canvassed by multiple authorities, advocates and reputed scholars in the field of green accounting will be fully established in order to anchor the study and the debates on this pertinent issue. The pioneer philosopher of stakeholder theory is Freeman (1984). Stakeholder theory was founded on the premise that corporations are an integral part of a social system, with a pivotal focus revolving around the various stakeholder groups drawn from the ranks of society.
Vast amounts of theoretical and empirical work have demonstrated the validity of stakeholder theory in analyses of the relationships between a company and its employees, customers, suppliers, financiers, communities, governmental bodies, political groups and trade associations, owing to their impact on companies, and have affirmed that the theory holds sway due to its grounding in research on the firm (Donaldson & Preston, 1995). There have also been dissenters to these propositions, owing to conflicting and contrasting academic literature and standpoints. Sternberg (1997) refutes the claim that entities can be held accountable to a collective group of stakeholders and asserts that the imperative task of those in the managerial cadre is to reach a compromise in balancing the discords of various stakeholders, given their enormous number. Therefore, based on this theory, and drawing inspiration from the work of Robert (1992), we hinge this research on stakeholder theory due to its precedence and its place as the fulcrum of studies related to environmental accounting (EA) and corporate sustainability (CS).

Prior Empirical Evidence on Environmental Accounting (EA) and Corporate Sustainability (CS)

A wide array of exhaustive studies exploring environmental accounting (EA) and corporate sustainability (CS) in companies in both developed and developing countries have been conducted by multiple scholars, and their findings have been mixed. We therefore present a case-by-case scenario of each of these studies, indicating their objectives, the methodologies adopted, their multidimensional findings, and the core arguments raised, as portrayed in the previous studies that informed the field.

Robbins (1991) investigates the environmental constraints imposed on industrial companies' strategies within the context of the U.S.A. and the European countries, where there was an exponential increase in the disclosure of environmental information. He probed whether such disclosures were a result of environmental damages and their expected financial impact. The study deduced that environmental impacts have a significant effect on asset and business values. He equally reveals companies' violations of environmental laws and dysfunctional ways of disposing of industrial waste as the rationale for environmental pollution. This supports the proposition that findings from the American and European contexts cannot be generalized uncritically to other settings.

Schaltegger and Wagner (2006) present an empirical analysis of managing sustainability performance measurement and reporting in an integrated manner. A cursory observation demonstrates that there is a link between the sustainability balanced scorecard as a strategic information and management approach, sustainability accounting as a supporting measurement approach, and sustainability reporting for communication and reporting.

Enahoro (2009) examines the level of independence of the tracking of costs impacting the environment and the level of efficiency and appropriateness of environmental cost and disclosure reporting. Pearson's product-moment correlation test statistics and multivariate linear regression analysis were used as the statistical tools of analysis. The result shows that there is a deficiency in the costing system used for tracking externality costs.
Furthermore, environmental operating expenditures are not charged independently of other expenditures.

Figuero, Orihuela and Calfucara (2010) study the relationship between green accounting and the sustainability of Peruvian companies. It is observed that the traditional GDP measure overestimated by 51-64% the real economic income produced by the Peruvian metal mining sector during the period 1992-1996.

Cortez and Cudia (2011) explore the impact of environmental innovations on the financial performance of Japanese electronics companies, following the growing literature linking corporate social performance with profitability. The sample consists of 10 automotive and 10 electronics companies listed on the Tokyo Stock Exchange. Granger causality tests were performed to establish virtuous cycles. Their findings were in consonance with the risk minimization efforts of electronics companies in spite of declining profitability.

Khalid, Lord and Dixon (2012) examine the level of environmental management accounting (EMA) implementation in Malaysian companies. It was discovered that the elements of environment-related management accounting within some of the organizations were driven by a motivation to reduce costs rather than by environmental conservation.

Beredugo and Mefor (2012) investigate the relationship between environmental accounting, reporting and sustainable development in Nigeria. The Pearson correlation coefficient and OLS were used for the statistical analysis; the results revealed that there is a significant relationship between environmental accounting, reporting and sustainable development. It was found that environmental accounting (EA) motivates organizations to monitor their GHG emissions and other environmental data against reduction targets, and that non-compliance with environmental accounting and reporting regulations has adverse effects.

From the environmental accounting (EA) and corporate disclosure (CD) perspectives, the factors of utmost importance are (1) profitability, (2) financial leverage, (3) regulatory pressure, (4) social and moral responsibility, (5) legal and cultural factors, (6) independent audit, (7) company size, and (8) industry type. More in-depth and standardized measures should be taken into paramount consideration in establishing the statistical criteria and in providing a basis for differentiation between the various dimensions and degrees of environmental accounting (EA) and corporate sustainability (CS).

Hernádi and Bettini (2012) indicate that companies have a pivotal role in achieving sustainability: their current activities have an effect not only on today's world but on the future as well. Companies themselves are slowly coming to understand this, but few know how to achieve corporate sustainability. Traditional accounting systems do not deal with accounting for social and environmental efforts and cannot demonstrate them. For this reason, sustainability accounting has gone a step beyond green accounting and should be thoroughly emphasized by researchers and the underlying concerned parties. Recommendations were made that such decisions must be based on the pertinent information provided by sustainability accounting, which invariably contributes to the economic, social and environmental perspectives. Following the systematic sequence of this study, it is apparent that no statistical tools were employed; retrospectively, the narrative relies heavily on prescriptive argument rather than empirical evidence.
Okoye and Ezejiofor (2013) study the appraisal of sustainability environmental accounting in enhancing corporate performance and economic growth in Nigeria. The Pearson product-moment correlation coefficient was used as the basis of analysis. The findings show that sustainable environmental accounting has a significant impact on corporate productivity. Kilian and Hennings (2014) study the relationship between corporate social responsibility and environmental accounting in German controversial industries. A sample of 30 German DAX companies over a period of 11 years (1998-2009), spanning subsectors such as consumer goods, financial services, chemicals and pharmaceuticals, automobiles, transportation, energy, manufacturing and tourism, was taken into perspective. The authors used a combination of qualitative and quantitative approaches. Their results indicate a coherent transition from the qualitative study to a category system that encompasses both CSR philosophies and CSR-related activities as the normative basis of CSR communication. Agbiogwu, Ihendinihu and Okafor (2016) examine the impact of environmental and social costs on the performance of Nigerian companies, comprising net profit margin, earnings per share and return on capital employed. The study was carried out on a sample of 10 randomly selected firms in Nigeria for the period 2014. The researchers employed the t-test for the empirical investigation and found that the sampled companies' environmental and social costs significantly affect the net profit margin, earnings per share and return on capital employed of manufacturing companies. The authors recommended that government should mandate manufacturing companies to adhere strictly to environmental laws. Kumar (2017) explores the links between environmental accounting and the triple bottom line, quantitative environmental reporting and standard methods, voluntary environmental disclosure and legal requirements, company size and volume of environmental disclosure, and material flow analysis and life cycle assessment in achieving sustainable development, using a sample of Bangladeshi companies. Paired-sample tests, cross-tabulations and matrices were the statistical tools deployed for the analyses. The findings revealed that the sustainability of corporations was associated with economic, social and environmental performance. Other factors, namely quantitative environmental reporting, standard methods, voluntary environmental disclosure, legal requirements, company size, volume of environmental disclosure, material flow analysis and life cycle assessment, were found to work as complements in enhancing economic, social and environmental performance to achieve sustainable development in Bangladeshi corporations. Ironkwe and Success (2017) present an empirical examination of environmental accounting and sustainable development. The study focuses on the Niger Delta area of Nigeria, using Spearman's rank correlation and chi-square to analyse the bidirectional relationship between economic stability, sustainable development and environmental accounting. Their study reveals that environmental accounting is imperative for sustainable development in Nigeria and should be adopted by all companies which fall within the purview of the Niger Delta domain of Nigeria.
Giang, Binh, Thuy, Ha and Loan (2020) examine Environmental Accounting (EA) for Sustainable Development (SD) in Vietnamese companies. The sample cuts across 80 companies in the mining, manufacturing and processing industries. Data are analysed using multivariate linear regression. The results reveal that determinants such as managers' perceptions of costs and benefits, environmental changes, characteristics of the scale of production and business activities of enterprises, and pressures to disclose sustainable environmental information and reporting have significant influences on the development of environmental accounting for sustainable development. Drawing from the empirical literature on Environmental Accounting (EA) and Corporate Sustainability (CS), it can be inferred that most scholarly studies on this topical issue have been concentrated in industrialized nations, and that studies on developing countries and emerging markets remain insufficient for this dynamic and ever-evolving issue in Corporate Reporting (CR), even though there is circumstantial evidence of wasteful energy dissipation in these settings. Conclusion and Recommendations In the foregoing sections, we have examined the empirical investigations, conceptualizations and theoretical antecedents of a vast body of studies on Environmental Accounting (EA) and Corporate Sustainability (CS). The foregoing discloses that explorations of Environmental Accounting (EA) and Corporate Sustainability (CS) cannot be waved aside, owing to their resultant effects, which impinge on entities. Mandatory obligations should be instituted, in letter and spirit, for entities to account for their substantial environmental costs, environmental liabilities, water effluent discharges, environmental pollution and greenhouse gas emissions across diverse subsectors, and to ameliorate the multiplier effects that could jeopardize their operational capabilities. Holistic policy measures and mandated documentation on Environmental Impact Assessment (EIA) should be institutionalized by corporations in developing countries and emerging markets, and made imperative, so that environmental laws are not circumvented or violated. Furthermore, the EIA should be used as a reference point by companies, and strict enforcement of the established framework should be given paramount consideration. Standard setters should yield to the incessant calls and proposals for the establishment of an International Financial Reporting Standard (IFRS) on Environmental Accounting (EA) in order to enhance Corporate Sustainability (CS). Penultimately, Environmental Accounting is the subject of different theories, which may account for the mixed results in the extant literature on the relationship between Environmental Accounting (EA) dimensions and Corporate Sustainability (CS). These inconsistencies call for further scientific inquiry. Ultimately, given the research gap concerning profitability, financial leverage, industry type, and social and moral responsibility, an empirical investigation incorporating these dimensions of environmental accounting would contribute significantly to the repository of knowledge on the nexus between Environmental Accounting (EA) and Corporate Sustainability (CS). Finally, this research was constrained by a scarcity of pertinent data on environmental accounting.
Future research on the interplay between environmental accounting and corporate sustainability in Sub-Saharan Africa could bring more sophisticated analytical techniques, such as structural equation modelling, to the fore in order to predict the relationships among the relevant variables.
2020-12-03T09:07:45.846Z
2020-12-01T00:00:00.000
{ "year": 2020, "sha1": "f43f815f4bb5892862f5636b5ec41078825cd697", "oa_license": "CCBY", "oa_url": "https://ccsenet.org/journal/index.php/ijbm/article/download/0/0/44326/46724", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "37793cb7604e686d7e1590e9f6491078a5104e9d", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Political Science" ] }
15126425
pes2o/s2orc
v3-fos-license
Mechanisms of Candida albicans Trafficking to the Brain During hematogenously disseminated disease, Candida albicans infects most organs, including the brain. We discovered that a C. albicans vps51Δ/Δ mutant had significantly increased tropism for the brain in the mouse model of disseminated disease. To investigate the mechanisms of this enhanced trafficking to the brain, we studied the interactions of wild-type C. albicans and the vps51Δ/Δ mutant with brain microvascular endothelial cells in vitro. These studies revealed that C. albicans invasion of brain endothelial cells is mediated by the fungal invasins, Als3 and Ssa1. Als3 binds to the gp96 heat shock protein, which is expressed on the surface of brain endothelial cells, but not human umbilical vein endothelial cells, whereas Ssa1 binds to a brain endothelial cell receptor other than gp96. The vps51Δ/Δ mutant has increased surface expression of Als3, which is a major cause of the increased capacity of this mutant to both invade brain endothelial cells in vitro and traffic to the brain in mice. Therefore, during disseminated disease, C. albicans traffics to and infects the brain by binding to gp96, a unique receptor that is expressed specifically on the surface of brain endothelial cells. Introduction Hematogenously disseminated candidiasis is a serious disease that remains associated with approximately 35% mortality, even with currently available treatment, and Candida albicans is the infecting organism in approximately 50% of patients [1,2]. During this disease, C. albicans is carried by the bloodstream to virtually all organs of the body, including the brain. Although candidal infection of the brain may not be clinically evident in adults with disseminated candidiasis, it is frequently found at autopsy in patients who die of this disease [3]. Even more importantly, candidal brain infection, especially meningitis, is a significant problem in premature infants who have risk factors for disseminated candidiasis, even in the absence of detectable candidemia [4,5]. To invade the brain parenchyma, blood-borne C. albicans cells must adhere to and traverse the endothelial cell lining of the blood vessels within the central nervous system. Brain endothelial cells are significantly different from those lining systemic blood vessels. For example, they have tight junctions that are not present in the endothelial cells in other vascular beds. Forming the blood-brain barrier, brain endothelial cells restrict the diffusion of large or hydrophilic molecules into the central nervous system, while allowing the diffusion of small hydrophobic molecules [6]. More importantly, some microbial pathogens, such as Neisseria meningitidis, Streptococcus pneumoniae, Escherichia coli K1, and Cryptococcus neoformans have an enhanced capacity to adhere to and invade human brain microvascular endothelial cells (HBMECs), which enables them to preferentially infect the central nervous system via the hematogenous route [7][8][9][10][11]. Thus, these pathogens can exploit the unique characteristics of HBMECs to specifically infect the brain. Studies using human umbilical vein endothelial cells (HUVECs) as representative systemic endothelial cells have demonstrated that C. albicans adheres to, invades, and damages these cells in vitro [12,13]. One mechanism by which C. albicans invades these cells is by stimulating its own endocytosis, which is induced when the C.
albicans invasins, Als3 and Ssa1, bind to receptors such as N-cadherin and HER2 on the endothelial cell surface [14][15][16][17]. C. albicans yeast and hyphae can also invade HBMECs by inducing their own endocytosis [18,19]. However, the mechanism by which this pathogen invades these endothelial cells and infects the brain is poorly understood. Recently we discovered that C. albicans VPS51 is up-regulated by contact with HUVECs in vitro, and that a vps51/vps51 insertion mutant is defective in damaging these endothelial cells [20]. In Saccharomyces cerevisiae, Vps51 is known to bind to the Vps52/53/54 complex and is required for the retrograde transport of proteins from endosomes to the late Golgi [21,22]. Although the function of Vps51 in C. albicans has not been studied in detail, the vps51/vps51 insertion mutant has a fragmented vacuole, similar to the corresponding S. cerevisiae mutant [20][21][22]. Thus, Vps51 likely plays a role in protein trafficking in C. albicans. In the current study, we investigated how deletion of VPS51 affects the virulence of C. albicans during hematogenous infection. We found that the vps51Δ/Δ null mutant exhibits a preferential tropism for the brain. This tropism is mediated in part by the enhanced exposure of Als3 on the surface of the vps51Δ/Δ mutant, which binds to gp96 on the surface of HBMECs and mediates invasion of these endothelial cells. We further discovered that gp96 functions as a receptor for wild-type C. albicans on HBMECs, but not HUVECs, indicating that this organism invades the central nervous system by binding to a receptor that is expressed specifically on HBMECs. Results Deletion of VPS51 causes reduced mortality and decreased kidney and liver fungal burden during hematogenously disseminated candidiasis To investigate the role of Vps51 in the virulence of C. albicans, we inoculated mice via the lateral tail vein with a wild-type strain, a vps51Δ/Δ mutant, and a vps51Δ/Δ+pVPS51 complemented strain and then monitored their survival over time. We found that all mice infected with the vps51Δ/Δ mutant survived for the entire 21-day observation period, whereas all mice infected with the wild-type strain died within 7 days after inoculation (Figure 1A). Complementing the vps51Δ/Δ mutant with an intact copy of VPS51 restored its virulence to wild-type levels, thus confirming that Vps51 is required for the maximal virulence of C. albicans. The greatly reduced virulence of the vps51Δ/Δ mutant was further verified by infecting mice with a 6-fold higher inoculum. As expected, mice infected with the wild-type strain at this higher inoculum died rapidly, with a median survival of only 3 days (Figure 1B). However, all mice infected with the vps51Δ/Δ mutant still survived. Therefore, Vps51 is necessary for the full virulence of C. albicans. The mouse model of hematogenous disseminated candidiasis mimics many aspects of this disease in humans, particularly the formation of microabscesses in most organs [23,24]. We therefore investigated the effects of deleting VPS51 on organ fungal burden. During the first 4 days of infection, the kidneys and livers of mice infected with the vps51Δ/Δ mutant contained significantly fewer organisms than those of mice infected with either the wild-type or vps51Δ/Δ+pVPS51 complemented strain (Figure 1C and D). Furthermore, the kidney fungal burden of mice infected with the vps51Δ/Δ mutant progressively declined after the first day of infection.
In contrast, the kidney fungal burden of mice infected with the wild-type and vps51Δ/Δ+pVPS51 complemented strains progressively increased for the first 4 days post-infection, after which these mice began to die. These results further demonstrate that the overall virulence of the vps51Δ/Δ mutant is decreased. Absence of Vps51 or Vps53 results in increased brain fungal burden A surprising result was that during the first 4 days of infection, the brain fungal burden of mice infected with the vps51Δ/Δ mutant was significantly greater than that of mice infected with either the wild-type or vps51Δ/Δ+pVPS51 complemented strain (Figure 1E). Indeed, after 3 days of infection, the brains of mice infected with the vps51Δ/Δ mutant contained a median of 50-fold more organisms than those of mice infected with the wild-type strain. Despite having a high brain fungal burden, the mice infected with the vps51Δ/Δ mutant did not appear to be sick and had no obvious signs of neurological disease. Moreover, beginning on the fourth day of infection, these mice progressively cleared the organisms from their central nervous system. These results suggest that while the overall virulence of the vps51Δ/Δ mutant is decreased, it has a distinct tropism for the brain. We verified these quantitative culture results by performing histopathologic analysis of the brains of the infected mice. Foci containing multiple organisms were visible in the brains of mice infected with the vps51Δ/Δ mutant, especially in the hippocampus (Figure 1F). In sharp contrast, only rare organisms were visible in the brains of the mice infected with either the wild-type or vps51Δ/Δ+pVPS51 complemented strains, and these organisms were typically either solitary or in pairs. To determine if another member of the Vps51/52/53/54 complex is required for maximal virulence and enhanced brain tropism, we constructed and analyzed a vps53Δ/Δ mutant. This strain also caused no mortality in mice following tail vein inoculation (Figure 2A). Furthermore, it accumulated at significantly higher levels in the brain than did the wild-type and vps53Δ/Δ+pVPS53 complemented strains (Figure 2B). Collectively, these results demonstrate that the Vps51/52/53/54 complex plays a key role in virulence, and that C. albicans strains that lack components of this complex preferentially infect the brain. The vps51Δ/Δ mutant has an increased capacity to adhere to and invade HBMECs To cross the endothelial cell lining of the vasculature, C. albicans must first adhere to these endothelial cells and then invade through them. We hypothesized that the vps51Δ/Δ mutant had increased capacity to infect the brain because it preferentially adhered to and invaded the unique endothelial cells that line the blood vessels of the central nervous system. To test this hypothesis, we compared the interactions of this mutant with HUVECs and HBMECs in vitro. We found that the adherence of the vps51Δ/Δ mutant to HUVECs was increased by only 22% compared to the wild-type strain (Figure 3A). However, the adherence of this mutant to HBMECs was increased by 95% (Figure 3B). There was an even greater difference in the capacity of the vps51Δ/Δ mutant to induce its own endocytosis by HUVECs compared to HBMECs. The endocytosis of the vps51Δ/Δ mutant by HUVECs was 58% lower than that of the wild-type strain (Figure 3C). In contrast, the endocytosis of this mutant by HBMECs was 39% higher than the wild-type strain (Figure 3D).
Complementing the vps51Δ/Δ mutant with an intact copy of VPS51 restored its interactions with both types of endothelial cells to wild-type levels. The increased capacity of the vps51Δ/Δ mutant to adhere to and invade HBMECs compared to HUVECs provides a likely explanation for the enhanced tropism of this mutant for the brain. Author Summary During hematogenously disseminated infection, the fungus Candida albicans is carried by the bloodstream to virtually all organs in the body, including the brain. C. albicans infection of the brain is a significant problem in premature infants with disseminated candidiasis. To infect the brain, C. albicans must adhere to and invade the endothelial cells that line cerebral blood vessels. These endothelial cells express unique proteins on their surface that are not expressed by endothelial cells of other vascular beds. Here, we show that C. albicans infects the brain by binding to gp96, a heat shock protein that is uniquely expressed on the surface of brain endothelial cells. Gp96 is bound by the C. albicans Als3 invasin, which induces the uptake of this organism by brain endothelial cells. The C. albicans Ssa1 invasin also mediates fungal uptake by brain endothelial cells, but does so by binding to a receptor other than gp96. Thus, during hematogenously disseminated infection, C. albicans traffics to and infects the brain by binding to gp96, a receptor that is expressed specifically on the surface of brain endothelial cells. Gp96 mediates C. albicans invasion of HBMECs Next, we sought to identify the HBMEC receptor for both wild-type C. albicans and the vps51Δ/Δ mutant. HBMECs are known to express high amounts of the heat shock protein gp96 on their cell surface, whereas HUVECs do not [9]. Furthermore, gp96 functions as an HBMEC-specific receptor for E. coli K1 strains that cause neonatal meningitis [9]. We used multiple complementary approaches to evaluate whether gp96 expression is required for C. albicans to invade HBMECs. First, we tested the capacity of an anti-gp96 antibody to inhibit HBMEC endocytosis of C. albicans. This antibody reduced the endocytosis of wild-type C. albicans by 24% and the vps51Δ/Δ mutant by 48% (Figure 4A). Second, we determined the effects of siRNA-mediated knockdown of gp96 on endocytosis. HBMECs transfected with gp96 siRNA endocytosed 52% fewer wild-type C. albicans cells and 82% fewer vps51Δ/Δ cells than did HBMECs transfected with control siRNA (Figure 4B). Importantly, the effect of gp96 knockdown on endocytosis was specific for HBMECs because knockdown of gp96 in HUVECs had no effect on their capacity to endocytose C. albicans (Figure 4C). Also, knockdown of gp96 did not inhibit HBMEC endocytosis of transferrin (Figure 4D), demonstrating that reducing gp96 protein levels did not cause a global decrease in receptor-mediated endocytosis. Collectively, these results indicate that gp96 is required for maximal HBMEC endocytosis of both wild-type C. albicans and the vps51Δ/Δ mutant. To further explore these findings, we investigated the effects of overexpressing gp96 on the endocytosis of C. albicans. We found that overexpression of gp96 in HBMECs enhanced the endocytosis of the wild-type strain and the vps51Δ/Δ mutant by 100% and 115%, respectively (Figure 4E). Similarly, heterologous expression of human gp96 in Chinese hamster ovary (CHO) cells resulted in a 42% increase in the endocytosis of wild-type C.
albicans and a 102% increase in the endocytosis of the vps51Δ/Δ mutant compared to control CHO cells transfected with the empty vector (Figure 4F). Therefore, these combined results demonstrate that gp96 functions as an HBMEC receptor that mediates the endocytosis of both wild-type C. albicans and the vps51Δ/Δ mutant. C. albicans Als3 and Ssa1 mediate HBMEC endocytosis in vitro Our previous studies revealed that the C. albicans proteins Ssa1 and Als3 function as invasins that induce the endocytosis of this organism by HUVECs [14,15]. To investigate the roles of these fungal proteins in HBMEC invasion, we analyzed ssa1Δ/Δ and als3Δ/Δ single mutants, as well as vps51Δ/Δ ssa1Δ/Δ and vps51Δ/Δ als3Δ/Δ double mutants. Approximately 30% fewer hyphae of the ssa1Δ/Δ single mutant were endocytosed by HBMECs as compared to the wild-type parent strain and the ssa1Δ/Δ+pSSA1 complemented strain (Figure 5A). Similarly, the endocytosis of the vps51Δ/Δ ssa1Δ/Δ double mutant was significantly lower than the vps51Δ/Δ single mutant. Thus, Ssa1 is required for the maximal endocytosis of both wild-type and vps51Δ/Δ mutant strains of C. albicans by HBMECs in vitro. Als3 played a greater role than Ssa1 in stimulating the endocytosis of C. albicans by HBMECs in vitro. Both the als3Δ/Δ single mutant and the vps51Δ/Δ als3Δ/Δ double mutant were endocytosed extremely poorly by these endothelial cells (Figure 5B), indicating that Als3 is essential for the endocytosis of C. albicans by HBMECs in vitro. To determine whether Ssa1 and Als3 mediate the endocytosis of C. albicans by directly interacting with endothelial cells, we used a heterologous expression strategy in which we expressed C. albicans SSA1 or ALS3 in the normally non-invasive yeast, Saccharomyces cerevisiae [25]. Expression of C. albicans SSA1 in S. cerevisiae resulted in a 300% increase in the endocytosis of this organism by HUVECs and a 43% increase in its endocytosis by HBMECs, as compared to the control strain of S. cerevisiae (Figure 6A and B). Moreover, expression of C. albicans ALS3 in S. cerevisiae resulted in a 2050% and 1880% increase in endocytosis by HUVECs and HBMECs, respectively (Figures 6C and D). Collectively, these data demonstrate that Ssa1 is a more potent inducer of fungal endocytosis by HUVECs than by HBMECs, whereas Als3 can induce endocytosis by HUVECs and HBMECs with similar efficacy. As HUVECs do not express gp96 on their surface [9], HUVEC endocytosis of S. cerevisiae expressing C. albicans SSA1 or ALS3 is mediated by receptors other than gp96, such as N-cadherin and HER2 [14][15][16][17]. Als3 interacts with gp96 to induce HBMEC endocytosis The above results suggested a model in which Als3 on the surface of C. albicans hyphae binds to gp96 on the surface of HBMECs and induces endocytosis. To test this model, we analyzed the effects of siRNA knockdown on the endocytosis of the S. cerevisiae strain that expressed C. albicans Als3. As predicted, knockdown of gp96 in HBMECs reduced the endocytosis of the Als3-expressing strain of S. cerevisiae by 79% compared to control HBMECs (Figure 6E). We also tested the capacity of different C. albicans mutants and strains of S. cerevisiae to bind gp96 in HBMEC membrane protein extracts. As predicted by our endocytosis results, the vps51Δ/Δ mutant bound more gp96 than did the wild-type strain (Figure 7A). Also, the ssa1Δ/Δ mutant bound slightly less gp96 than did the wild-type strain, and the als3Δ/Δ mutant bound very poorly to this protein. Finally, the strain of S.
cerevisiae that expressed C. albicans Als3 bound to gp96, whereas the control strain of S. cerevisiae did not (Figure 7B), thus indicating that Als3 directly interacts with gp96. Next, we used flow cytometric analysis of C. albicans hyphae stained with either anti-HSP70 or anti-Als3 antibodies to quantify the levels of Ssa1 and Als3 that were exposed on the surface of the various strains. Although the vps51Δ/Δ mutant had normal Ssa1 surface exposure (data not shown), it had greater surface exposure of Als3 than did the wild-type and vps51Δ/Δ+pVPS51 complemented strains (Figure 7C). The greater surface exposure of Als3 by the vps51Δ/Δ mutant likely contributes to its enhanced capacity to induce HBMEC endocytosis. Ssa1 is important for brain invasion by wild-type C. albicans whereas Als3 is necessary for brain invasion by the vps51Δ/Δ mutant Lastly, we investigated the roles of Ssa1 and Als3 in mediating brain invasion in vivo by both wild-type and vps51Δ/Δ mutant strains of C. albicans. Mice were inoculated with the various C. albicans strains via the tail vein and their brain fungal burden was determined 3 days later. Similar to our previous results [14], the brain fungal burden of mice infected with the ssa1Δ/Δ single mutant was significantly less than that of mice infected with either the wild-type strain or the ssa1Δ/Δ+pSSA1 complemented strain (Figure 8A). However, the brain fungal burden of mice infected with the vps51Δ/Δ ssa1Δ/Δ double mutant was only 1.7-fold lower than that of mice infected with the vps51Δ/Δ single mutant, a difference that did not achieve statistical significance (p = 0.053). Taken together, these results indicate that Ssa1 is necessary for wild-type C. albicans to cause maximal brain infection, but that it plays a relatively minor role in the enhanced brain tropism of the vps51Δ/Δ mutant. Different results were obtained with strains that lacked Als3. The brain fungal burden of mice infected with the als3Δ/Δ single mutant was similar to that of mice infected with the wild-type strain (Figure 8B). In contrast, mice infected with the vps51Δ/Δ als3Δ/Δ double mutant had 5.5-fold fewer organisms in their brain compared to mice infected with the vps51Δ/Δ single mutant. Therefore, although Als3 is dispensable for wild-type C. albicans to infect the brain, it is important for the vps51Δ/Δ mutant to achieve maximal brain fungal burden. Discussion In the mouse model of disseminated candidiasis, kidney fungal burden is directly correlated with mortality [23,26]. Thus, many studies of this disease have used kidney fungal burden as the primary endpoint when analyzing either the virulence of mutant strains of C. albicans in mice or the susceptibility of mutant strains of mice to disseminated candidiasis [27][28][29][30][31]. However, during disseminated candidiasis in both mice and humans, C. albicans infects virtually all organs in the body. To do so, the blood-borne organisms must adhere to and invade the vascular beds of these organs. Importantly, there are significant differences among the endothelial cells that line the vasculature of the different organs, as well as the immunologic milieu of these organs [32,33]. These differences provide a compelling rationale to investigate the capacity of C. albicans to traffic to and persist in organs other than the kidney. The brain is a particularly important target organ in neonates with hematogenously disseminated candidiasis [4,5], and its blood vessels are lined with the unique endothelial cells that form the blood-brain barrier.
Our studies with a vps51Δ/Δ mutant strain of C. albicans led us to discover that C. albicans traffics to the brain and invades cerebral blood vessels in part by binding to gp96 that is expressed on the surface of brain endothelial cells. We had previously identified C. albicans VPS51 through a microarray study that was designed to discover genes that were upregulated when the organism adhered to HUVECs [20]. In that study, we determined that a vps51/vps51 insertion mutant had reduced capacity to damage HUVECs and increased susceptibility to antimicrobial peptides [20]. These in vitro findings led us to predict that VPS51 would be required for the maximal virulence of C. albicans during disseminated disease. In the current study, we verified this prediction by determining that mice infected with a vps51Δ/Δ deletion mutant had no mortality and progressively cleared this strain from their kidneys and liver. A unique and unexpected phenotype of the vps51Δ/Δ mutant was its marked propensity to infect the brain. In the few previous studies in which the brain fungal burden of mice infected with mutant strains of C. albicans was determined, the fungal burden in the brain generally paralleled the fungal burden in the kidney. For example, mice infected with ecm33Δ/Δ and hog1Δ/Δ mutants had improved survival and reduced fungal burden in both the kidney and the brain, as compared to mice infected with the wild-type strain [34,35]. Thus, it was unusual to find that mice infected with the vps51Δ/Δ mutant had reduced kidney fungal burden, yet significantly increased brain fungal burden. Our finding of the enhanced capacity of the vps51Δ/Δ mutant to adhere to and invade HBMECs, as compared to HUVECs, provides a likely explanation for its brain tropism. One difference between HBMECs and HUVECs is that the former cells express gp96 on their surface, whereas the latter cells do not [9]. Multiple lines of evidence indicate that gp96 functions as an HBMEC receptor for both wild-type C. albicans and the vps51Δ/Δ mutant. For example, an anti-gp96 antibody and siRNA knockdown of gp96 inhibited HBMEC endocytosis of C. albicans. Furthermore, overexpression of gp96 in HBMECs and the heterologous expression of human gp96 in CHO cells increased the endocytosis of C. albicans. Finally, wild-type C. albicans cells bound to gp96 in extracts of HBMEC membrane proteins, and the highly endocytosed vps51Δ/Δ mutant bound even more of this protein. Collectively, these data indicate that gp96 is an HBMEC receptor for C. albicans. It was notable that in both the anti-gp96 antibody studies and the gp96 siRNA experiments, inhibition of gp96 function or expression had a greater effect on the endocytosis of the vps51Δ/Δ mutant than the wild-type strain (78% reduction for the vps51Δ/Δ mutant vs. 38% reduction for the wild-type strain; p<0.0001). These results indicate that the vps51Δ/Δ mutant preferentially utilizes gp96 as a receptor to invade HBMECs. They further suggest that the enhanced brain tropism of the vps51Δ/Δ mutant is likely due to its increased binding to gp96 on the surface of brain endothelial cells. Although these results strongly indicate that gp96 is important for HBMEC endocytosis of C. albicans, the findings that neither the anti-gp96 antibody nor siRNA knockdown of gp96 completely blocked the endocytosis of this organism suggest that it can invade HBMECs by additional mechanisms.
Such mechanisms include the induction of endocytosis by binding to one or more receptors that are independent of gp96, such as N-cadherin, and active penetration, in which hyphae physically push their way into host cells by progressively elongating [16,36]. Because gp96 also functions as a molecular chaperone [37], it is possible that it could be involved in the endocytosis of C. albicans by altering the expression or function of other proteins on the surface of HBMECs. Our data indicate that this possibility is remote because HBMEC endocytosis of C. albicans was inhibited by the anti-gp96 antibody, which is unlikely to affect the chaperone function of gp96. In addition, siRNA knockdown of gp96 inhibited the endocytosis of C. albicans by HBMECs, but not HUVECs, in which gp96 is located intracellularly. Moreover, gp96 knockdown did not affect transferrin uptake in HBMECs, a process that is mediated by the transferrin receptor. Thus, the role of gp96 in inducing the endocytosis of C. albicans is likely due to its function as a cell surface receptor rather than a chaperone. Gp96 has been reported to be expressed on the surface of some epithelial cells, where it functions as a receptor for Listeria monocytogenes, Neisseria gonorrhoeae and bovine adeno-associated virus [38][39][40]. In addition, gp96 on the surface of HBMECs is known to be bound by E. coli K1 OmpA [9]. This binding induces the endocytosis of E. coli by activating signal transducer and activator of transcription 3 (STAT3), which functions upstream of phosphatidylinositol-3 kinase and protein kinase C-α [41][42][43]. Whether the binding of C. albicans to gp96 activates a similar signaling pathway remains to be determined. C. albicans possesses at least two invasin-like proteins, Ssa1 and Als3. Both of these proteins induce the endocytosis of C. albicans by HUVECs by binding to N-cadherin and other endothelial cell receptors [14,15]. These two invasins may function cooperatively because the endocytosis defect of an ssa1Δ/Δ als3Δ/Δ double mutant is not greater than that of an als3Δ/Δ single mutant [14]. Our current studies with the C. albicans ssa1Δ/Δ and als3Δ/Δ mutants and strains of S. cerevisiae that overexpress C. albicans Ssa1 and Als3 demonstrate that both of these proteins can induce HBMEC endocytosis. The results of these in vitro experiments also indicate that Als3 is more important than Ssa1 in inducing HBMEC endocytosis, probably because it plays a greater role in binding to gp96. Our mouse studies suggest that Ssa1 is required for the maximal trafficking of wild-type C. albicans to the brain because the brain fungal burden of mice infected with the ssa1Δ/Δ mutant was significantly less than that of mice infected with the wild-type strain. These results are similar to our previous data [14]. However, deletion of SSA1 in the vps51Δ/Δ mutant had only a minor effect on brain trafficking. It is probable that in the vps51Δ/Δ mutant, the effects of deleting SSA1 were masked by the increased surface expression of Als3. A paradoxical finding was that although the endocytosis of the als3Δ/Δ mutant by HBMECs was severely impaired in vitro, this mutant had normal trafficking to the brain in mice. The normal virulence of an als3Δ/Δ mutant in the mouse model of disseminated candidiasis has recently been reported by others [44]. It is unclear why there is such a large discrepancy between the host cell interactions of the als3Δ/Δ mutant in vitro and its virulence in mice, especially because ALS3 is highly expressed in vivo [45,46].
The most probable explanation for these paradoxical results is that other invasins, such as Ssa1 and perhaps other proteins, compensate for the absence of Als3. Because the in vitro experiments were performed using human endothelial cells and the virulence experiments were performed in mice, it is theoretically possible that differences between human and mouse gp96 may account for the differences between the in vitro and in vivo results. However, human and mouse gp96 are 97.5% identical at the amino acid level, making this possibility unlikely. Importantly, our results indicate that Als3 does play a role in the enhanced brain tropism of the vps51Δ/Δ mutant because the brain fungal burden of mice infected with the vps51Δ/Δ als3Δ/Δ double mutant was significantly lower than that of mice infected with the vps51Δ/Δ single mutant. Because protein trafficking is likely abnormal in the vps51Δ/Δ mutant, we speculate that this strain has reduced expression of compensatory proteins in response to deletion of ALS3. On the other hand, the vps51Δ/Δ als3Δ/Δ double mutant still had greater tropism for the brain compared to the wild-type strain. This result suggests that the overexpression of additional proteins, other than Als3, contributes to the brain tropism of the vps51Δ/Δ single mutant. The combined results of these experiments support a model in which C. albicans invades the brain during hematogenously disseminated infection by binding to proteins that are specifically expressed on the surface of brain endothelial cells. One of these proteins is gp96, which is bound predominantly by C. albicans Als3 (Figure 9). At least one other brain endothelial cell protein functions as a receptor for C. albicans Ssa1. As the endothelial cells of other vascular beds also express unique surface proteins, it is highly probable that blood-borne C. albicans utilizes different endothelial cell surface proteins to infect different organs. Identification of these organ-specific receptors for C. albicans may lead to novel approaches to block these receptors and thereby prevent hematogenous dissemination. Fungal strains and plasmids The fungal strains used in this study are listed in Supplemental Table S1. All C. albicans mutant strains constructed for this study were derived from strain BWP17 [47]. Deletion of the entire protein coding regions of both alleles of VPS51 was accomplished by successive transformation with ARG4 and HIS1 deletion cassettes that were generated by PCR using the oligonucleotides vps51-f and vps51-r (the oligonucleotide sequences are listed in Supplemental Table S2) [47]. The resulting strain was subsequently transformed with pGEM-URA3 [47] to re-integrate URA3 at its native locus. The vps53Δ/Δ mutant was constructed similarly, using the oligonucleotides vps53-f and vps53-r. To construct the VPS51 complemented strain (vps51Δ/Δ+pVPS51), a 2.6 kb fragment containing VPS51 was generated by high-fidelity PCR with the primers vps51-rev-f and vps51-rev-r using genomic DNA from C. albicans SC5314 as the template. This PCR product was digested with NcoI, and then subcloned into pBSK-Ura, which had been linearized with NcoI. The resulting construct was linearized with NotI and PstI to direct integration at the URA3 locus of a Ura- vps51Δ/Δ mutant strain. The vps53Δ/Δ-complemented strain (vps53Δ/Δ+pVPS53) was generated similarly, except that primers vps53-rev-f and vps53-rev-r were used to PCR-amplify a 3.3 kb DNA fragment containing VPS53.
To delete the entire protein coding region of ALS3 in the vps51Δ/Δ mutant, deletion cassettes containing ALS3 flanking regions and the URA3 or NAT1 selection markers were amplified by PCR with primers als3-pgem-KO-f and als3-pgem-KO-r, using pGEM-URA3 [47] and pJK795 [48] as templates, respectively. These PCR products were then used to successively transform a Ura- vps51Δ/Δ strain. The resulting als3Δ/Δ vps51Δ/Δ double mutant was plated on 5-fluoroorotic acid to select for a Ura- strain, which was then transformed with pGEM-URA3 as above. The als3Δ/Δ vps51Δ/Δ+pVPS51 complemented strain was generated the same way as was the vps51Δ/Δ+pVPS51 complemented strain. The ssa1Δ/Δ vps51Δ/Δ double mutant and its VPS51-complemented strain (ssa1Δ/Δ vps51Δ/Δ+pVPS51) were generated similarly to the als3Δ/Δ vps51Δ/Δ double mutant and its complemented strain, except that primers ssa1-pgem-f and ssa1-pgem-r were used to amplify the SSA1 deletion cassettes. The construction of the S. cerevisiae strain that expressed C. albicans ALS3 under the control of the ADH1 promoter and its control strain containing the backbone vector was described previously [25]. To express C. albicans SSA1 in S. cerevisiae, a 2.0 kb fragment containing the SSA1 protein coding region was generated by PCR with primers ssa1-exp-bglii-f and ssa1-exp-xhoi-r using pRP10-SSA1ORF as template [49]. The resulting SSA1 fragment was cloned downstream of the GAL1 promoter of pYES2.1/V5-His-TOPO using the pYES2.1 TOPO TA Expression Kit (Invitrogen) following the manufacturer's instructions. The control strain of S. cerevisiae was transformed with the backbone vector alone. Expression of C. albicans SSA1 was induced by growth in SC minimal medium containing 2% galactose following the manufacturer's protocol. [Figure 9. Model of the receptor-ligand interactions that mediate the endocytosis of C. albicans by HBMECs. C. albicans Als3 binds to gp96 on the surface of HBMECs and induces endocytosis. C. albicans Ssa1 binds to an HBMEC surface protein other than gp96, which also induces endocytosis. PM, plasma membrane. doi:10.1371/journal.ppat.1002305.g009] Murine model of disseminated candidiasis Male BALB/c mice weighing 18-20 g (Taconic Farms) were used for all animal experiments. For survival studies, 10 mice per strain were injected via the tail vein with either 5×10⁵ or 3×10⁶ yeast of the various C. albicans strains [50] and then monitored for survival three times daily. All inocula were confirmed by colony counting. In the organ fungal burden studies, the mice were inoculated with 5×10⁵ yeast as above. At various time points, 7 mice per strain were sacrificed and the kidney, liver, and brain were harvested. These organs were weighed, homogenized and quantitatively cultured. For histopathological analysis, a portion of the excised tissue was fixed in zinc-buffered formalin followed by 70% ethanol. The tissue was then embedded in paraffin, after which thin sections were prepared and stained with Gomori methenamine silver. They were examined by light microscopy. All mouse experiments were approved by the Animal Care and Use Committee at the Los Angeles Biomedical Research Institute and carried out according to the National Institutes of Health (NIH) guidelines for the ethical treatment of animals.
Endothelial cells HUVECs were harvested from umbilical cords with collagenase and grown in M-199 medium supplemented with 10% fetal bovine serum and 10% defined bovine calf serum (Gemini Bio-Products), and containing 2 mM L-glutamine with penicillin and streptomycin (Irvine Scientific), as previously described [51]. HBMECs were isolated from the capillaries in small fragments of the cerebral cortex, which were obtained by surgical resection from 4- to 7-year-old children with seizure disorders at Children's Hospital Los Angeles. HBMECs were harvested from these capillaries and maintained in a mixture of M-199 and Ham's F-12 media (1:1 v/v) supplemented with 10% fetal bovine serum, 1 mM sodium pyruvate, and 2 mM glutamine as described previously [52]. More than 98% of these cells were positive for Factor VIII-RAg and carbonic anhydrase, and negative for GFAP by flow cytometry. In addition, 99% of the cells took up DiI-Ac-LDL by immunocytochemistry. CHO K-1 cells expressing human gp96 were generated and grown as outlined before [9]. All cell types were grown at 37°C in 5% CO₂. Knockdown of gp96 by siRNA HBMECs or HUVECs were grown to 60% confluence in six-well plates, then transfected with either gp96 siRNA (catalog number HSS110955; Invitrogen) or a random control siRNA using Lipofectamine 2000 (Invitrogen), according to the manufacturer's instructions. Gp96 knockdown was verified by Western blotting of total endothelial cell lysates with an anti-gp96 monoclonal antibody (Santa Cruz Biotechnology). Affinity purification of gp96 using intact organisms HBMEC membrane proteins were isolated using octyl-glucopyranoside exactly as described previously [16]. Next, 2×10⁸ hyphae of the various C. albicans strains or 8×10⁸ yeast of the different S. cerevisiae strains were incubated on ice for 1 h with 250 µg of HBMEC membrane proteins in PBS with calcium and magnesium and containing 1.5% octyl-glucopyranoside and protease inhibitors. The unbound proteins were removed by extensive rinsing in the same buffer. Next, the proteins that had bound to the hyphae were eluted with 6 M urea. The eluted proteins were separated by SDS-PAGE and detected by immunoblotting with the anti-gp96 antibody using enhanced chemiluminescence (Pierce). Candidal adherence The adherence of C. albicans to HUVECs and HBMECs grown in 6-well tissue culture plates was measured by a modification of our previously described method [25]. Briefly, germ tubes of the various strains were generated by a 1-h incubation in RPMI 1640 medium (Irvine Scientific) at 37°C. The germ tubes were enumerated with a hemacytometer and suspended in HBSS at 200 cells/ml. After rinsing the endothelial cell monolayers twice with HBSS, 1 ml of the germ tube suspension was added to each well. The cells were incubated for 30 min, after which the nonadherent organisms were aspirated and the endothelial cell monolayers were rinsed twice with HBSS in a standardized manner. Next, the wells were overlaid with YPD agar and the number of adherent organisms was determined by colony counting. The adherence results were expressed as a percentage of the initial inoculum, which was verified by quantitative culture. Each strain was tested in triplicate on three different days. Candidal endocytosis The number of organisms internalized by the endothelial cells was determined using our standard differential fluorescence assay [15,16]. Briefly, endothelial cells on glass coverslips were infected with 10⁵ yeast-phase cells of each strain of C.
albicans in RPMI 1640 medium. After incubation for 3 h, the cells were fixed with 3% paraformaldehyde. The noninternalized cells were stained with anti-C. albicans rabbit serum (Biodesign International) that had been conjugated with Alexa 568 (Invitrogen). Next, the endothelial cells were permeabilized in 0.1% (vol/vol) Triton X-100 in PBS, after which both the internalized and the noninternalized organisms were stained with anti-C. albicans rabbit serum conjugated with Alexa 488 (Invitrogen). The coverslips were mounted inverted on a microscope slide and viewed under epifluorescence. The number of organisms endocytosed by the endothelial cells was determined by subtracting the number of noninternalized organisms (which fluoresced red) from the total number of organisms (which fluoresced green). At least 100 organisms were counted on each coverslip, and all experiments were performed in triplicate on at least three separate occasions. Transferrin uptake HBMECs were grown to 70% confluency in 6-well tissue culture plates and then incubated for 3 h in serum-free medium to deplete endogenous transferrin. Next they were incubated for 45 min in serum-free medium containing AlexaFluor 555-labeled transferrin (Invitrogen; 10 µg/ml). The unincorporated transferrin was removed by rinsing, after which the cells were incubated for an additional 30 min. Any remaining surface-bound transferrin was removed by rinsing the cells twice with ice-cold PBS containing Ca⁺⁺ and Mg⁺⁺ (PBS⁺⁺), followed by two 5-min incubations with ice-cold acid wash buffer (0.2 M acetic acid [pH 2.8], 0.5 M NaCl). Finally, the cells were washed three times with ice-cold PBS⁺⁺, detached with Cell Dissociation Buffer (Invitrogen), and suspended in PBS⁺⁺. Their transferrin content was determined by flow cytometry, analyzing at least 10,000 cells. Flow cytometry Flow cytometry was used to analyze the surface expression of Als3p on hyphae of the various strains using a minor modification of our previously described method [14]. Briefly, hyphae of the different strains of C. albicans were fixed in 3% paraformaldehyde and blocked with 1% goat serum. The hyphae were then incubated with either a rabbit polyclonal antiserum raised against rAls3-N or purified rabbit IgG. After extensive rinsing, the cells were incubated with a goat anti-rabbit secondary antibody conjugated with Alexa 488. The fluorescent intensity of the hyphae was measured by flow cytometry. Fluorescence data for 10,000 cells of each strain were collected. Statistical analyses The capacity of the various strains of C. albicans and S. cerevisiae to adhere to, and be endocytosed by, endothelial cells was compared using analyses of variance. Differences in the fungal burden of mice infected with these strains were analyzed using the Wilcoxon Rank Sum test. Differences in survival were analyzed using the Log-Rank test. Ethics statement The protocol for collecting umbilical cords for the harvesting of HUVECs used in these studies was approved by the Institutional Review Board of the Los Angeles Biomedical Research Institute at Harbor-UCLA Medical Center. This protocol was granted a waiver of consent because the donors remained anonymous. The protocol for using fragments of the cerebral cortex, obtained by surgical resection from 4- to 7-year-old children with seizure disorders, for isolation of HBMECs was approved by the Institutional Review Board of Children's Hospital Los Angeles.
These fragments were obtained from anonymous donors in 1992-1993 and the HBMECs used in the current studies were isolated at that time and stored in liquid nitrogen. The use of HBMECs in our studies is exempted because the donors are unknown and there is no information linking the HBMECs with the donors. The mouse studies were carried out in accordance with the National Institutes of Health guidelines for the ethical treatment of animals. This protocol was approved by the Institutional Animal Care and Use Committee (IACUC) of the Los Angeles Biomedical Research Institute at Harbor-UCLA Medical Center.
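As a concrete illustration of the quantitation described in the methods above, the following is a minimal Python sketch, with entirely hypothetical counts, of (i) the differential-fluorescence arithmetic (endocytosed organisms = total green-stained organisms minus red-stained non-internalized organisms) and (ii) a Wilcoxon rank-sum comparison of organ fungal burdens between two strains. The numbers are invented for illustration, and scipy's ranksums is used as a stand-in for whatever statistics software the authors actually employed.

```python
# Minimal sketch of the two quantitation steps described above, using
# hypothetical data (the paper's real counts are not reproduced here).
from scipy.stats import ranksums

def percent_endocytosed(total_green: int, noninternalized_red: int) -> float:
    """Percent of organisms internalized on one coverslip: red-stained cells
    are non-internalized; green staining (post-permeabilization) marks all."""
    return 100.0 * (total_green - noninternalized_red) / total_green

# Hypothetical triplicate coverslip counts (>=100 organisms counted on each).
wild_type = [percent_endocytosed(g, r) for g, r in [(120, 40), (135, 44), (128, 47)]]
mutant = [percent_endocytosed(g, r) for g, r in [(118, 22), (140, 25), (125, 20)]]
print("wild type  %:", [f"{v:.1f}" for v in wild_type])
print("vps51 null %:", [f"{v:.1f}" for v in mutant])

# Hypothetical organ fungal burdens (CFU/g, n = 7 mice per strain) compared
# with the Wilcoxon rank-sum test, as in the paper's statistical analyses.
burden_wt = [2.1e5, 3.4e5, 1.8e5, 2.9e5, 4.0e5, 2.2e5, 3.1e5]
burden_mut = [8.5e5, 1.2e6, 9.9e5, 7.4e5, 1.5e6, 1.1e6, 9.0e5]
stat, p = ranksums(burden_wt, burden_mut)
print(f"rank-sum statistic = {stat:.2f}, p = {p:.4f}")
```

The rank-sum test is used rather than a t-test because organ CFU counts are typically non-normal and span orders of magnitude, so a nonparametric comparison of the two groups is the safer choice.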
2016-05-12T22:15:10.714Z
2011-10-01T00:00:00.000
{ "year": 2011, "sha1": "7975f4a30d49966f8bd18d3ed962680e07d359d8", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plospathogens/article/file?id=10.1371/journal.ppat.1002305&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7975f4a30d49966f8bd18d3ed962680e07d359d8", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
241821720
pes2o/s2orc
v3-fos-license
Evaluation of strain-stress state of vertical tank reinforced by carbon tyre based on numerical researches in ANSYS PC The article presents the results of numerical studies of the stress-strain state of the models of a vertical cylindrical tank with corrosion wear and with the strengthening of the wall of the first ring by an external and internal composite tyre. Based on the results of the studies, the maximum percentage of corrosion wear of the wall of the first ring of the tank has been determined, and the dependences of the influence of the installation of the internal and external carbon composite tyre on the stress-strain state have been obtained. Introduction Nowadays, the greater part of the operating stock of steel tanks in the hydrocarbon processing industry of Russia has exceeded its regulatory service life. Long-term operation of capacitive equipment without timely overhaul results in defects that reduce operating reliability [1]. The current task is to develop new methods, or adapt existing ones, for the restoration of building structures using modern materials with high operational characteristics. The use of carbon composites in the reinforcement of supporting structures is attracting increasing attention from researchers in this field [2,3]. Adhesive bonding of metals to other materials has become a very reliable method of joining elements in items and structures, and has a number of advantages over other types of bonding. It should be noted that riveted, bolted and welded joints have an uneven distribution of stresses at the junction, are weakened by the holes under rivets and bolts, and increase the weight of structures. The bonding of metals and alloys has accordingly attracted considerable study: the works [4][5][6][7] show the results of tests of thin steel plates reinforced with carbon-plastic layers under cyclic quasi-static load. The authors conclude that the use of carbon-plastic laminates can significantly increase yield strength, ultimate strength and stability. Fiber orientation is an important factor in shear enhancement: the greatest shear strength and resistance can be obtained by orienting the carbon-plastic laminates along the direction of the stress fields. When the thickness of the adhesive changes, the failure mode changes from cohesive failure to interfacial failure of the adhesive on the steel surface. For cohesively fractured bonded joints, the maximum load increased as the adhesive thickness increased from 1 to 2 mm. Analysis of defects arising during the operation of tanks. Statistics show that the main cause of the failure of oil tanks is corrosive wear of the surfaces in contact with the corrosive medium (figure 1) [8,9]. The analysis of safety expert reviews showed that most of the corrosion defects are located on the bottom, the corner weld joint and the first ring of the tank wall, which is confirmed by the frequent replacement of these elements during overhaul repairs. The highest percentage of corrosion wear in the wall thickness was observed at heights of up to 30 cm above the level of the corner weld joint (figure 2). The main cause of corrosion damage is the presence of bottom water at these levels. According to regulatory documents, GOST R 51858-2002 [10] in particular, the mass fraction of water is not more than 0.5% for the first and second groups of oil, and not more than 1.0% for the third group.
The numerical research program provides three design models of tanks:
-a vertical stock tank (VST) with corrosion wear of the inner surface of the first wall ring over a significant length in the area of abutment to the bottom; the corrosion takes the form of groups of shells that merge into continuous strips, as well as point depressions of the pitting type;
-a VST with one inner tyre with a height of b = 1300 mm, made of carbon unidirectional tape Tape 230 on epoxy binder Resin 230;
-a VST with two tyres: the inner one with a height of b = 1300 mm and the outer one with a height of b = 300 mm, located at a height of 500 mm from the corner weld joint, both made of carbon unidirectional tape Tape 230 on epoxy binder Resin 230.
The efficiency of a single external steel shroud ring in restoring the bearing capacity of a tank is justified in the work of M.A. Tarasenko [11]. According to [12][13][14], the maximum effective stresses σmax, MPa, in the tank wall should not exceed the values specified in Table 1. Basic data for creating tank simulation models. To perform the stress-strain state calculations, the ANSYS software-computing complex is used, which allows:
-creating a finite element (FE) model and determining the stress-strain state of tank structures: tank wall, bottom, welded joints;
-setting distributed, local, hydrostatic and inertial loads;
-setting loads from previously applied stresses (welding stresses);
-solving elastic (linear) and elastic-plastic (nonlinear) problems.
To build a geometric model of the tank, a typical design of an RVS 5000 m³ tank, "TP 704-1-27", was adopted [15]. The finite element model of the tank was constructed using SHELL181 shell elements, which have a number of features inherent to thin-walled shells. Face meshing is used to create a uniform ordered mesh on the surface of the tank wall. The size of the finite element mesh was 0.05 m (square side). In order to improve the accuracy of the calculations at the wall-bottom interface, the FE grid was refined 4 times, using the Refinement function [16]. Statistical information on the finite element tank model is given in Table 2. Results of numerical calculation. Based on the results of the conducted studies, the dependence of the equivalent stresses σeq on the value of corrosion wear of the wall of the first ring in the range from 0 to 40% was obtained, both with reinforcement by carbon composite tyres and without reinforcement (figure 5). A diagram of the dependence of the deformations (wall deviation from vertical) of the tank wall on the value of corrosion wear is also given for the three design models. Analysis of the obtained data showed that the permissible stresses are exceeded when continuous corrosion of the first wall ring exceeds 25%. Installation of one internal carbon composite shroud, at 25% corrosion, reduces the level of maximum stresses in the wall by 7.25% and the deformations by 1.25%. Combined installation of internal and external carbon composite shrouds at a similar corrosion percentage showed a decrease in wall stresses of 8.27% and in deformations of 1.625%. Conclusions. The internal carbon composite shroud can be used effectively to restore service suitability and extend the life cycle of a tank with corrosion wear of not more than 40%. The combined use of internal and external carbon composite shrouds does not significantly further reduce the equivalent stresses arising in the wall of the first ring.
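To make the sensitivity of wall stress to corrosion wear concrete, the following is a minimal Python sketch of the membrane (hoop) stress in the first ring under hydrostatic load, σ = ρg(H − x)R/t. All dimensions and the product density below are illustrative assumptions for an RVS-5000-class tank, not values taken from the paper or from the TP 704-1-27 standard design; the point is only that, for a given fill level, stress grows as 1/(1 − wear).

```python
# Minimal sketch: hydrostatic hoop stress in the first ring of a vertical
# cylindrical tank as the wall corrodes uniformly. All numbers below are
# illustrative assumptions, NOT the design values used in the paper.

RHO = 900.0      # stored product density, kg/m^3 (assumed oil)
G = 9.81         # gravitational acceleration, m/s^2
R = 11.4         # tank radius, m (assumed for an RVS-5000-class tank)
H_FILL = 11.0    # product fill height, m (assumed)
X = 0.3          # height above the corner weld where wear peaks, m
T0 = 0.012       # as-built first-ring wall thickness, m (assumed)

def hoop_stress_mpa(t: float) -> float:
    """Membrane hoop stress sigma = rho * g * (H - x) * R / t, in MPa."""
    return RHO * G * (H_FILL - X) * R / t / 1e6

sigma0 = hoop_stress_mpa(T0)
for wear in (0.00, 0.10, 0.25, 0.40):
    t = T0 * (1.0 - wear)  # remaining thickness after uniform corrosion wear
    sigma = hoop_stress_mpa(t)
    print(f"wear {wear:4.0%}: t = {t * 1000:5.2f} mm, "
          f"sigma ~ {sigma:6.1f} MPa ({sigma / sigma0:.2f}x as-built)")
```

Under this simple membrane approximation the stress ratio at 25% wear is 1/(1 − 0.25) ≈ 1.33, which is consistent with the finite element result above that permissible stresses are first exceeded near 25% continuous corrosion; it also makes plausible why a tyre carrying only part of the hoop load shaves just a few percent off σeq, as in the reported 7.25% and 8.27% reductions.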
2020-10-28T18:33:13.617Z
2020-09-18T00:00:00.000
{ "year": 2020, "sha1": "f220d538854dc743e15d101c1957c918b709f58b", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/911/1/012009", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "cdcad3103c3468dd51cb7ccecd984503d0d8894a", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [] }
12457746
pes2o/s2orc
v3-fos-license
The lysosomal enzyme receptor protein (LERP) is not essential, but is implicated in lysosomal function in Drosophila melanogaster ABSTRACT The lysosomal enzyme receptor protein (LERP) of Drosophila melanogaster is the ortholog of the mammalian cation-independent mannose 6-phosphate (Man 6-P) receptor, which mediates trafficking of newly synthesized lysosomal acid hydrolases to lysosomes. However, flies lack the enzymes necessary to make the Man 6-P mark, and the amino acids implicated in Man 6-P binding by the mammalian receptor are not conserved in LERP. Thus, the function of LERP in sorting of lysosomal enzymes to lysosomes in Drosophila is unclear. Here, we analyze the consequence of LERP depletion in S2 cells and intact flies. RNAi-mediated knockdown of LERP in S2 cells had little or no effect on the cellular content or secretion of several lysosomal hydrolases. We generated a novel Lerp null mutation, LerpF6, which abolishes LERP protein expression. Lerp mutants have normal viability and fertility and display no overt phenotypes other than reduced body weight. Lerp mutant flies exhibit a 30–40% decrease in the level of several lysosomal hydrolases, and are hypersensitive to dietary chloroquine and starvation, consistent with impaired lysosome function. Loss of LERP also enhances an eye phenotype associated with defective autophagy. Our findings implicate Lerp in lysosome function and autophagy. INTRODUCTION In mammalian cells, the two mannose 6-phosphate (Man 6-P) receptors (MPRs), cation-independent (CI) and cation-dependent (CD) MPRs, function to transport newly synthesized lysosomal acid hydrolases from the trans-Golgi network (TGN) to the endosomal/lysosomal system (Ghosh et al., 2003). These receptors bind the acid hydrolases via Man 6-P tags that are added to the hydrolases in the cis-Golgi and simultaneously bind adaptor proteins, GGAs and AP-1, for their incorporation into clathrin-coated vesicles at the trans-Golgi interface. Interestingly, Dennes et al. identified a single MPR ortholog in Drosophila melanogaster that was termed LERP, for lysosomal enzyme receptor protein (Dennes et al., 2005). LERP is a type I transmembrane protein whose lumenal domain contains five repeats that share overall homology with the 15 lumenal repeats of the CI-MPR. LERP is localized to the TGN and endosomes in Drosophila S2 cells and interacts with the adaptor proteins GGA and AP-1 via acidic dileucine and tyrosine-based sequences in its cytoplasmic tail (Hirst et al., 2009; Kametaka et al., 2010). Furthermore, LERP is incorporated into clathrin-coated vesicles by a process that is dependent on GGA and AP-1 (Hirst et al., 2009). These features are consistent with LERP functioning as a receptor involved in transporting cargo from the TGN to its destination. In support of this concept, Dennes et al. expressed LERP in MPR-deficient mouse fibroblasts and reported that it partially rescues the missorting of several lysosomal acid hydrolases (Dennes et al., 2005). However, these investigators found that LERP fails to bind to a phosphomannan affinity column, and the amino acids implicated in Man 6-P binding in mammalian MPRs are not conserved in LERP. Additionally, the Drosophila genome lacks discernable homologs for genes encoding essential enzymes for the Man 6-P mark, the gamma subunits of the N-acetylglucosamine-1-phosphate transferase and the N-acetylglucosamine-1-phosphodiester alpha-N-acetylglucosaminidase uncovering enzyme.
This suggests that the Man 6-P-dependent sorting mechanism is absent in flies. Most recently, Kowalewski-Nimmerfall et al. reported that RNAi knockdown of LERP in S2 cells had only a small effect on the retention of the lysosomal enzyme cathepsin L and no effect on the retention of lysosomal CREG (cellular repressor of E1A-stimulated genes), leading them to suggest that LERP is not a universal sorting receptor for lysosomal proteins in flies (Kowalewski-Nimmerfall et al., 2014). To clarify these paradoxical results and to test the role of LERP in the whole fly, we generated a Lerp null Drosophila mutant and investigated the impact on development and on lysosomal enzyme sorting and lysosome-dependent phenotypes. We also analyzed the consequence of LERP knockdown in S2 cells on the trafficking of several lysosomal hydrolases. RESULTS Depletion of LERP in Drosophila melanogaster S2 cells To explore the possibility that LERP functions as a sorting receptor for lysosomal enzymes at the TGN, the consequence of LERP depletion was first studied in Drosophila S2 cells using RNAi-mediated knockdown. In these experiments, we would predict that loss of LERP would impair the lysosomal targeting of newly synthesized lysosomal enzymes. Additionally, it would lead to reduced intracellular levels of lysosomal enzymes due to enhanced cellular secretion via the constitutive secretory pathway. The S2 cells were treated with LERP dsRNA for five days, with fresh media added 16 h prior to harvesting the cells. Cell lysates were then prepared and aliquots of these lysates and media were assayed for their content of a panel of lysosomal glycosidases (Table 1). The mock-treated cells showed various degrees of glycosidase secretion over the 16 h collection period, ranging from 12% of total β-hexosaminidase to 95% of β-galactosidase. With the exception of a 19% increase in the secretion of β-glucuronidase, LERP depletion had no effect on the secretion of the other glycosidases tested relative to mock-treated cells. Furthermore, the cellular content of these glycosidases was unchanged relative to mock-treated cells, aside from a small decrease in cellular β-glucuronidase. The knockdown of LERP mRNA was >88% as determined by RT-PCR, while the depletion of LERP protein was confirmed by western blotting (Fig. 1A). Similar results were obtained with prolonged knockdown of nine days; the cellular content of β-glucuronidase was not decreased relative to the mock-treated cells (data not shown). In another experiment, the levels of cathepsin L, a lysosomal endopeptidase, were determined by western blotting. In both mock-treated and LERP-depleted cells, the cathepsin L precursor (∼45 kDa, inactive pre-lysosomal) and mature (∼30 kDa, lysosomal) forms were detected in the cell lysates (Fig. 1B). In media samples, however, only the precursor of cathepsin L was detected. Impaired lysosomal targeting of cathepsin L would shift the ratio of precursor to mature enzyme in the cells towards the precursor form and, in addition, increase the precursor levels in the media. However, no differences in cathepsin L sorting were observed after five or nine days of LERP depletion compared to mock-treated cells (Fig. 1B). To quantify the effect of LERP depletion on cathepsin L sorting, pulse-chase labeling experiments were performed. In both mock-treated and LERP-depleted S2 cells, 49% of cathepsin L was secreted into the culture medium (Fig. 1C). Taken together, these results are not consistent with a role for LERP as a universal receptor for lysosomal enzymes.
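The secretion values in Table 1 amount to partitioning each enzyme's total activity between the 16 h conditioned medium and the cell lysate. A minimal sketch of that bookkeeping follows; the activity numbers are placeholders chosen to reproduce the quoted extremes (12% for β-hexosaminidase, 95% for β-galactosidase), not the measured data:

```python
def percent_secreted(medium_activity, cell_activity):
    """Percentage of total enzyme activity recovered in the medium
    over the collection period."""
    return 100.0 * medium_activity / (medium_activity + cell_activity)

# Placeholder activities (arbitrary units), not the measured values:
glycosidases = {
    "beta-hexosaminidase": (12.0, 88.0),   # (medium, cells)
    "beta-galactosidase":  (95.0, 5.0),
}
for enzyme, (medium, cells) in glycosidases.items():
    print(f"{enzyme}: {percent_secreted(medium, cells):.0f}% secreted")
```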
Utilizing affinity chromatography, we attempted to directly test whether LERP binds lysosomal enzymes. A soluble form of LERP, expressed in Spodoptera frugiperda (Sf9) cells, was immobilized on an affinity resin and S2 cell lysates or medium were passed over the column as a source of Drosophila lysosomal enzymes. Using this system, we did not observe any binding of lysosomal enzymes, including β-hexosaminidase, β-glucuronidase, α-mannosidase, and β-mannosidase, to the immobilized LERP (data not shown). Because these studies regarding the role of LERP in Drosophila S2 cells were inconclusive, we focused on the intact organism. Our knockout strategy utilized ends-out homologous recombination based on Chen et al. (2009). A construct containing a miniwhite transgene, under the control of the Hsp70Aa promoter, and the coding sequence for enhanced yellow fluorescent protein (EYFP) was flanked by intron sequences derived from the Lerp locus; the Lerp sequences, in turn, were flanked by FRT sites for FLP recombinase and by target sites for the I-SceI megaendonuclease (Fig. 2A). The Lerp knockout targeting cassette was established as a transgene on the X chromosome. We used schemes (Fig. 2B) in which the targeting cassette is excised by FLP recombinase and the subsequent DNA circle is linearized by I-SceI. Candidates for Lerp knockout by homologous recombination were selected based on retention of eye pigmentation after loss of the X-linked donor transgene, and subsequent crosses showing linkage of the donor transgene to the third chromosome. Of ∼20,000 progeny scored, three candidates were identified from mobilization in the female germline and none from mobilization in the male germline, based on mobilization of the Hsp70-miniwhite marker to the third chromosome. For all three candidate Lerp knockouts, adults homozygous for the donor transgene were identified. We were able to confirm one line, Lerp F6, in which LERP was knocked out. Structure of the Lerp F6 allele To further analyze the knockout, genomic DNA sequence was obtained from Lerp F6 homozygous flies. Three libraries of DNA, starting with 0.4 kb, 3 kb and 10 kb average fragment length, were sequenced to a depth of 30× using the Illumina MiSeq. The sequence analysis of these data demonstrated that the Lerp F6 allele is the result of a partial internal duplication combined with an insertion that places a nearly intact copy of the donor sequence downstream of the intended target sequence. This results in duplications of the intronic sequences flanking the donor, as well as duplication of 3.5 exons from the 5′ cluster of Lerp exons (Fig. 3A). If the Lerp F6 allele were transcribed in its entirety, the exon splice that would join the Lerp exons flanking the donor sequence would create a reading frame shift and in-frame stop, predicting a truncated protein missing the transmembrane and cytoplasmic domains. [Fig. 2 legend: (A) The knockout construct (Chen et al., 2009) was established as a transgene on the X chromosome by P-element-mediated germline transformation. FLP recombinase catalyzes the excision of the knockout cassette and I-SceI cleavage creates a linear DNA fragment from the excised circle, which can then undergo homologous recombination with the targeted genomic DNA sequence to generate a Lerp mutation by homologous replacement. For simplicity, the representations of the two genes to either side of Lerp on the chromosome are omitted. (B) Schemes to isolate Lerp knockout mutations generated in the male (left) and female (right) germlines.]
Thus, the hypothetical protein product of Lerp F6 would not be functional, and as a truncated peptide, would likely be unstable. Microarray transcription profiling analysis of larval midgut tissue, where Lerp is normally very highly expressed (Brown et al., 2014; Dos Santos et al., 2015), indicates that Lerp transcripts containing the 3′ exon are detectable in mutant midgut cells at ca. 27% of wild-type levels (M.H. and J.C.E., unpublished data). LERP protein expression was tested using western blot analysis in gut tissue isolated from yw control and Lerp F6 homozygous mutant larvae (Fig. 3B). In the guts derived from yw control flies, LERP was detected as two bands between 100 and 150 kDa; these bands were not detected in Lerp F6 mutant guts. Thus, Lerp F6 mutant flies do not express detectable LERP protein. To test for possible semi-lethality associated with the Lerp null mutation, we crossed Lerp F6 /TM6C, Sb adults inter se and scored progeny. Because TM6C, Sb homozygotes are not viable, the expected Mendelian ratio is 2 Sb : 1 Sb+. The observed ratio shows a statistically significant (P=0.0004) deviation from the expected ratio, indicating semi-lethality associated with the Lerp F6 chromosome (Fig. 3C). To test if the observed semi-lethality maps to the Lerp F6 mutant allele, we tested whether semi-lethality occurs in flies hemizygous for Lerp F6 and either of two independently isolated chromosome deficiencies that contain a deletion spanning the Lerp locus. The hemizygous crosses show a return to a 2:1 ratio (Fig. 3C), indicating that the observed semi-lethality is not due to the loss of Lerp. Thus, Lerp is not an essential gene under standard laboratory culture conditions. Although viable and fertile, both homozygous and hemizygous adult mutant flies exhibit a small but statistically significant decrease of ca. 8% (P<0.001 and P<0.0001, respectively) in body mass relative to their respective genetic controls (Fig. 3D). There is no significant difference in third instar larval weight (supplementary material Fig. S1). Decrease in adult body mass is the only morphological phenotype observed in Lerp null flies. [Fig. 3 legend, in part: (B) Same blot probed with antibody to cytoplasmic actin as a loading control (bottom). (C) Crosses to test semi-lethality associated with the Lerp F6 allele. Homozygous Lerp F6 adults are recovered at significantly lower frequency (P=0.0004, Chi-squared test) compared to heterozygous sibs (first line), but hemizygous Lerp F6 adults appear at Mendelian frequencies compared to sibs carrying a wild-type Lerp allele (second and third lines). (D) Body weight of Lerp homozygous mutants (Lerp F6 /Lerp F6 ) and hemizygous mutants (Lerp F6 /Df(3R)BSC524), as compared to genetic controls, yw; Lerp F6 /+ and yw; Df(3R)BSC524/+. Body weight is expressed in mg/fly. The values are means±s.d. of twelve sets of 10 male flies; n=120; ***P<0.001; ****P<0.0001.] Cellular levels of lysosomal hydrolases are reduced in Lerp null tissue To determine whether the loss of LERP results in alterations in lysosomal enzyme content, we measured the activities of three lysosomal glycosidases in Lerp F6 mutant and yw control carcasses and hemolymph. Some cell types of mice deficient in the two MPRs are defective in sorting lysosomal enzymes and, as a result, most of the newly synthesized lysosomal enzymes expressed in those cells are secreted into the bloodstream (Dittmer et al., 1998).
If LERP functions to sort the lysosomal enzymes in an analogous manner, we would expect to find decreased levels of these enzymes in the carcass and increased levels in the hemolymph. As shown in Table 2, the levels of β-hexosaminidase, α-mannosidase and β-glucuronidase activity were decreased by 30-40% in the carcasses of Lerp F6 larvae compared to control, consistent with a role for LERP in sorting of lysosomal hydrolases to lysosomes. However, the activity of these hydrolases in the hemolymph of the Lerp mutant was also decreased relative to the controls. This indicates that the low level of glycosidases in the carcass is not the consequence of missorting into the hemolymph. We also measured the levels of the lysosomal protease cathepsin L in third instar Lerp F6 homozygous, Lerp F6 hemizygous, and yw control whole larvae by western blotting. This analysis showed a significant decrease in the level of the protease in both homozygous and hemizygous Lerp F6 mutant midgut relative to the control (Fig. 4). Specifically, the levels of the mature, or lysosomal, form of cathepsin L (∼35 kDa) are decreased in mutant cells. The cellular levels of the proforms of cathepsin L (∼50 kDa) are unchanged between the mutants and the control. The decrease in steady-state levels of the lysosomal hydrolases in the LERP-deficient cells could be the result of reduced synthesis. We determined the transcript levels of the unique lysosomal hydrolase gene Cp1 (which encodes cathepsin L) in mutant and control larval midguts. The values did not differ significantly when assayed by microarray analysis (M.H. and J.C.E., unpublished data). This suggests that the observed defects are not due to decreased expression of the genes encoding these enzymes. Lerp null adult flies are hypersensitive to dietary chloroquine Exposure of Drosophila to 10-20 mM dietary chloroquine, a drug that raises lysosomal pH and impairs lysosome hydrolytic activity, is lethal to wild-type flies over a period of days (Luan et al., 2012). We hypothesized that if loss of LERP impairs lysosomal activity, further impairment due to chloroquine exposure would enhance the lysosomal defect and result in enhanced lethality. To test this, crosses were set up to generate homozygous and hemizygous mutant flies and the corresponding genetic controls using standard Drosophila food. Newly eclosed flies were then transferred to instant Drosophila medium reconstituted with 20 mM chloroquine. The median survival time for homozygous and hemizygous Lerp F6 mutants was four and five days, respectively, compared to a median survival time for the controls of eight days (Fig. 5A,B). Thus, the survival time is significantly reduced in the Lerp mutants (P<0.0001), consistent with a role for LERP in lysosomal homeostasis. Lerp null adult flies have conditional phenotypes of autophagy defects Autophagy is a lysosome-mediated pathway that degrades cytoplasmic material and organelles (Eskelinen and Saftig, 2009). It is activated during stress conditions, including amino acid starvation, to help meet the minimum nutrient requirements of starving cells (Scott et al., 2004). We reasoned that if lysosomal activity is impaired in Lerp null flies, lysosome-mediated pathways, including autophagy, would also be impaired. To test this, autophagy was induced by maintaining newly eclosed flies on amino acid-deficient medium (Scott et al., 2004). Crosses were set up to generate homozygous and hemizygous mutant flies and corresponding genetic controls.
During amino acid starvation, the median survival time for homozygous Lerp F6 mutants and hemizygous Lerp mutants was 23 days and 25 days, respectively, compared to 30 and 32 days for the corresponding genetic controls (P<0.0001) (Fig. 5C,D). The reduced survival in the Lerp null flies is in agreement with impaired lysosome function in these flies. To further test the role of Lerp in autophagy, we examined the interaction of Lerp F6 with the autophagy-associated gene Blue cheese (Bchs). Overexpression of Bchs in the Drosophila eye causes a reduced eye phenotype, which is modified by mutations in genes thought to be involved in autophagy (Lim and Kraut, 2009; Simonsen et al., 2007). We tested the effects of loss of Lerp expression on the Bchs overexpression phenotype. The difference in eye size among control, homozygous and hemizygous mutant flies overexpressing Bchs was quantified by measuring the amount of red eye pigment in each genotype as an index of total eye volume. Lerp knockout in a Bchs-overexpressing background enhances the reduced eye phenotype, directly or indirectly implicating LERP in autophagy (Fig. 6). DISCUSSION RNAi knockdown of Lerp in Drosophila S2 cultured cells resulted in no significant reduction in cellular levels of five lysosomal glycosidases, nor in cellular levels of the lysosomal protease cathepsin L. This is consistent with a previous report, also based on RNAi knockdown in S2 cells, suggesting that LERP is not a universal sorting receptor for lysosomal proteins in flies (Kowalewski-Nimmerfall et al., 2014). However, it should be noted that Lerp expression is normally low to moderate in S2 cells (Cherbas et al., 2011; Dos Santos et al., 2015), so a strong but incomplete knockdown of Lerp may not result in measurable sorting defects. Our successful generation of a Lerp knockout mutant in Drosophila has allowed us to test the role of this transmembrane protein in development and in lysosome formation and function in an intact organism. We find that LERP is not essential for development or fertility under standard laboratory conditions, although growth is mildly impaired. The external appearance of Lerp F6 adults is normal. In particular, the compound eyes of newly eclosed flies are wild-type in appearance. This is notable in that Kametaka et al. (2010) reported that knockdowns of the σ, γ and µ subunits of the adaptor protein AP-1 in the developing eye result in a rough eye phenotype in adults. While AP-1 is believed to contribute to LERP-dependent sorting, the observation that Lerp null adults have normal eyes shows that the reported AP-1 knockdown phenotypes are LERP-independent. Since LERP is an ortholog of the CI-MPR and has been reported to partially rescue sorting of lysosomal hydrolases in MPR-deficient mammalian cells (Dennes et al., 2005), we were especially interested in determining whether the LERP mutant flies exhibited defects in lysosome biogenesis and function. Loss of the MPRs in mice results in a lysosomal storage phenotype in many tissues and increased levels of lysosomal enzymes in the serum (Dittmer et al., 1998, 1999). Loss of LERP in flies, however, results in only mild phenotypes under standard lab conditions. A moderate reduction in the level of mature cathepsin L was observed in the midgut. In addition, by assaying carcass tissue freed of hemolymph, we found that the LERP mutant had a 30-40% decrease in the level of several lysosomal glycosidases relative to wild-type flies.
However, the levels of these glycosidases were not increased in the hemolymph, indicating that the enzymes were not missorted into the hemolymph. We cannot exclude the possibility that the hydrolases are being missorted elsewhere. Drosophila larval midgut and Malpighian tubules, which express high levels of LERP, are composed of highly polarized cells (Tepass et al., 2001). Thus, the hydrolases might be missorted apically into the lumen of the gut and subsequently excreted. Regardless of the explanation for the decreased levels of lysosomal hydrolases in the LERP mutant, a key finding of this study is that Lerp mutant cells retain 60-70% of wild-type levels of α-mannosidase, β-glucuronidase, and β-hexosaminidase, and possibly other enzymes. These findings establish that acid hydrolases are trafficked to lysosomes in a LERP-independent manner. Since cellular lysosomal enzyme levels are reduced in Lerp mutants, we considered the possibility that lysosome-dependent processes, such as autophagy, might be impaired. That this is the case is supported by the observation that Lerp mutant flies are hypersensitive to amino acid starvation, consistent with inefficient autophagy. Further evidence of cellular lysosome impairment in Lerp null flies is indicated by the hypersensitivity of Lerp mutants to dietary chloroquine and the enhancement of the reduced eye phenotype in Bchs-overexpressing flies. Lerp is the only recognizable MPR ortholog in Drosophila. Why has it been conserved evolutionarily if it is not essential? It is likely that laboratory culture conditions do not adequately recapitulate the selective pressures experienced by wild flies. In particular, transient starvation is frequently experienced by animals in nature, so the hypersensitivity of Lerp null adults to amino acid starvation represents a conditional phenotype that could underlie an essential function for Lerp. The mechanism by which LERP influences lysosomal enzyme levels remains open. It should be noted that the mammalian CI-MPR binds multiple ligands in addition to lysosomal hydrolases (Ghosh et al., 2003). These include IGF-II, latent TGF-β1, retinoic acid and others. Since direct binding of lysosomal hydrolases to LERP has not been documented as yet, the possibility that LERP has an indirect effect on lysosome biogenesis cannot be excluded at this point. Future studies should first be aimed at defining the ligands for LERP. Once ligands are identified, biochemical and cell biology approaches can be used to determine the physiologic role of LERP. MATERIALS AND METHODS LERP knockdown in Drosophila S2 cells S2 cells were maintained at room temperature in Express Five SFM culture medium (Life Technologies) supplemented with 2 mM L-glutamine (Cellgro; Manassas, VA, USA), 100 U/ml penicillin and 100 μg/ml streptomycin (Life Technologies). To knock down LERP, two dsRNAs (∼670 and ∼800 nucleotide fragments) targeting different regions of the LERP mRNA were generated. First, total RNA was isolated from Drosophila S2 cells using TRIzol Reagent (Life Technologies) and cDNA was synthesized with the SuperScript II RT kit (Life Technologies) according to the manufacturer's protocols. PCR was performed with gene-specific primers flanked by the T7 RNA polymerase promoter sequence at the 5′-ends, as described in Rogers and Rogers (2008).
The following primers were used: LERP1-forward: 5′ TAA TAC GAC TCA CTA TAG GCC TGC AGG TGA CAA AAT GCG 3′ and reverse: 5′ TAA TAC GAC TCA CTA TAG GCT GCA ACT ATT GGA TTG TAG ACC CTC 3′, LERP2-forward: 5′ TAA TAC GAC TCA CTA TAG GCA GCT CGC ACT TTG CTT AAG GAT G 3′ and reverse: 5′ TAA TAC GAC TCA CTA TAG GCG TTG AGA GCT CCG AGG TGT TG 3′ and Rho1 (control dsRNA) forward: 5′ TAA TAC GAC TCA CTA TAG GTT TGT TTT GTG TTT AGT TCG GC 3′ and reverse: 5′ TAA TAC GAC TCA CTA TAG GAT CAA GAA CAA CCA GAA CAT CG 3′. In vitro transcription was performed with the MEGAscript RNAi kit (Ambion) as instructed by the manufacturer. In RNAi experiments, 2×10^6 S2 cells were transfected with 2 μg dsRNA using Lipofectamine Plus (Life Technologies) and analyzed 5 days later. Mock-treated and mock-depleted cells were transfected without the addition of dsRNA or with Rho1 dsRNA, respectively. The level of knockdown relative to GAPDH (primers Cat. #330001 PPD03944A, Qiagen) was determined by quantitative RT-PCR using SYBR green master mix (SA Biosciences) and 10 μM primers to LERP (Cat. #330001 PPD10274A, Qiagen). To evaluate the secretion of lysosomal enzymes into the culture medium, the cells were washed with PBS and incubated with fresh culture medium approximately 16 h before the analysis. The S2 cells were lysed in 1% Triton X-100/PBS containing a protease inhibitor cocktail (Complete, Roche) and the activities of β-hexosaminidase, β-glucuronidase, α-mannosidase, β-mannosidase and β-galactosidase were determined as described below. Pulse-chase labeling experiments were performed with S2 cells that were treated with LERP RNAi for 5 days or mock-treated, as described in van Meel et al. (2014) with minor modifications. The pulse labeling was performed in methionine/cysteine-free, serum-free DMEM supplemented with 18 mM L-glutamine for 20 min at room temperature. Cathepsin L was immunoprecipitated after a 4 h chase with the antibody (MAB22591) from R&D Systems, Inc. For western blot analysis, 15-20 μg of cell lysate was separated by SDS-PAGE on an 8% (in the case of LERP) or 12% (cathepsin L) Tris-glycine gel and subsequently transferred to 0.2 μm nitrocellulose membranes (Amersham Protran, GE Healthcare U.K. Limited). LERP was detected with the antiserum described below at a dilution of 1:1000-1:2000 and cathepsin L with an antibody from R&D Systems, Inc. (MAB22591) at a dilution of 1:1000. Secondary antibodies were donkey anti-rabbit or sheep anti-mouse IgG horseradish peroxidase-linked whole antibodies (GE Healthcare U.K. Limited), respectively, at a dilution of 1:2000. Production of recombinant LERP For production of antibodies to LERP, the LERP cDNA encoding amino acids 1-816, encompassing the luminal domain of the protein, was cloned into the baculovirus shuttle vector, pFastBac1, with the Flag epitope sequence appended to the 3′ end of the cDNA. Baculoviral bacmid DNA isolated from DH10Bac cells was transfected into Spodoptera frugiperda (SF9) insect cells adapted for growth in serum-free media (Life Technologies). Viral particles in the media were amplified for two rounds and subsequently used to infect SF9 cells for protein production. Since the LERP construct used here lacked the C-terminal transmembrane and cytoplasmic domains, the protein was secreted into the serum-free media.
The soluble LERP secreted into the media was purified on a Flag affinity column (Sigma), concentrated and used to generate antibodies as follows: approximately 100 µg of purified soluble LERP diluted in sterile saline was combined with 0.5 ml of complete Freund's adjuvant and injected subcutaneously into two rabbits. Two weeks following the first injection, booster shots of 50 µg were administered in incomplete Freund's adjuvant and repeated again after another two weeks. Rabbits were bled 6 weeks after the initial injection to check for antibody production and a terminal bleed was performed at 6 months. Strategy for Lerp targeted knockout The overall approach for targeted knockout is described in Chen et al. (2009) and the strategy design is cartooned in Fig. 2A. The donor cassette was flanked by Lerp genomic sequences 3R:22,684,293-3R:22,686,400 and 3R:22,677,487-3R:22,680,473. Ca. 2.6 kb upstream of the Lerp exons targeted for knockout (using primers forward: 5′ CGGCCTCGAGTGGCTCTCAGGACCATAATC 3′, reverse: 5′ CCAGCTAGCCAAAAAAAGCGAGGCCTGCGAAAAG 3′) was amplified from genomic DNA and cloned into the pXH87 vector with XhoI and NheI sites. Ca. 2.7 kb downstream of the Lerp exons targeted for knockout (using primers forward: 5′ CGACCGGTCTCGCAACCAGATTTCACCCAGGAC 3′, reverse: 5′ GCCGGTACCCAGATGAGCGGGGATGAGAGGAG 3′) was amplified from genomic DNA and cloned into the pXH87 vector with AgeI and KpnI sites. Plasmid DNA was sent to BestGene Inc. (Chino Hills, CA, USA) to generate transgenic flies. Transgenic flies were selected based on eye pigmentation conferred by the Hsp70-miniwhite gene in the donor cassette. Lerp F6 genome sequencing Genomic DNA was extracted from flies by homogenization in 100 mM Tris-HCl (pH 7.5)/100 mM EDTA/100 mM NaCl/0.5% SDS, followed by phenol extraction, chloroform extraction and ethanol precipitation. The genomic DNA was quantified using Qubit fluorometry (Life Technologies) and 4 μg was used as input to the Illumina Nextera XT library preparation protocol. Three libraries were prepared: 350 bp, 4 kbp, and 9 kbp. Tagmentation of gDNA and PCR amplification of tagged DNA were performed per the manufacturer's (Illumina) instructions. For the 350 bp library, PCR clean-up and library normalization steps were performed per the Illumina protocol. However, for the longer libraries, the PCR clean-up and library normalization steps were omitted and size selection was instead performed by running balanced and pooled samples in a 0.6% agarose gel. Gel fractions corresponding to 3-5 kb and 8-10 kb were removed and purified using the Zymoclean large-fragment DNA recovery kit. The size-selected DNA was circularized and remaining linear fragments were eliminated using exonuclease. The circularized fragments were fragmented using a Covaris sonicator. AMPure XP beads (Agilent Technologies) were used to purify the DNA and Illumina Truseq adapters were ligated to the ends of the DNA fragments. The fragments were captured on beads and emulsion PCR was performed per Illumina's protocol. 4 nM of beads were sequenced using paired-end 250 nucleotide reads on an Illumina MiSeq. For assembly and annotation, reads from all three libraries were assembled using the wild-type Drosophila genome (Celniker et al., 2002) as a reference in Illumina BaseSpace. The analysis of the disrupted Lerp locus was performed manually using the UCSC genome browser and custom scripts written to find all reads containing both eYFP and Lerp sequence and to align the Lerp portion of each read to the locus.
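The junction-read analysis described above, identifying reads that span the donor and the Lerp locus, can be pictured with a short script. The sketch below is a simplified, hypothetical stand-in for the custom scripts used in the study: the two 20-mer seeds and the FASTQ file name are placeholders, and a real analysis would also tolerate mismatches and use full read alignments.

```python
EYFP_SEED = "ATGGTGAGCAAGGGCGAGGA"   # placeholder eYFP 20-mer (assumed)
LERP_SEED = "TGGCTCTCAGGACCATAATC"   # placeholder Lerp 20-mer (assumed)

def revcomp(seq):
    """Reverse complement of a DNA string."""
    return seq.translate(str.maketrans("ACGTN", "TGCAN"))[::-1]

def contains(seq, seed):
    """True if seed occurs in the read on either strand."""
    return seed in seq or revcomp(seed) in seq

def junction_reads(fastq_path):
    """Yield IDs of reads whose sequence contains both seeds."""
    with open(fastq_path) as fh:
        while True:
            header = fh.readline()
            if not header:
                break
            seq = fh.readline().strip().upper()
            fh.readline()            # '+' separator line
            fh.readline()            # quality line
            if contains(seq, EYFP_SEED) and contains(seq, LERP_SEED):
                yield header.split()[0].lstrip("@")

# Example usage (placeholder file name):
# for read_id in junction_reads("LerpF6_reads.fastq"):
#     print(read_id)
```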
Measuring Drosophila body mass Lerp F6 virgin females were crossed to Lerp F6 males, yw males, and deficiency males (Df(3R)BSC524/T(2,3)CySerGFP), and yw virgin females were crossed to deficiency males to generate homozygous and hemizygous knockout and control flies. Immediately following eclosion, males were collected and aged for 24 h on standard Drosophila media. Measurements were recorded using 10 flies in a 1.5 ml Eppendorf tube per reading. Eppendorf tubes were pre-weighed and fly mass was determined by subtracting the mass of the Eppendorf tube alone from the total mass of the flies plus the tube. Lerp F6, yw, and Lerp F6 /Df(3R)BSC524 male and female third instar larvae were grown on instant Drosophila media (Carolina Biological Supply Company) reconstituted with a 0.05% Bromophenol Blue water solution (Sigma-Aldrich) and staged 6-12 h prior to pupariation (Andres and Thummel, 1994). Measurements were recorded using 5 larvae in a 1.5 ml Eppendorf tube per reading. Eppendorf tubes were pre-weighed and larval mass was determined by subtracting the mass of the Eppendorf tube alone from the total mass of the larvae plus the tube. Chloroquine survival curves Lerp F6 virgin females were crossed to yw males and deficiency males (Df(3R)BSC524/T(2,3)CySerGFP), and yw virgin females were crossed to deficiency males to generate homozygous and hemizygous knockout and control flies. Flies were raised on normal fly food until pupation, and then transferred onto chloroquine-containing media, which consists of 2 g instant Drosophila media (Carolina Biological Supply Company) reconstituted with 6 ml of 20 mM chloroquine (Sigma-Aldrich), 0.3% propionic acid, and 0.3% Tegosept. The number of surviving flies was recorded daily. Starvation test Flies were raised on normal fly food until pupation, and then transferred to amino acid-deficient food (3% agar, 5% sucrose, 0.3% methylparaben and 0.3% propionic acid in PBS). Adult males were collected within 6 h of eclosion and transferred to fresh amino acid-deprived food. The number of surviving flies was recorded daily. Lysosomal enzyme assays The activities of β-hexosaminidase, α-mannosidase and β-glucuronidase were determined in carcasses and hemolymph using 1 mM 4-methylumbelliferyl-conjugated specific substrates (Sigma) in 50 mM sodium citrate buffer containing 0.5% Triton X-100 (pH 4.6), as previously described (Lee et al., 2007). The hemolymph was collected from larvae by the following method: 100 µl of Ringer's solution was placed in a glass well chilled on ice. For each of ten consecutive larvae of each genotype, a small tear was made in the cuticle to release hemolymph into the Ringer's solution. After accumulating hemolymph from 10 larvae, the well contents were placed in a microfuge tube, centrifuged at top speed for 10 min at 4°C, and the cell-free supernatant collected for assay. For each genotype, three drained carcasses were pooled and homogenized in 500 μl 1% Triton X-100/PBS containing a protease inhibitor cocktail (Complete, Roche). 10 μl of the clarified lysate or 5 μl of the hemolymph was used in each reaction. All samples were assayed in duplicate and in total 12 sets of carcasses/hemolymph of yw and homozygous Lerp F6 larvae were assayed in four independent experiments. LERP western blotting Two midguts of wild-type or Lerp F6 homozygous third instar larvae were pooled and lysed in 200 μl 1% Triton X-100/PBS containing a protease inhibitor cocktail (Complete, Roche).
Approximately 1/10th of the clarified lysate was subjected to SDS-PAGE using a NuPAGE 4-12% Bis-Tris gel and NuPAGE MOPS SDS running buffer (Life Technologies), and the proteins were transferred to a polyvinylidene fluoride membrane (Millipore). LERP was detected with a rabbit antibody generated to a soluble form of the protein (lacking amino acids 817-886). Actin was detected with a rabbit anti-actin antibody from Sigma (A2066). Cathepsin L western blotting (tissue samples) Third instar wandering larvae were staged and single larvae were lysed in 250 μl 1% Triton X-100/PBS containing a protease inhibitor cocktail (Complete, Roche). A standard Lowry protein assay was performed and ∼10 μg of the clarified lysate was subjected to SDS-PAGE using a 10% Bis-Tris gel, and the proteins were transferred to a polyvinylidene fluoride membrane (Millipore). Tubulin antibody (1:3000) was purchased from Sigma (T9026); mouse anti-insect cathepsin L antibody (1:4000) was purchased from R&D Systems, Inc. (MAB22591). HRP-conjugated goat anti-mouse antibodies were purchased from Millipore. Analysis of Bchs overexpression eye phenotype Lerp F6 homozygous and hemizygous mutants were generated in a GMRGal4EP(2L)2299 background. Control flies were generated by crossing yw virgins with GMRGal4EP(2L)2299 males. Sons were collected and aged for three days before dissection. For each replicate, 10 fly heads were cut between the eyes and placed in 1 ml acidified ethanol (pH 2) for 24 h. Absorbance measurements on five replicates were taken at a wavelength of 480 nm.
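The semi-lethality test reported in the Results is a one-degree-of-freedom chi-squared goodness-of-fit test against the expected 2 Sb : 1 Sb+ ratio. A minimal sketch of that calculation follows; the progeny counts are placeholders chosen to give a P-value near the reported 0.0004, since the article lists only the statistic, not the raw counts.

```python
from scipy.stats import chisquare

def mendelian_2to1_test(n_sb, n_sb_plus):
    """Chi-squared test of observed Sb : Sb+ counts against the 2:1 ratio
    expected when TM6C, Sb homozygotes are inviable."""
    total = n_sb + n_sb_plus
    expected = [2.0 * total / 3.0, total / 3.0]
    return chisquare([n_sb, n_sb_plus], f_exp=expected)

# Placeholder counts for illustration only (not the study's data):
chi2, p = mendelian_2to1_test(n_sb=505, n_sb_plus=185)
print(f"chi2 = {chi2:.2f}, P = {p:.4f}")   # P comes out near the reported 0.0004
```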
2016-10-31T15:45:48.767Z
2015-09-24T00:00:00.000
{ "year": 2015, "sha1": "7ed6f431e28f979b3349f2f612e2abeebfd78807", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1242/bio.013334", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c4581f748f7e753a09bdc145de118f82863ecaad", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
49238003
pes2o/s2orc
v3-fos-license
Reverse Transcription Polymerase Chain Reaction in Giant Unilamellar Vesicles We assessed the applicability of giant unilamellar vesicles (GUVs) for RNA detection using in vesicle reverse transcription polymerase chain reaction (RT-PCR). We prepared GUVs that encapsulated a one-pot RT-PCR reaction mixture, including template RNA, primers, and Taqman probe, using the water-in-oil emulsion transfer method. After thermal cycling, we analysed the GUVs that exhibited intense fluorescence signals, which represented cDNA amplification. The detailed analysis of flow cytometry data demonstrated that rRNA and mRNA in the total RNA can be amplified from 10–100 copies in GUVs of 5–10 μm diameter, although the fraction of reactive GUVs was approximately 60% at most. Moreover, we report that target RNA directly transferred into the GUV reactors via membrane fusion can be amplified and detected using in vesicle RT-PCR. These results suggest that GUVs can be used as biomimetic reactors capable of performing PCR and RT-PCR, which are important in analytical and diagnostic applications, while offering additional functions. In earlier work, nucleic acid amplification was performed in small vesicles (200 nm diameter), and these vesicles were used to transfect culture cells. As the vesicles are present in a water environment, reaction products can be directly administered into biological systems without extraction, unlike in the case of water droplets in oil. Unlike small vesicles, giant vesicles (GVs) have sizes comparable to those of cells and of water droplets in digital microfluidics (>1 μm in linear size and >1 fL in volume). Therefore, GVs can harbour complex biochemical reaction systems composed of multiple components. For example, spherical containers of 100 fL volume (~3 μm radius) and 1 fL volume (~0.6 μm radius) can contain 60 and 0.6 molecules, respectively, of a chemical species at 1 nM concentration (typical for enzymes and other macromolecules). Therefore, the large size of GVs increases the reaction efficiency by suppressing stochasticity. To date, various multicomponent and multistep reactions such as the amplification of nucleic acids 27,28 as well as translation and transcription [29][30][31] have been demonstrated in GVs. However, most of these reactions were conducted under isothermal conditions at 37 °C. Although PCR is one of the most widely used techniques in biotechnology, its application in GVs has been poorly explored. In fact, very few studies have reported PCR in GVs. Recently, Shohda et al. 32 performed PCR of a 1229 bp DNA harbouring the green fluorescent protein (GFP) gene in giant multilamellar vesicles (MLVs) obtained by the freeze-dried empty liposome method. They found that the reaction efficiency was ~20% and ~80% in 2.7 μm (10 fL) and 10 μm (~500 fL) vesicles, respectively. They concluded that the apparently low reaction efficiency might be attributed to the multilamellar and multivesicular structures of the vesicles. The same group also conducted PCR in GVs formed by the natural swelling 33 and freeze-dried rehydration 34 methods in order to study the growth-division dynamics and its relevance to DNA amplification. However, in these studies, they focused on the prebiotic behavior of "artificial cells" rather than the quantitative aspects of PCR. We are developing a microreactor system using giant unilamellar vesicles (GUVs), which consist of a single lipid bilayer, similar to the plasma membrane of cells, and are obtained by the water-in-oil (W/O) emulsion transfer method.
This method was originally developed by Pautot et al. 35, can produce unilamellar vesicles of sizes up to ~100 μm in diameter 36, and exhibits 100% encapsulation efficiency for a large range of molecular sizes and concentrations of reagents. Recently, several groups demonstrated the production of uniform-sized GUVs by transferring uniform-sized emulsions generated by microfluidic devices [37][38][39][40][41][42][43][44][45]. Therefore, the W/O emulsion transfer method is suitable for constructing a well-defined reaction environment within a lipid bilayer. Moreover, GUVs can be used as a platform for dynamic microreactors in which addition and extraction of reagents can be performed via the fusion and division of membranes [46][47][48][49], similar to the vesicle trafficking that occurs in living cells. However, to the best of our knowledge, PCR in GUVs has not been reported to date. In this study, we assessed the applicability of GUVs produced using the W/O emulsion transfer method for performing in vesicle RT-PCR, which has practical importance for the detection of transcripts. The RT-PCR mixture containing the template mRNA, primer pair, and Taqman probe was encapsulated into GUVs for thermal cycling. Fluorescence microscopy images and flow cytometry analysis clearly indicated the amplification of cDNA synthesised from the template RNA. A serial dilution experiment with the synthetic template mRNA showed that RT-PCR was successfully conducted with a small number of RNA molecules. Results RT-PCR in GUVs using total RNA. Initially, we examined the conditions necessary to perform RT-PCR in GUVs using the total RNA extracted from human cell cultures. We prepared the one-pot reaction mixture including the total RNA, reverse transcriptase (RT), DNA polymerase (DNA pol), dNTPs, primer pair, Taqman probe conjugated with 6-carboxyfluorescein (FAM) reporter dye, and reaction buffer. Sucrose (200 mM) was added to the reaction buffer to increase the relative density during centrifugation. This solution was emulsified in the oil phase (liquid paraffin) by vortexing, with phospholipids as an emulsifier. Regarding lipids, we employed a mixture of 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC), 1-palmitoyl-2-oleoyl-sn-glycero-3-[phospho-rac-(1-glycerol)] (POPG), and cholesterol at an 18:2:1 weight ratio, identical to that used in our previous reports 28,[48][49][50]. In a test tube, this emulsion was layered onto another water phase that would become the outer solution. This mixture was centrifuged to allow the water droplets to pass through the lipid monolayer at the W/O interface and form lipid bilayer vesicles (Fig. 1). In RT-PCR, cDNAs are first synthesised from the RNA by reverse transcriptase, and these cDNAs are then amplified by PCR. We used a commercial one-pot RT-PCR kit, which enables combining the reaction mixtures of the two-step process into a single preparation. During the in vesicle reactions in our previous experiments, the outer solution of GUVs was the same as the buffer of the inner solution, to avoid generating concentration or osmotic gradients. In the present experiment, as the buffer components of the commercial kit were unknown, we employed a common buffer solution to perform PCR (see Methods).
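In the size-resolved analysis that follows, GUV size classes are expressed as sphere volumes (a 5 μm diameter sphere holds about 65 fL, a 10 μm sphere about 524 fL), and the expected molecule count per vesicle follows from concentration times volume, as in the 1 nM example given above. A minimal sketch of this arithmetic; the script itself is illustrative and not part of the study:

```python
import math

AVOGADRO = 6.022e23

def sphere_volume_fl(diameter_um):
    """Volume of a sphere in femtolitres (1 um^3 = 1 fL)."""
    return math.pi * diameter_um**3 / 6.0

def molecules_per_vesicle(conc_molar, volume_fl):
    """Expected number of molecules of a species at conc_molar in volume_fl."""
    return conc_molar * AVOGADRO * volume_fl * 1e-15

print(f"5 um GUV:  {sphere_volume_fl(5):.0f} fL")    # ~65 fL
print(f"10 um GUV: {sphere_volume_fl(10):.0f} fL")   # ~524 fL
print(f"1 nM species in 100 fL: "
      f"{molecules_per_vesicle(1e-9, 100):.0f} molecules")  # ~60
```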
Generally, the formation efficiency of GUVs in hydration-based methods deteriorates as the ionic concentration of the buffer increases (the present buffer contains 50 mM KCl and 1.5 mM MgCl2), but it was possible to obtain unilamellar vesicles without notable negative effects by using the W/O emulsion transfer method (Fig. 2, left panels). We included a lipophilic dye, 1,1′-dioctadecyl-3,3,3′,3′-tetramethylindocarbocyanine perchlorate (DiI; DiIC18(3)), to visualize the membrane under a microscope. The fluorescence images demonstrated that the membrane was almost unilamellar, without complex internal membrane structures. The size of the GUVs ranged from one to several tens of micrometers in the fluorescence images. Importantly, the GUVs were kept on ice during all the preparation steps to suppress the PCR reaction. We quantified the amplification of target cDNA using the 5′ nuclease qPCR assay with Taqman probes. In this assay, an oligonucleotide probe with a FAM dye at the 5′-end and a quencher at the 3′-end, bound to the target sequence, is cleaved by the endogenous 5′ nuclease activity of Taq DNA polymerase during each extension cycle. Therefore, as each quencher molecule is separated, the FAM dye fluoresces to report the amplification of the target DNA. In our first trial, we encapsulated the total RNA (at a final concentration of 2 ng/μL) extracted from human culture cells as a template, together with a primer pair and probe for eukaryotic 18S rRNA (chosen owing to the abundance of 18S rRNA in total RNA), along with the RT-PCR mixture. Prior to performing the in vesicle reaction, we prepared GUVs encapsulating an RT-PCR product from a reaction completed in the test tube. The fluorescence images of these control GUVs exhibited intense green fluorescence from FAM solely within the internal volume, confirming that the amplification was detectable by microscopy (Supplementary Figure S1). Further, we conducted the RT reaction (42 °C for 5 min and 95 °C for 10 s) followed by thermal cycling (95 °C for 5 s and 60 °C for 34 s, for 20 or 40 cycles) using GUVs that encapsulated the unreacted mixture. The fluorescence intensity did not increase in the control GUVs without the total RNA as template, although primers and probes were present (Fig. 2, top panels). However, when the total RNA was included, intense green fluorescence was observed, especially in the large GUVs (>5 μm diameter) (Fig. 2, bottom panels). The number of fluorescent GUVs increased as the number of thermal cycles increased from 20 to 40. Flow cytometry analysis. We performed flow cytometry analysis of GUV populations to quantify the fraction of fluorescent GUVs and its dependence on GUV size (i.e. reaction volume). To this end, we included a fluorescently tagged protein (transferrin from human serum, Alexa Fluor 647 conjugate; TA647) in the reaction mixture as a volume marker, instead of the membrane dye. We prepared 25 μL of GUV suspension that was subjected to thermal cycling in a test tube. Then, 10 μL of the resultant suspension was sampled, diluted 20-fold with the outer solution, and analysed by flow cytometry. Four signals, i.e. forward scattering (FSC), side scattering (SSC), and the fluorescence intensities of FAM (IFAM) and TA647 (ITA647), were recorded. Then, we constructed the 2D scatter plot of the IFAM and ITA647 signals for ~7,000 GUVs (Fig. 3A).
In these plots, the vertical axis (ITA647) is proportional to the reaction volume of the GUV, and the horizontal axis (IFAM) is proportional to the amount of amplified DNA. Prior to the RT-PCR reaction, the population lay along a line rising to the right with a slope of approximately 1, irrespective of the presence or absence of the template RNA (left panels in Fig. 3A). This result was similar to those of our previous reports [51][52][53]. Ideally, IFAM should be negligible irrespective of the reaction volume; this linear dependence might be due to non-specific weak fluorescence from the uncleaved probe, and these plots are regarded as the background. After 40 thermal cycles, a distinct subpopulation with prominently high IFAM appeared in the template RNA-positive GUVs (Fig. 3A, bottom right), whereas the overall distribution remained almost unchanged in the control GUVs without RNA. On closer inspection, there is a noticeable decrease in the relative density of large GUVs (ITA647 > 3 × 10^5) and a corresponding increase in the small GUV population (ITA647 ≈ 10^4-10^5). We observed a similar decrease in the signals with large FSC values, which indicate the size of the particles (Supplementary Figure S2). These results indicated that relatively large GUVs tend to break or rupture into small GUVs, or to lose their internal contents, including the marker molecules, by leakage. Generally, there is a volume dependency in biochemical reactions at small scales, mainly because of the stochastic encapsulation of low-concentration molecules and/or the depletion of molecules onto/through the interface. Low amplification efficiency was also reported for PCR in small vesicles (<1 μm diameter) 25,32. To elucidate this effect, we analysed the distribution of IFAM after subdividing the populations into distinct ranges of the reaction volume (V). The reaction volume was estimated from the calibration curve relating ITA647 to the number of molecules, obtained from the measurement of a set of calibration beads (Supplementary Figure S3; for more details, see Materials and Methods in Fujii et al. 50). This volume conversion revealed that the number of GUVs from 5 × 10^2 fL (~10 μm diameter) to 10^4 fL (~30 μm diameter) decreased by up to 70%, but the number of GUVs smaller than this range did not decrease after 40 thermal cycles (Supplementary Figure S4). As seen in Fig. 3A (the bottom right panel), a substantial shift in IFAM is observed after RT-PCR in the GUVs with ITA647 > 5 × 10^3 a.u. In the calibration curve, this intensity value with 1 μM Alexa 647 corresponds to V = 65 fL (the volume of a 5 μm sphere). Therefore, we analysed the IFAM distribution after segregating the GUVs into 65 fL < V < 524 fL (524 fL being the volume of a 10 μm sphere) and V > 524 fL groups, in addition to the sum of these populations (V > 65 fL). To eliminate the intrinsic linear dependence of IFAM on ITA647, we plotted the histograms of IFAM normalized by ITA647 on a logarithmic scale (log10 IFAM/ITA647) (Fig. 3B). This value corresponds to the internal concentration of the fluorescent FAM probe. In all three histograms, the IFAM/ITA647 distributions of RNA-negative GUVs after RT-PCR were identical to those before RT-PCR (RNA-negative as well as RNA-positive GUVs), indicating that non-specific amplification did not occur in these minute compartments. In contrast, for RNA-positive GUVs, bimodal peaks were observed in the whole population (Fig. 3B, left panel).
Regarding the subpopulations, the second peak with high IFAM/ITA647 values was much more prominent in the large GUVs (V > 524 fL; Fig. 3B, right panel) than in the small GUVs (65 fL < V < 524 fL; Fig. 3B, middle panel). In mammalian cells, total RNA consists of 80% rRNA, 1-5% mRNA, and 10-15% tRNA by mass 54. The present result suggests that it is possible to amplify the abundant rRNA in total RNA, at a concentration recommended by the manufacturer of the RT-PCR kit (10 pg to 100 ng in 25 μL), in GUVs with V > 524 fL. Further, in order to verify whether an mRNA present at low copy number in the total RNA could be amplified in the GUVs, we chose a housekeeping gene, β-actin. We used the same concentration of total RNA as during the rRNA amplification and included the primer pair and probe for β-actin. The real-time PCR amplification curves after the RT reaction in tubes (Supplementary Figure S5) demonstrated that the copy number of β-actin mRNA was lower than that of rRNA. After encapsulating this reaction mixture into the GUVs, thermal cycling was performed. As depicted in the fluorescence images in Fig. 4, green fluorescence appeared solely in the GUV population containing the total RNA, but at a lower frequency compared to that with the rRNA probes (Fig. 2). In this case, the fraction of GUVs containing amplified DNA was too low to be detected in the flow cytometry analysis. The observed occurrence of in vesicle amplification should reflect the stochastic encapsulation of target mRNA molecules as templates. Efficiency of in vesicle RT-PCR using synthetic mRNA template. In the in vesicle RT-PCR using the total RNA, amplification of the mRNA encapsulated in the GUVs was possible even at a low copy number. To quantitatively estimate the minimum detectable number of mRNA molecules, we conducted in vesicle RT-PCR using synthetic mRNA encapsulated at defined copy numbers. We prepared RT-PCR mixtures containing 0, 1.6 × 10^−3, 1.6 × 10^−2, 0.16, and 1.6 ng/μL of β-actin mRNA, as these dilutions correspond to 0, 0.1, 1, 10, and 100 copies of mRNA, respectively, in a spherical GUV with 5 μm diameter (65 fL). These mixtures were encapsulated into GUVs and subjected to thermal cycling. The fluorescence images of GUVs that encapsulated 0.16 ng/μL (10 copies/65 fL) mRNA before and after RT-PCR are presented in Fig. 5. It was clear that IFAM was detected solely in the GUVs that encapsulated synthetic mRNA after 40 thermal cycles, and the proportion of DNA-amplified GUVs increased with the concentration of mRNA (Supplementary Figure S6). Further, we analysed these GUV populations by flow cytometry. In the absence of mRNA (0 ng/μL; Fig. 6), no deviating subpopulation was observed relative to the original population in the 2D scatter plot of IFAM and ITA647. As the concentration of encapsulated mRNA increased, the number of GUVs exhibiting intense IFAM gradually increased. At 1.6 × 10^−3 and 1.6 × 10^−2 ng/μL mRNA, a few scattered dots with high IFAM values can be seen. At 0.16 ng/μL mRNA and above, we observed a distinct subpopulation in numbers comparable to those of the original population. We analysed the IFAM/ITA647 distribution after subdividing the population into GUVs with 65 fL < V < 524 fL and V > 524 fL groups (Fig. 7A).
Although the distributions of GUVs containing 1.6 × 10^−3 and 1.6 × 10^−2 ng/μL mRNA were similar to that with no mRNA, a subpopulation with high IFAM/ITA647 values appeared in the distributions of GUVs with 0.16 and 1.6 ng/μL mRNA. Under these conditions, distinct bimodal peaks were observed in the GUV subpopulation with V > 524 fL (Fig. 7A, right panel). There is a slight upward shift in frequency for the 1.6 × 10^−2 ng/μL mRNA sample in the range of these second peaks for GUVs with V > 524 fL. Moreover, it is noteworthy that in the V > 524 fL subpopulation, the distributions of GUVs with 1.6 × 10^−3 and 1.6 × 10^−2 ng/μL mRNA were similar. The presence of this concentration-insensitive population indicates that these GUVs had lost their amplification ability. Based on the aforementioned analysis, we calculated the reaction efficiency, i.e. the fraction of cDNA-amplified (fluorescent) GUVs. We set the threshold for non-amplified GUVs at the IFAM/ITA647 value below which 95% of the GUVs from the mRNA-negative control fell after thermal cycling. The dependence of the reaction efficiency on the template RNA concentration is presented in Fig. 7B. The number of cDNA-amplified GUVs did not increase in the V > 65 fL population at 0.1 copy/65 fL, while it increased slightly, by 3%, at 1 copy/65 fL, and solely in the V > 524 fL population. At 10 and 100 copies/65 fL, the overall reaction efficiency in the V > 65 fL population was 25 and 38%, respectively, whereas that in the V > 524 fL population was 47 and 64%, respectively. RT-PCR in GUVs after electrofusion. One of the predictable advantages of lipid vesicle reactors over solid microchambers and W/O emulsion droplets is their structural similarity to biological membranes. In living cells, lipid vesicles are ubiquitously present to encapsulate and transport bioactive molecules during membrane trafficking. Analogous to these phenomena, we envisioned that vesicle-based reactors could incorporate the internal contents of membrane-enveloped biological samples, such as cells, organelles, and extracellular vesicles, via membrane fusion 46,55. In this strategy, we expected that the membrane-encapsulated nucleic acids in intact biological samples could be directly transferred into the PCR or RT-PCR mixture in GUVs for detection and quantification, without extraction processes. As a proof-of-concept experiment, we assessed whether RT-PCR could be performed after transferring the template RNA via membrane fusion between GUVs. We prepared two GUV populations that contained either the reaction mixture (enzymes) or the template total RNA (4 ng/μL) (Fig. 8A). The primer pair and probe for rRNA were included in both populations. The former population was marked with the membrane dye (DiI), while the latter was marked with the internal phase marker (TA647), to distinguish between these populations under the microscope. Here, 1,2-dioleoyl-sn-glycero-3-phosphoethanolamine (DOPE) was added to the lipid mixture, as phosphatidylethanolamines, with their inverse cone-shaped molecular structure, are reported to enhance membrane fusion by accumulating at the stalk of the fusion intermediate 56. There was no noticeable difference in the GUV formation process owing to the addition of 5% (w/w) DOPE. After mixing the suspensions of these two GUV populations at a 1:1 volume ratio, the resultant suspension was introduced into an electrofusion cuvette with a 1 mm electrode gap.
Electrofusion of the membranes was induced by applying short DC pulses (6 kV/cm, 60 μs, three times) after a 1 MHz AC signal (15 V/cm, 15 s). Fluorescence images of GUVs before and after electrofusion, and after thermal cycling, are presented in Fig. 8B. Before electrofusion, the two distinct populations, with either DiI-stained membranes or TA647-stained internal volumes, were clearly visible at similar frequencies (Fig. 8B, left panel). After electrofusion, GUVs carrying both the membrane and the internal markers appeared at a certain frequency (Fig. 8B, middle panel; indicated by arrows). These GUVs were mostly flaccid, indicating that when two or more spherical vesicles fuse, the fused vesicle has a larger membrane area than a sphere of identical total volume. After RT-PCR, we observed intense FAM fluorescence in a portion of the flaccid GUVs, the result of cDNA amplification in fused GUVs (Fig. 8B, right panel).

Discussion

In this study, we demonstrated that reverse transcription and amplification of transcripts from total RNA, as well as from synthetic mRNA, can be conducted in GUVs. Although the number of large GUVs (>10 μm diameter) decreased significantly during thermal cycling, we obtained a consistent amplification probability (fraction of fluorescent GUVs) that depended on the copy number of the encapsulated RNA templates in the remaining GUVs. Notably, the amplification probability in GUVs >5 μm in diameter was high enough to be detected in the population analysis by flow cytometry, and it reached up to 64% in GUVs >10 μm in diameter. Assuming that 80% of the total RNA is rRNA, its number density at 2 ng/μL was calculated to be ~26 copies/65 fL using the total nucleotide count of a ribosome's rRNA (7,180 nt). Furthermore, the typical number of β-actin mRNAs per single human cell has been reported to be ~1,000 copies 57. Therefore, assuming an amount of total RNA and a volume of a single cell of 20 pg and 1 pL, respectively, the number density of β-actin transcripts in 2 ng/μL total RNA was calculated to be 6.5 × 10⁻³ copy/65 fL (both estimates are retraced in the sketch after this passage). In the experiment using synthetic mRNA, we observed a fluorescent GUV population prominently distinct from the control (RNA-negative) population when the concentration of target RNA was >10 copies/65 fL. The results obtained using total RNA were therefore consistent with those obtained using synthetic mRNA. We may conclude that our system is able to detect target RNA at 10-100 copies from total RNA in GUVs of 5-10 μm diameter. Reverse-transcription droplet digital PCR (RT-ddPCR; Bio-Rad) kits are now commercially available for the absolute counting of virions and transcripts, so it is likely that the limit of detection (LOD) in GUVs can be further improved by optimizing the reaction system. However, the amplification probability did not reach 100% even at high template concentrations (rRNA, and 10 or 100 copies/65 fL of synthetic mRNA). It remained at 64% in the large GUVs (V > 524 fL) with 100 copies/65 fL mRNA, implying that roughly a third of the GUVs were non-reactive. Previously, Shohda et al. 32 reported that 20% of their GUV population was non-reactive even under template-rich conditions. As they used vesicles obtained by the freeze-dried empty liposome method, they concluded that the loss of efficiency arose from a substantially reduced apparent reaction volume owing to subpartitioning in multilamellar vesicles (MLVs).
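The two number-density estimates above can be retraced numerically. A minimal sketch, assuming an average single-stranded RNA mass of ~330 Da per nucleotide (our assumption; the paper does not state the value used):

```python
AVOGADRO = 6.022e23

# rRNA: 80% of 2 ng/uL total RNA; ~7,180 nt of rRNA per ribosome.
# 330 Da per nucleotide is an assumed average ssRNA mass.
rrna_mw = 7180 * 330
rrna_g_per_L = 0.8 * 2e-3                        # 2 ng/uL == 2e-3 g/L
rrna_per_65fL = rrna_g_per_L / rrna_mw * AVOGADRO * 65e-15
print(f"rRNA: ~{rrna_per_65fL:.0f} copies/65 fL")           # ~26

# beta-actin: ~1,000 transcripts and ~20 pg total RNA per cell, so
# 2 ng/uL of total RNA carries 100 cell-equivalents per microlitre.
actin_per_uL = 1000 * (2000 / 20)                # 2 ng == 2,000 pg
actin_per_65fL = actin_per_uL / 1e9 * 65         # 1 uL == 1e9 fL
print(f"beta-actin: ~{actin_per_65fL:.1e} copies/65 fL")    # ~6.5e-3
```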
In this study, we expected that the amplification probability might reach 100% in GUVs, because isothermal reactions such as gene expression occurred in most of the GUVs in our previous report 53. We presume that small molecules (possibly dNTPs, primers, and probes) present solely inside the GUVs may have leaked out, because the permeability of a lipid bilayer membrane increases with temperature owing to the loosened molecular packing of the lipids 58,59. Because we identified large GUVs by their high content of TA647 (a labelled transferrin), large molecules such as the enzymes must have remained inside the GUVs. Confirming these hypotheses will require further experiments, such as real-time observation under the microscope during thermal cycling. Just as the discovery of thermostable polymerases enabled PCR, improving the thermostability of the lipid membrane can be expected to provide a more reliable reaction environment; the use of bolalipids (tetraether lipids) found in archaea 58,60 or of amphiphilic block copolymers may be a promising solution. Moreover, the amplification of target transcripts incorporated directly via membrane fusion demonstrated a characteristic capability of this biomimetic reaction compartment. In the conventional RT-PCR procedure, active RNA may be degraded during the extraction, purification, and preparation steps. In principle, our approach could overcome this problem and lead to a highly sensitive detection system with minimal sample loss and a simplified procedure. Although challenges remain in improving the efficiency, the demonstration in this study that PCR and RT-PCR can be conducted in GUVs highlights the possibility of using these biomimetic compartments to develop nucleic acid detection systems without a bulk extraction procedure.

RT-PCR solution. We used a one-step real-time RT-PCR kit (RR064A; Takara Bio Inc., Shiga, Japan) supplemented with TaqMan probes and primers (Thermo Fisher Scientific Inc., Waltham, MA, USA) as the GUV inner solution. We employed TaqMan probes (FAM/MGB) for rRNA and β-actin. Additionally, the reaction buffer supplied with the kit was supplemented with 200 mM sucrose to create a density gradient with respect to the outer solution (200 mM glucose) during centrifugation. As template RNA, we used total RNA extracted from human cultured cells (Agilent Technologies, Santa Clara, CA, USA) and synthetic β-actin mRNA (Nippon Gene, Tokyo, Japan). The final concentration of total RNA was adjusted to 2 ng/μL, and that of synthetic β-actin mRNA (1,874 nt, MW 6.2 × 10⁵) was adjusted to 0, 1.6 × 10⁻³, 1.6 × 10⁻², 0.16, and 1.6 ng/μL, corresponding to 0, 0.1, 1, 10, and 100 copies, respectively, in a GUV of 5 μm diameter (65 fL). For flow cytometry analysis, TA647 (Thermo Fisher Scientific Inc.) was included at 1 μM in the inner solution.

Preparation of GUVs. GUVs containing the RT-PCR mixture were prepared by the W/O emulsion transfer method 35,50. POPC, POPG (Avanti Polar Lipids, Alabaster, AL, USA), and cholesterol (Nacalai Tesque, Kyoto, Japan) at 18:2:1 (w/w), or POPC, POPG, DOPE (Avanti Polar Lipids), and cholesterol at 17:2:1:1 (w/w), were dissolved in chloroform. Liquid paraffin (Wako Pure Chemical Industries, Osaka, Japan) was added to this solution to adjust the final concentration of the lipid mixture to 2.1 mg/mL.
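As a small illustration of the recipe arithmetic, splitting the total lipid mass according to the stated w/w ratios shows that the DOPE-containing mixture corresponds to the "5% (w/w) DOPE" mentioned earlier. This is a sketch under the stated ratios; the helper name is ours:

```python
def lipid_masses(total_mg: float, ratios: dict[str, float]) -> dict[str, float]:
    """Split a total lipid mass according to a w/w ratio recipe."""
    total_parts = sum(ratios.values())
    return {name: total_mg * part / total_parts for name, part in ratios.items()}

# 2.1 mg of lipid per mL of liquid paraffin, POPC:POPG:DOPE:cholesterol = 17:2:1:1
mix = lipid_masses(2.1, {"POPC": 17, "POPG": 2, "DOPE": 1, "cholesterol": 1})
print({name: round(mg, 3) for name, mg in mix.items()})
# DOPE comes to 0.1 mg of 2.1 mg, i.e. ~4.8% (w/w) -- the "5% DOPE" in the text
```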
To allow observation under a microscope, the lipophilic dye DiI (0.05%, w/w; Thermo Fisher Scientific Inc.) was included in the lipid mixture. The liquid paraffin solution was warmed to 80 °C for 30 min to remove residual chloroform. After adding 25 μL of the RT-PCR mixture, 400 μL of the liquid paraffin lipid solution was vortexed to obtain a W/O emulsion. This emulsion was layered on 400 μL of outer aqueous solution containing 10 mM Tris-HCl (pH 8.3), 1.5 mM MgCl2, 50 mM KCl, and 200 mM glucose in a test tube. The two-layered solution was centrifuged at 20,630 × g and 4 °C for 20 min to pellet GUVs at the bottom of the tube. The suspension of precipitated GUVs (~100 μL) was collected through a hole pierced in the bottom of the test tube. Finally, after adding 400 μL of outer solution, the GUV suspension was centrifuged again at 20,630 × g and 4 °C for 10 min, and the supernatant was removed to obtain a concentrated GUV suspension (~35 μL).

Thermal cycling. GUV suspension (20 μL) placed in a PCR tube was subjected to RT-PCR using a qPCR system (Mx3005P, Agilent Technologies). The thermal conditions were: 42 °C for 5 min, 95 °C for 10 s, and [95 °C for 5 s and 60 °C for 34 s] × 40 cycles. The same thermal conditions were applied to 20 μL of PCR solution without GUVs to perform PCR in a test tube for the post-encapsulation experiment (Figure S1) and for checking the amplification curves (Figure S5).

Flow cytometry analysis. Quantitative analysis of RT-PCR in individual GUVs was conducted on an Attune NxT flow cytometer (Thermo Fisher Scientific Inc.). The TaqMan probe and TA647 were excited by 488 nm and 638 nm lasers, respectively, to obtain their fluorescence intensities. The number of TA647 molecules in an individual GUV was calculated from the TA647 fluorescence intensity (I_TA647) using a calibration curve relating fluorescence intensity to the number of Alexa 647 molecules attached to calibration beads (Alexa Fluor 647 MESF calibration beads; Bangs Laboratories, Inc., IN, USA). The GUV volume was then calculated assuming that 1 μM TA647 was encapsulated at 100% efficiency; the final conversion equation was V (fL) = (I_TA647 × 9.1779)/602 (sketched in code at the end of this section). Flow cytometry data were analysed with FlowJo software (Tomy Digital Biology Co., Ltd., Tokyo, Japan).

Electrofusion of GUVs. Two GUV populations were prepared with a lipid mixture of POPC, POPG, DOPE, and cholesterol (17:2:1:1, w/w) for the fusion assay. One population encapsulated the enzymes (RT and DNA polymerase) without RNA, while the other encapsulated the total RNA and TA647 without enzymes. The primer pair and probe for rRNA were included in both populations. The membranes of the former GUVs were stained with DiI. To compensate for dilution after fusion, the total RNA and enzymes were encapsulated at twice the usual concentrations. A one-to-one volume mixture (30 μL in total) of the two GUV suspensions was introduced into an electrofusion cuvette with a 1 mm electrode gap connected to an Electro Cell Fusion Generator (LF201; Nepa Gene Co., Ltd., Chiba, Japan). An alternating current (15 V/cm, 15 s) was applied to induce pearl-chain alignment, and then short direct-current pulses (6 kV/cm, 60 μs, three times) were applied to induce membrane fusion.

Data availability. The datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request.
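The intensity-to-volume conversion in the flow cytometry analysis follows directly from the 1 μM TA647 assumption: at 1 μM, one femtolitre contains ≈602 molecules, which is the denominator in V (fL) = (I_TA647 × 9.1779)/602. A minimal sketch (the example intensity is hypothetical, not a measured value):

```python
# At 1 uM: 1e-6 mol/L * 6.022e23 molecules/mol * 1e-15 L/fL ~= 602 molecules/fL
MOLECULES_PER_FL_AT_1UM = 1e-6 * 6.022e23 * 1e-15

def guv_volume_fL(i_ta647: float, mesf_slope: float = 9.1779) -> float:
    """GUV volume from TA647 intensity.

    mesf_slope converts intensity to a molecule count via the Alexa 647
    MESF calibration beads (the 9.1779 factor quoted in the Methods).
    """
    n_molecules = i_ta647 * mesf_slope
    return n_molecules / MOLECULES_PER_FL_AT_1UM

# Size gates used in the analysis: 65 fL (5 um) and 524 fL (10 um diameter)
v = guv_volume_fL(4.0e3)        # hypothetical intensity value
print(f"V = {v:.0f} fL; large GUV: {v > 524}")
```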
Medicine

Wm. J. Mayo (Journ. Amer. Med. Assn., March 1916) discusses shortly the possible relationships between the spleen and the liver, and gives his results obtained by splenectomy in the various forms of enlargement of the spleen. The spleen has a close and distinct relationship to the liver. It sends the material it takes from the blood to the liver; the spleen hypertrophies secondary to cirrhotic processes in the liver, and in Banti's disease (the terminal stage of splenic anaemia) the liver becomes cirrhosed. Probably the spleen conserves what is obtained from broken-down red cells and sends it to the liver for further use; it also acts as a sieve for parasites and products of degeneration, and passes them on to the liver for conservation. The spleen also has a relationship to the blood: in the foetus it is a blood-forming organ; at birth it loses the power to produce red cells, and afterwards it destroys worn-out red cells. When the spleen becomes enlarged, Mayo suggests, this may be a work hypertrophy, as seen in splenic anaemia and haemolytic jaundice. In primary tuberculosis of the spleen its removal has effected cure in a few instances. In 3 cases Mayo removed very large spleens from cases of chronic syphilis with marked anaemia; ordinary specific treatment had not had the desired effect, but following splenectomy marvellous improvement in the anaemia took place. He removed the spleen in 7 cases of chronic recurring septic conditions; the benefits were not marked, such cases being liable to be carried off by cardio-renal insufficiency. (b) Splenic Enlargements in Association with Hepatic Disease. In primary biliary cirrhosis of young adults in which the spleen is also enlarged, splenectomy has been followed by marked benefit and even cure. In 4 cases of portal cirrhosis of the liver with greatly enlarged spleen, Mayo removed the spleen with great improvement in 3 of the cases, the ascites and anaemia soon disappearing. (c) Enlargements in Blood Conditions. Cases of splenic anaemia are cured in most instances by removal of the spleen, and in several instances Mayo has obtained cure in the advanced stages of the condition, i.e. after the liver had become cirrhosed and ascites established. In 3 cases of Gaucher's disease in which the spleen was of enormous size Mayo performed splenectomy, with cure. In haemolytic jaundice, an anaemia of splenic origin with acholuric jaundice, Mayo has performed splenectomy in 9 cases, and with striking results: in 24 hours the jaundice begins to disappear, and in a few days the complexion is clear; the anaemia soon disappears, and the cure is permanent. Mayo has removed an enlarged spleen in 19 patients with pernicious anaemia, and he is of the opinion that splenectomy, if performed in the early stages, will permanently check, if not cure, the condition.

Petersan is inclined to think that at present the procedure of transfusion is often adopted without discrimination. He believes transfusion ought to be employed in (1) cases of haemorrhages of various degrees and types; (2) selected cases of anaemia; and (3) selected toxic and septic cases. Petersan's 27 recorded cases belonged to the first category of simple and pathological haemorrhage, the latter including haemophilic and purpuric cases. In anaemia from simple loss of blood, e.g.
following trauma, transfusion of blood is always efficacious and prompt. In cases where the haemorrhage cannot be reached without operation, e.g. gastric or duodenal ulcer or typhoid fever, transfusion does well, and, as a rule, it is better to wait till bleeding has ceased before transfusing. In chronic post-haemorrhagic anaemia following repeated losses of blood, e.g. haemorrhoids or epistaxis, transfusion is easily the best remedy, and here it is better to employ several small transfusions than a single large one. All 7 of Petersan's cases of this group were successful. In haemophilia, while the disease is not cured, transfusion offers the best results in helping those who have lost a quantity of blood; the transfused blood introduces into the patient an excess of the elements necessary for coagulation. Petersan's 6 cases were markedly benefited. In purpuric conditions transfusion is of great service. He reports one case of haemorrhagica neonatorum cured, and says that, from the experience of others, transfusion has reduced the death-rate from 60 per cent. to 5 or 10 per cent. In idiopathic purpura there is a tendency to uncontrollable haemorrhage. Petersan treated 8 cases by transfusion: two died, one of acute nephritis months after the last transfusion, and one was moribund at the time the treatment was given; in 3 cases a single transfusion cured the condition; and in 2 cases, where death was threatening from profound anaemia, transfusion was successful in saving them. In secondary haemorrhagic diseases, e.g. the haemorrhages seen in uterine conditions, sepsis, nephritis, blood diseases, etc., transfusion is useful in controlling the bleedings. In conclusion, he points out that in transfusion a short-time method is essential, as transfusion is often required in an emergency.

The Pituitary Body and Renal Function

Motzfeldt of Christiania (Boston Med. and Surg. Journ., 4th May 1916) contributes further information on the relations of the pituitary gland to renal function. Clinical cases of diabetes insipidus treated with pituitary extract, and observations on normal individuals given pituitary, have led him to the conclusion that the pituitary gland controls the amount of urine passed and also certain of its solid constituents. In 3 cases of diabetes insipidus he was able to show that administration of extract of the posterior lobe of the pituitary gland considerably diminished the quantity and increased the concentration of the urine. In these cases, as röntgenograms showed, there was no tumour of the gland, and he concludes that his cases were the result of a hypofunctional activity of the hypophysis. This was corroborated also by the fact that their symptoms answered to the four cardinal symptoms laid down by Cushing as indicating a lowered functional activity of the posterior lobe of the pituitary. Motzfeldt is of the opinion that the majority of cases of diabetes insipidus are due to this deficiency of the posterior lobe. Following his clinical cases he made numerous observations on normal individuals, and ascertained that injection of extract of the posterior lobe had a distinct effect on the quantity of urine passed. In each case the diuresis was diminished, in most cases to a considerable extent, and the concentration of the urine was increased both as to nitrogen and sodium chloride. The effect of a single dose is not very lasting, and is not influenced by sex, age, or disease, and the results are obtained without any change in blood-pressure.
This influence of the posterior lobe of the pituitary gland on the renal functions may be brought about in various ways: (1) directly on the kidneys through the circulatory system; (2) through influences on the nervous system (autonomic or sympathetic); (3) through influence on other internal secreting organs. Assuming, Motzfeldt says, that all three factors are in action, the influence may be supposed to be exerted directly on the secreting epithelium of the kidneys or on the renal vessels.

Diagnosis, Symptomatology, and Therapy of Dilatation and Aneurysms of the Descending Thoracic Aorta

Neuhof (Amer. Journ. Med. Sci., May 1916) says that the object of his paper is to suggest that such aneurysms are of greater frequency than is supposed, that they have a definite symptomatology, and that the condition is a clinical entity. He describes 5 cases with